From: Uladzislau Rezki
Date: Wed, 5 Mar 2025 18:27:48 +0100
To: Ryosuke Yasuoka
Cc: maarten.lankhorst@linux.intel.com, mripard@kernel.org, tzimmermann@suse.de,
	airlied@gmail.com, simona@ffwll.ch, kraxel@redhat.com,
	gurchetansingh@chromium.org, olvaffe@gmail.com, akpm@linux-foundation.org,
	urezki@gmail.com, hch@infradead.org, dmitry.osipenko@collabora.com,
	jfalempe@redhat.com, dri-devel@lists.freedesktop.org,
	linux-kernel@vger.kernel.org, virtualization@lists.linux.dev,
	linux-mm@kvack.org
Subject: Re: [PATCH drm-next 1/2] vmalloc: Add atomic_vmap
References: <20250305152555.318159-1-ryasuoka@redhat.com>
	<20250305152555.318159-2-ryasuoka@redhat.com>
In-Reply-To: <20250305152555.318159-2-ryasuoka@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

On Thu, Mar 06, 2025 at 12:25:53AM +0900, Ryosuke Yasuoka wrote:
> Some drivers can use vmap in drm_panic, however, vmap is sleepable and
> takes locks. Since drm_panic will vmap in panic handler, atomic_vmap
> requests pages with GFP_ATOMIC and maps KVA without locks and sleep.
> 
> Signed-off-by: Ryosuke Yasuoka 
> ---
>  include/linux/vmalloc.h |   2 +
>  mm/internal.h           |   5 ++
>  mm/vmalloc.c            | 105 ++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 112 insertions(+)
> 
> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> index 31e9ffd936e3..c7a2a9a1976d 100644
> --- a/include/linux/vmalloc.h
> +++ b/include/linux/vmalloc.h
> @@ -190,6 +190,8 @@ void * __must_check vrealloc_noprof(const void *p, size_t size, gfp_t flags)
>  extern void vfree(const void *addr);
>  extern void vfree_atomic(const void *addr);
>  
> +extern void *atomic_vmap(struct page **pages, unsigned int count,
> +			 unsigned long flags, pgprot_t prot);
>  extern void *vmap(struct page **pages, unsigned int count,
>  		  unsigned long flags, pgprot_t prot);
>  void *vmap_pfn(unsigned long *pfns, unsigned int count, pgprot_t prot);
> diff --git a/mm/internal.h b/mm/internal.h
> index 109ef30fee11..134b332bf5b9 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -1278,6 +1278,11 @@ int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
>  void free_zone_device_folio(struct folio *folio);
>  int migrate_device_coherent_folio(struct folio *folio);
>  
> +struct vm_struct *atomic_get_vm_area_node(unsigned long size, unsigned long align,
> +					  unsigned long shift, unsigned long flags,
> +					  unsigned long start, unsigned long end, int node,
> +					  gfp_t gfp_mask, const void *caller);
> +
>  struct vm_struct *__get_vm_area_node(unsigned long size,
> 				     unsigned long align, unsigned long shift,
> 				     unsigned long flags, unsigned long start,
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index a6e7acebe9ad..f5c93779c60a 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1945,6 +1945,57 @@ static inline void setup_vmalloc_vm(struct vm_struct *vm,
>  	va->vm = vm;
>  }
>  
> +static struct vmap_area *atomic_alloc_vmap_area(unsigned long size,
> +				unsigned long align,
> +				unsigned long vstart, unsigned long vend,
> +				int node, gfp_t gfp_mask,
> +				unsigned long va_flags, struct vm_struct *vm)
> +{
> +	struct vmap_node *vn;
> +	struct vmap_area *va;
> +	unsigned long addr;
> +
> +	if (unlikely(!size || offset_in_page(size) || !is_power_of_2(align)))
> +		return ERR_PTR(-EINVAL);
> +
> +	if (unlikely(!vmap_initialized))
> +		return ERR_PTR(-EBUSY);
> +
> +	va = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
> +	if (unlikely(!va))
> +		return ERR_PTR(-ENOMEM);
> +
> +	/*
> +	 * Only scan the relevant parts containing pointers to other objects
> +	 * to avoid false negatives.
> +	 */
> +	kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask);
> +
> +	addr = __alloc_vmap_area(&free_vmap_area_root, &free_vmap_area_list,
> +				 size, align, vstart, vend);
> +
> +	trace_alloc_vmap_area(addr, size, align, vstart, vend, addr == vend);
> +
> +	va->va_start = addr;
> +	va->va_end = addr + size;
> +	va->vm = NULL;
> +	va->flags = va_flags;
> +
> +	vm->addr = (void *)va->va_start;
> +	vm->size = va_size(va);
> +	va->vm = vm;
> +
> +	vn = addr_to_node(va->va_start);
> +
> +	insert_vmap_area(va, &vn->busy.root, &vn->busy.head);
> +
> +	BUG_ON(!IS_ALIGNED(va->va_start, align));
> +	BUG_ON(va->va_start < vstart);
> +	BUG_ON(va->va_end > vend);
> +
> +	return va;
> +}
> +
>  /*
>   * Allocate a region of KVA of the specified size and alignment, within the
>   * vstart and vend. If vm is passed in, the two will also be bound.
> @@ -3106,6 +3157,33 @@ static void clear_vm_uninitialized_flag(struct vm_struct *vm)
>  	vm->flags &= ~VM_UNINITIALIZED;
>  }
>  
> +struct vm_struct *atomic_get_vm_area_node(unsigned long size, unsigned long align,
> +		unsigned long shift, unsigned long flags,
> +		unsigned long start, unsigned long end, int node,
> +		gfp_t gfp_mask, const void *caller)
> +{
> +	struct vmap_area *va;
> +	struct vm_struct *area;
> +
> +	size = ALIGN(size, 1ul << shift);
> +	if (unlikely(!size))
> +		return NULL;
> +
> +	area = kzalloc_node(sizeof(*area), gfp_mask, node);
> +	if (unlikely(!area))
> +		return NULL;
> +
> +	size += PAGE_SIZE;
> +	area->flags = flags;
> +	area->caller = caller;
> +
> +	va = atomic_alloc_vmap_area(size, align, start, end, node, gfp_mask, 0, area);
> +	if (IS_ERR(va))
> +		return NULL;
> +
> +	return area;
> +}
> +
>  struct vm_struct *__get_vm_area_node(unsigned long size,
>  		unsigned long align, unsigned long shift, unsigned long flags,
>  		unsigned long start, unsigned long end, int node,
> @@ -3418,6 +3496,33 @@ void vunmap(const void *addr)
>  }
>  EXPORT_SYMBOL(vunmap);
>  
> +void *atomic_vmap(struct page **pages, unsigned int count,
> +		  unsigned long flags, pgprot_t prot)
> +{
> +	struct vm_struct *area;
> +	unsigned long addr;
> +	unsigned long size;		/* In bytes */
> +
> +	if (count > totalram_pages())
> +		return NULL;
> +
> +	size = (unsigned long)count << PAGE_SHIFT;
> +	area = atomic_get_vm_area_node(size, 1, PAGE_SHIFT, flags,
> +				       VMALLOC_START, VMALLOC_END,
> +				       NUMA_NO_NODE, GFP_ATOMIC,
> +				       __builtin_return_address(0));
> +	if (!area)
> +		return NULL;
> +
> +	addr = (unsigned long)area->addr;
> +	if (vmap_pages_range(addr, addr + size, pgprot_nx(prot),
> +			     pages, PAGE_SHIFT) < 0) {
> +		return NULL;
> +	}
> +
> +	return area->addr;
> +}
> +
>  /**
>   * vmap - map an array of pages into virtually contiguous space
>   * @pages: array of page pointers
> -- 
> 2.48.1
> 
This is mostly copy-pasted code, which makes it odd. The proposal is not a way forward, in my opinion.
Unfortunately, vmalloc is not compatible with GFP_ATOMIC: there is at least one place, the allocation of page-table entries, where GFP_KERNEL is hard-coded. And doing this without locks and synchronization is not possible.

--
Uladzislau Rezki