Date: Thu, 19 Jan 2023 12:16:12 +0200
From: Mike Rapoport <rppt@kernel.org>
To: "Matthew Wilcox (Oracle)"
Cc: linux-mm@kvack.org, Andrew Morton
Subject: Re: [PATCH 1/5] mm: Add vma_alloc_zeroed_movable_folio()
References: <20230116191813.2145215-1-willy@infradead.org>
 <20230116191813.2145215-2-willy@infradead.org>
In-Reply-To: <20230116191813.2145215-2-willy@infradead.org>

On Mon, Jan 16, 2023 at 07:18:09PM +0000, Matthew Wilcox (Oracle) wrote:
> Replace alloc_zeroed_user_highpage_movable(). The main difference is
> returning a folio containing a single page instead of returning the
> page, but take the opportunity to rename the function to match other
> allocation functions a little better and rewrite the documentation
> to place more emphasis on the zeroing rather than the highmem aspect.
>
> Signed-off-by: Matthew Wilcox (Oracle)
> ---
>  arch/alpha/include/asm/page.h   |  5 ++---
>  arch/arm64/include/asm/page.h   |  4 ++--
>  arch/arm64/mm/fault.c           |  4 ++--
>  arch/ia64/include/asm/page.h    | 14 ++++++--------
>  arch/m68k/include/asm/page_no.h |  5 ++---
>  arch/s390/include/asm/page.h    |  5 ++---
>  arch/x86/include/asm/page.h     |  5 ++---
>  include/linux/highmem.h         | 33 ++++++++++++++++-----------------
>  mm/memory.c                     | 16 ++++++++++------
>  9 files changed, 44 insertions(+), 47 deletions(-)

...
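For readers following along, the caller-side conversion is mechanical. A rough
sketch of the usual anonymous-fault pattern after this change (my own
illustration, not copied from the mm/memory.c hunk; 'vma', 'vmf' and the 'oom'
label stand for whatever context the caller already has):

	struct folio *folio;

	/* Allocate one zeroed, movable page for this fault address. */
	folio = vma_alloc_zeroed_movable_folio(vma, vmf->address);
	if (!folio)
		goto oom;

	/*
	 * The folio holds a single page; &folio->page is what the old
	 * alloc_zeroed_user_highpage_movable() used to return.
	 */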
> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
> index 56703082f803..9fa462561e05 100644
> --- a/include/linux/highmem.h
> +++ b/include/linux/highmem.h
> @@ -208,31 +208,30 @@ static inline void clear_user_highpage(struct page *page, unsigned long vaddr)
>  }
>  #endif
>
> -#ifndef __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
> +#ifndef vma_alloc_zeroed_movable_folio
>  /**
> - * alloc_zeroed_user_highpage_movable - Allocate a zeroed HIGHMEM page for a VMA that the caller knows can move
> - * @vma: The VMA the page is to be allocated for
> - * @vaddr: The virtual address the page will be inserted into
> + * vma_alloc_zeroed_movable_folio - Allocate a zeroed page for a VMA.
> + * @vma: The VMA the page is to be allocated for.
> + * @vaddr: The virtual address the page will be inserted into.
>  *
> - * Returns: The allocated and zeroed HIGHMEM page
> + * This function will allocate a page suitable for inserting into this
> + * VMA at this virtual address. It may be allocated from highmem or
> + * the movable zone. An architecture may provide its own implementation.
>  *
> - * This function will allocate a page for a VMA that the caller knows will
> - * be able to migrate in the future using move_pages() or reclaimed
> - *
> - * An architecture may override this function by defining
> - * __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE and providing their own
> - * implementation.
> + * Return: A folio containing one allocated and zeroed page or NULL if
> + * we are out of memory.
>  */
> -static inline struct page *
> -alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
> +static inline
> +struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
>  				   unsigned long vaddr)
>  {
> -	struct page *page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, vaddr);
> +	struct folio *folio;
>
> -	if (page)
> -		clear_user_highpage(page, vaddr);
> +	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, vaddr, false);

Add __GFP_ZERO and simply return vma_alloc_folio(...)? A rough sketch of what
I mean is at the end of this mail.

> +	if (folio)
> +		clear_user_highpage(&folio->page, vaddr);
>
> -	return page;
> +	return folio;
>  }
>  #endif

--
Sincerely yours,
Mike.
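P.S. To make the suggestion above concrete, the simplification I have in mind
for the generic helper would look roughly like this (untested sketch, only to
illustrate letting the page allocator do the zeroing):

	#ifndef vma_alloc_zeroed_movable_folio
	static inline
	struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
						     unsigned long vaddr)
	{
		/*
		 * __GFP_ZERO makes the allocator hand back an already zeroed
		 * page, so the explicit clear_user_highpage() call goes away.
		 */
		return vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0,
				       vma, vaddr, false);
	}
	#endif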