Date: Fri, 14 Aug 2020 18:40:20 +0100
From: Matthew Wilcox
To: Minchan Kim
Cc: Andrew Morton, linux-mm, Joonsoo Kim, Vlastimil Babka, John Dias,
 Suren Baghdasaryan, pullip.cho@samsung.com
Subject: Re: [RFC 0/7] Support high-order page bulk allocation
Message-ID: <20200814174020.GX17456@casper.infradead.org>
References: <20200814173131.2803002-1-minchan@kernel.org>
In-Reply-To: <20200814173131.2803002-1-minchan@kernel.org>

On Fri, Aug 14, 2020 at 10:31:24AM -0700, Minchan Kim wrote:
> There is special HW that requires bulk allocation of high-order
> pages. For example, 4800 * order-4 pages.

... but you haven't shown that user.

> int alloc_pages_bulk(unsigned long start, unsigned long end,
>                      unsigned int migratetype, gfp_t gfp_mask,
>                      unsigned int order, unsigned int nr_elem,
>                      struct page **pages);
>
> It will scan the range [start, end) and migrate movable pages out of
> it on a best-effort basis (in upcoming patches) to create free pages
> of the requested order.
>
> The allocated pages are returned via the pages parameter. The return
> value is the number of requested-order pages obtained; it may be less
> than the nr_elem the user asked for.

I don't understand why a user would need to know the PFNs to allocate
between.  This seems like something that's usually specified by
GFP_DMA32 or similar.

Is it useful to return fewer pages than requested?
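
For reference, a rough sketch of what a caller of the proposed interface
might look like, going only by the prototype and semantics quoted above.
alloc_pages_bulk() as used here exists only in this RFC, and the PFN range
parameters, the use of MIGRATE_MOVABLE for the migratetype argument, and
GFP_KERNEL are illustrative guesses, not taken from the patches:

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Hypothetical driver helper: try to grab nr_elem order-4 pages from
 * the PFN range [start_pfn, end_pfn), per the cover letter's
 * description of the proposed alloc_pages_bulk().
 */
static int example_fill_pool(unsigned long start_pfn, unsigned long end_pfn,
			     struct page **pages, unsigned int nr_elem)
{
	unsigned int got, i;

	got = alloc_pages_bulk(start_pfn, end_pfn, MIGRATE_MOVABLE,
			       GFP_KERNEL, 4, nr_elem, pages);

	/*
	 * Per the description above, the return value is the number of
	 * order-4 pages actually obtained and may be less than nr_elem.
	 * This caller treats a partial allocation as failure and gives
	 * the pages back.
	 */
	if (got < nr_elem) {
		for (i = 0; i < got; i++)
			__free_pages(pages[i], 4);
		return -ENOMEM;
	}

	return 0;
}

Which also bears on the question above: if a caller that can't use a
partial allocation just frees the pages again, the short-count return
doesn't buy it much.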