Date: Wed, 10 Mar 2021 15:47:04 -0800
From: Andrew Morton
To: Mel Gorman
Cc: Chuck Lever, Jesper Dangaard Brouer, Christoph Hellwig, LKML,
 Linux-Net, Linux-MM, Linux-NFS
Subject: Re: [PATCH 0/5] Introduce a bulk order-0 page allocator with two in-tree users
Message-Id: <20210310154704.9389055d0be891a0c3549cc2@linux-foundation.org>
In-Reply-To: <20210310104618.22750-1-mgorman@techsingularity.net>
References: <20210310104618.22750-1-mgorman@techsingularity.net>
On Wed, 10 Mar 2021 10:46:13 +0000 Mel Gorman wrote:

> This series introduces a bulk order-0 page allocator with sunrpc and
> the network page pool being the first users.

Right now, the [0/n] doesn't even tell us that it's a performance
patchset!  The whole point of this patchset only appears in the final
paragraph of the final patch's changelog.

: For XDP-redirect workload with 100G mlx5 driver (that use page_pool)
: redirecting xdp_frame packets into a veth, that does XDP_PASS to create
: an SKB from the xdp_frame, which then cannot return the page to the
: page_pool. In this case, we saw[1] an improvement of 18.8% from using
: the alloc_pages_bulk API (3,677,958 pps -> 4,368,926 pps).

Much more detail on the overall objective and the observed results,
please?

Also, that workload looks awfully corner-casey.  How beneficial is this
work for more general and widely-used operations?

> The implementation is not particularly efficient and the intention is
> to iron out what the semantics of the API should be for users.  Once
> the semantics are ironed out, it can be made more efficient.

And some guesstimates about how much benefit remains to be realized
would be helpful.
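
For readers skimming the thread, the caller-side difference under
discussion is replacing N separate trips through the page allocator
with a single bulk call that fills a list with order-0 pages.  What
follows is a minimal sketch only: the alloc_pages_bulk() prototype
(gfp mask, requested count, list to fill, returning the number of
pages actually allocated) is assumed from the cover letter's
description rather than copied from the patches, and both helper
names are made up for illustration.

/*
 * Hedged sketch, not taken from the series itself; error unwinding of a
 * partial batch is omitted for brevity.
 */
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/mm_types.h>

/* Today: one round trip through the page allocator per page. */
static int fill_one_by_one(struct list_head *pages, unsigned long nr)
{
	unsigned long i;

	for (i = 0; i < nr; i++) {
		struct page *page = alloc_page(GFP_KERNEL);

		if (!page)
			return -ENOMEM;	/* caller frees any partial batch */
		list_add(&page->lru, pages);
	}
	return 0;
}

/* With the series: one call amortises the per-allocation overhead. */
static int fill_in_bulk(struct list_head *pages, unsigned long nr)
{
	/* Assumed prototype, for illustration only. */
	unsigned long allocated = alloc_pages_bulk(GFP_KERNEL, nr, pages);

	return allocated == nr ? 0 : -ENOMEM;
}

Whether that per-call saving matters much outside the page_pool/XDP
fast path is exactly the question raised above.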