Subject: Re: [PATCH 5/7] SUNRPC: Refresh rq_pages using a bulk page allocator
From: Alexander Duyck
Date: Fri, 12 Mar 2021 10:44:08 -0800
To: Mel Gorman
Cc: Andrew Morton, Chuck Lever, Jesper Dangaard Brouer, Christoph Hellwig,
    Matthew Wilcox, LKML, Linux-Net, Linux-MM, Linux-NFS
In-Reply-To: <20210312154331.32229-6-mgorman@techsingularity.net>
References: <20210312154331.32229-1-mgorman@techsingularity.net>
 <20210312154331.32229-6-mgorman@techsingularity.net>

On Fri, Mar 12, 2021 at 7:43 AM Mel Gorman wrote:
>
> From: Chuck Lever
>
> Reduce the rate at which nfsd threads hammer on the page allocator.
> This improves throughput scalability by enabling the threads to run
> more independently of each other.
>
> Signed-off-by: Chuck Lever
> Signed-off-by: Mel Gorman
> ---
>  net/sunrpc/svc_xprt.c | 43 +++++++++++++++++++++++++++++++------------
>  1 file changed, 31 insertions(+), 12 deletions(-)
>
> diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
> index cfa7e4776d0e..38a8d6283801 100644
> --- a/net/sunrpc/svc_xprt.c
> +++ b/net/sunrpc/svc_xprt.c
> @@ -642,11 +642,12 @@ static void svc_check_conn_limits(struct svc_serv *serv)
>  static int svc_alloc_arg(struct svc_rqst *rqstp)
>  {
>          struct svc_serv *serv = rqstp->rq_server;
> +        unsigned long needed;
>          struct xdr_buf *arg;
> +        struct page *page;
>          int pages;
>          int i;
>
> -        /* now allocate needed pages. If we get a failure, sleep briefly */
>          pages = (serv->sv_max_mesg + 2 * PAGE_SIZE) >> PAGE_SHIFT;
>          if (pages > RPCSVC_MAXPAGES) {
>                  pr_warn_once("svc: warning: pages=%u > RPCSVC_MAXPAGES=%lu\n",
> @@ -654,19 +655,28 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
>                  /* use as many pages as possible */
>                  pages = RPCSVC_MAXPAGES;
>          }
> -        for (i = 0; i < pages ; i++)
> -                while (rqstp->rq_pages[i] == NULL) {
> -                        struct page *p = alloc_page(GFP_KERNEL);
> -                        if (!p) {
> -                                set_current_state(TASK_INTERRUPTIBLE);
> -                                if (signalled() || kthread_should_stop()) {
> -                                        set_current_state(TASK_RUNNING);
> -                                        return -EINTR;
> -                                }
> -                                schedule_timeout(msecs_to_jiffies(500));
> +
> +        for (needed = 0, i = 0; i < pages ; i++)
> +                if (!rqstp->rq_pages[i])
> +                        needed++;

I would use opening and closing braces for the for loop since
technically the if is a multiline statement. It will make this more
readable.

> +        if (needed) {
> +                LIST_HEAD(list);
> +
> +retry:

Rather than open-coding a while loop here, why not just make this
"while (needed)"? Then all you have to do is break out of the for
loop and you will automatically return here, instead of having to
jump to two different labels.

> +                alloc_pages_bulk(GFP_KERNEL, needed, &list);

Rather than ignoring the return value, would it make sense to
subtract it from needed? Then you would know whether any of the
allocation requests went unfulfilled.
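Something like this, untested, and assuming the bulk allocator returns
the number of pages it managed to put on the list:

                needed -= alloc_pages_bulk(GFP_KERNEL, needed, &list);

That way a shortfall is visible immediately instead of only being
discovered when the list runs dry part way through rq_pages.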
> +                for (i = 0; i < pages; i++) {

It is probably optimizing for the exception case, but I don't think
you want the "i = 0" here. If you are having to stop because the list
is empty, it probably makes sense to resume where you left off. So
you should probably initialize i to 0 before the needed check.

> +                        if (!rqstp->rq_pages[i]) {

It might be cleaner here to just do a "continue" if rq_pages[i] is
already populated.

> +                                page = list_first_entry_or_null(&list,
> +                                                                struct page,
> +                                                                lru);
> +                                if (unlikely(!page))
> +                                        goto empty_list;

I think I preferred the original code, which didn't jump away from
the loop here. With the change I suggested above that switches the
if (needed) to while (needed), you could just break out of the for
loop and land back in the while loop. A rough sketch of what I mean
is at the end of this mail.

> +                                list_del(&page->lru);
> +                                rqstp->rq_pages[i] = page;
> +                                needed--;
>                          }
> -                        rqstp->rq_pages[i] = p;
>                  }
> +        }
>          rqstp->rq_page_end = &rqstp->rq_pages[pages];
>          rqstp->rq_pages[pages] = NULL; /* this might be seen in nfsd_splice_actor() */
>
> @@ -681,6 +691,15 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
>          arg->len = (pages-1)*PAGE_SIZE;
>          arg->tail[0].iov_len = 0;
>          return 0;
> +
> +empty_list:
> +        set_current_state(TASK_INTERRUPTIBLE);
> +        if (signalled() || kthread_should_stop()) {
> +                set_current_state(TASK_RUNNING);
> +                return -EINTR;
> +        }
> +        schedule_timeout(msecs_to_jiffies(500));
> +        goto retry;
>  }
>
>  static bool
> --
> 2.26.2
>
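To make the above concrete, here is a rough, untested sketch of what I
have in mind, just layering my suggestions on top of your patch. The
declarations and the surrounding function are as in your version, and
I am assuming alloc_pages_bulk() returns the number of pages it added
to the list:

        for (needed = 0, i = 0; i < pages; i++) {
                if (!rqstp->rq_pages[i])
                        needed++;
        }

        i = 0;
        while (needed) {
                LIST_HEAD(list);

                /* Track the shortfall directly from the bulk allocator. */
                needed -= alloc_pages_bulk(GFP_KERNEL, needed, &list);

                /* Resume filling slots where the previous pass stopped. */
                for (; i < pages; i++) {
                        if (rqstp->rq_pages[i])
                                continue;

                        page = list_first_entry_or_null(&list,
                                                        struct page, lru);
                        if (unlikely(!page))
                                break;

                        list_del(&page->lru);
                        rqstp->rq_pages[i] = page;
                }

                if (!needed)
                        break;

                /* Partial allocation: back off briefly, then retry. */
                set_current_state(TASK_INTERRUPTIBLE);
                if (signalled() || kthread_should_stop()) {
                        set_current_state(TASK_RUNNING);
                        return -EINTR;
                }
                schedule_timeout(msecs_to_jiffies(500));
        }

That drops both the retry and empty_list labels while keeping the same
sleep-and-retry behaviour when an allocation pass comes up short.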