From: Uladzislau Rezki
Date: Fri, 21 May 2021 15:07:18 +0200
To: Andrew Morton
Cc: Matthew Wilcox, Andrew Morton, linux-mm@kvack.org, LKML, Mel Gorman,
	Christoph Hellwig, Nicholas Piggin, Hillf Danton, Michal Hocko,
	Oleksiy Avramchenko, Steven Rostedt
Subject: Re: [PATCH] mm/vmalloc: Fallback to a single page allocator
Message-ID: <20210521130718.GA17882@pc638.lan>
References: <20210521111033.2243-1-urezki@gmail.com> <20210521125509.GA2442@pc638.lan>
In-Reply-To: <20210521125509.GA2442@pc638.lan>

On Fri, May 21, 2021 at 02:55:09PM +0200, Uladzislau Rezki wrote:
> > On Fri, May 21, 2021 at 01:10:33PM +0200, Uladzislau Rezki (Sony) wrote:
> > > +static inline unsigned int
> > > +vm_area_alloc_pages(gfp_t gfp, int nid, unsigned int page_order,
> > > +	unsigned long nr_small_pages, struct page **pages)
> >
> > (at least) two tabs here, please, otherwise the argument list is at
> > the same indentation as the code which trips up my parser. some people
> > like to match the opening bracket, but that always feels like more work
> > than it's worth. fwiw, i'd format it like this:
> >
> > static inline unsigned int vm_area_alloc_pages(gfp_t gfp, int nid,
> > 		unsigned int order, unsigned long nr_pages, struct page **pages)
> > {
> > ...
> >
> No problem. Will fix it.
>
> > (yes, i renamed some of the variables there; overly long variable names
> > are painful)
> >
> > The rest of the patch looks good.
> >
> > Reviewed-by: Matthew Wilcox (Oracle)
> Thank you!
>
> I will re-spin the patch and send a v2.
>

From 6537bc97b5550f17b0813caf02ce0ec1865fa94e Mon Sep 17 00:00:00 2001
From: "Uladzislau Rezki (Sony)"
Date: Thu, 20 May 2021 14:13:23 +0200
Subject: [PATCH v2] mm/vmalloc: Fallback to a single page allocator

Currently, for order-0 pages, we use the bulk-page allocator to get a set
of pages. The bulk allocator, however, may populate the page array only
partly, or not at all. In that case we should fall back to the single-page
allocator and try to get the missing pages, because it is more permissive
(it can enter direct reclaim, etc.).

Introduce a vm_area_alloc_pages() function that implements the described
logic.
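The pattern is easiest to see outside of kernel context: try the batch path
first, then top up one allocation at a time. Below is a minimal userspace C
sketch of that fallback loop; bulk_alloc() is a hypothetical stand-in for
alloc_pages_bulk_array_node() (which exists only in the kernel) and
deliberately satisfies just half of the request to simulate a partial bulk
failure.

#include <stdio.h>
#include <stdlib.h>

#define NR_PAGES 8
#define PAGE_SZ  4096

/*
 * Hypothetical stand-in for alloc_pages_bulk_array_node(): fills the
 * array from the front, may stop early, and returns how many entries
 * it actually populated. Here it deliberately satisfies only half.
 */
static unsigned int bulk_alloc(void **pages, unsigned int nr_pages)
{
	unsigned int i;

	for (i = 0; i < nr_pages / 2; i++) {
		pages[i] = malloc(PAGE_SZ);
		if (!pages[i])
			break;
	}

	return i;
}

int main(void)
{
	void *pages[NR_PAGES];
	unsigned int nr_allocated, i;

	/* Fast batch path first... */
	nr_allocated = bulk_alloc(pages, NR_PAGES);

	/* ...then fall back to single allocations for whatever is missing. */
	while (nr_allocated < NR_PAGES) {
		void *page = malloc(PAGE_SZ);

		if (!page)
			break;

		pages[nr_allocated++] = page;
	}

	printf("allocated %u of %u pages\n", nr_allocated, NR_PAGES);

	for (i = 0; i < nr_allocated; i++)
		free(pages[i]);

	return nr_allocated == NR_PAGES ? 0 : 1;
}

The kernel version below additionally handles high-order pages (forcing
__GFP_COMP for them) and calls cond_resched() between allocations when the
gfp flags allow blocking.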
Reviewed-by: Matthew Wilcox (Oracle)
Signed-off-by: Uladzislau Rezki (Sony)
---
 mm/vmalloc.c | 81 +++++++++++++++++++++++++++++++++-------------------
 1 file changed, 52 insertions(+), 29 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index b2a0cbfa37c1..7765af7b1e9c 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2756,6 +2756,54 @@ void *vmap_pfn(unsigned long *pfns, unsigned int count, pgprot_t prot)
 EXPORT_SYMBOL_GPL(vmap_pfn);
 #endif /* CONFIG_VMAP_PFN */
 
+static inline unsigned int
+vm_area_alloc_pages(gfp_t gfp, int nid,
+		unsigned int order, unsigned long nr_pages, struct page **pages)
+{
+	unsigned int nr_allocated = 0;
+
+	/*
+	 * For order-0 pages we make use of bulk allocator, if
+	 * the page array is partly or not at all populated due
+	 * to fails, fallback to a single page allocator that is
+	 * more permissive.
+	 */
+	if (!order)
+		nr_allocated = alloc_pages_bulk_array_node(
+			gfp, nid, nr_pages, pages);
+	else
+		/*
+		 * Compound pages required for remap_vmalloc_page if
+		 * high-order pages.
+		 */
+		gfp |= __GFP_COMP;
+
+	/* High-order pages or fallback path if "bulk" fails. */
+	while (nr_allocated < nr_pages) {
+		struct page *page;
+		int i;
+
+		page = alloc_pages_node(nid, gfp, order);
+		if (unlikely(!page))
+			break;
+
+		/*
+		 * Careful, we allocate and map page-order pages, but
+		 * tracking is done per PAGE_SIZE page so as to keep the
+		 * vm_struct APIs independent of the physical/mapped size.
+		 */
+		for (i = 0; i < (1U << order); i++)
+			pages[nr_allocated + i] = page + i;
+
+		if (gfpflags_allow_blocking(gfp))
+			cond_resched();
+
+		nr_allocated += 1U << order;
+	}
+
+	return nr_allocated;
+}
+
 static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 				 pgprot_t prot, unsigned int page_shift,
 				 int node)
@@ -2789,37 +2837,11 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 		return NULL;
 	}
 
-	area->nr_pages = 0;
 	set_vm_area_page_order(area, page_shift - PAGE_SHIFT);
 	page_order = vm_area_page_order(area);
 
-	if (!page_order) {
-		area->nr_pages = alloc_pages_bulk_array_node(
-			gfp_mask, node, nr_small_pages, area->pages);
-	} else {
-		/*
-		 * Careful, we allocate and map page_order pages, but tracking is done
-		 * per PAGE_SIZE page so as to keep the vm_struct APIs independent of
-		 * the physical/mapped size.
-		 */
-		while (area->nr_pages < nr_small_pages) {
-			struct page *page;
-			int i;
-
-			/* Compound pages required for remap_vmalloc_page */
-			page = alloc_pages_node(node, gfp_mask | __GFP_COMP, page_order);
-			if (unlikely(!page))
-				break;
-
-			for (i = 0; i < (1U << page_order); i++)
-				area->pages[area->nr_pages + i] = page + i;
-
-			if (gfpflags_allow_blocking(gfp_mask))
-				cond_resched();
-
-			area->nr_pages += 1U << page_order;
-		}
-	}
+	area->nr_pages = vm_area_alloc_pages(gfp_mask, node,
+		page_order, nr_small_pages, area->pages);
 
 	atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
 
@@ -2835,7 +2857,8 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 		goto fail;
 	}
 
-	if (vmap_pages_range(addr, addr + size, prot, area->pages, page_shift) < 0) {
+	if (vmap_pages_range(addr, addr + size, prot, area->pages,
+			page_shift) < 0) {
 		warn_alloc(gfp_mask, NULL,
 			"vmalloc size %lu allocation failure: "
 			"failed to map pages",
-- 
2.20.1

--
Vlad Rezki