From: Uladzislau Rezki
Date: Wed, 19 May 2021 16:39:00 +0200
To: Christoph Hellwig, Mel Gorman
Cc: "Uladzislau Rezki (Sony)", Andrew Morton, linux-mm@kvack.org, LKML, Mel Gorman, Matthew Wilcox, Nicholas Piggin, Hillf Danton, Michal Hocko, Oleksiy Avramchenko, Steven Rostedt
Subject: Re: [PATCH 2/3] mm/vmalloc: Switch to bulk allocator in __vmalloc_area_node()
Message-ID: <20210519143900.GA2262@pc638.lan>
References: <20210516202056.2120-1-urezki@gmail.com> <20210516202056.2120-3-urezki@gmail.com>

On Wed, May 19, 2021 at 02:44:08PM +0100, Christoph Hellwig wrote:
> > +	if (!page_order) {
> > +		area->nr_pages = alloc_pages_bulk_array_node(
> > +			gfp_mask, node, nr_small_pages, area->pages);
> > +	} else {
> > +		/*
> > +		 * Careful, we allocate and map page_order pages, but tracking is done
> > +		 * per PAGE_SIZE page so as to keep the vm_struct APIs independent of
>
> Comments over 80 lines are completely unreadable, so please avoid them.
>
That I can fix in a separate patch.

> > +		 * the physical/mapped size.
> > +		 */
> > +		while (area->nr_pages < nr_small_pages) {
> > +			struct page *page;
> > +			int i;
> > +
> > +			/* Compound pages required for remap_vmalloc_page */
> > +			page = alloc_pages_node(node, gfp_mask | __GFP_COMP, page_order);
> > +			if (unlikely(!page))
> > +				break;
> >
> > +			for (i = 0; i < (1U << page_order); i++)
> > +				area->pages[area->nr_pages + i] = page + i;
> >
> > +			if (gfpflags_allow_blocking(gfp_mask))
> > +				cond_resched();
> > +
> > +			area->nr_pages += 1U << page_order;
> > +		}
>
> In fact splitting this whole high order allocation logic into a little
> helper would massively benefit the function by ordering it more logically
> and reducing a level of indentation.
>
I can put it into a separate function. Actually, I was thinking about it.

> > +	/*
> > +	 * If not enough pages were obtained to accomplish an
> > +	 * allocation request, free them via __vfree() if any.
> > +	 */
> > +	if (area->nr_pages != nr_small_pages) {
> > +		warn_alloc(gfp_mask, NULL,
> > +			"vmalloc size %lu allocation failure: "
> > +			"page order %u allocation failed",
> > +			area->nr_pages * PAGE_SIZE, page_order);
> > +		goto fail;
> > +	}
>
> From reading __alloc_pages_bulk, not allocating all pages is something
> that can happen fairly easily. Shouldn't we try to allocate the missing
> pages manually and/or retry here?
>
It is a good point.

The bulk allocator, as I see it, only tries the pcp-list and falls back to
the single page allocator once that fails, so the array may not be fully
populated. In that case it probably makes sense to manually populate the
rest of the array using the single page allocator.

Mel, could you please also comment on it?
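To make the discussion a bit more concrete, below is a rough and completely
untested sketch of how both points could be folded into one small helper.
The helper name and the exact fallback policy are only placeholders: the
idea is that for order-0 it first tries the bulk allocator and then fills
whatever is left with the single page allocator, whereas for higher orders
it keeps the existing per-order loop:

static unsigned int
__vmalloc_area_alloc_pages(gfp_t gfp, int node, unsigned int order,
		unsigned int nr_small_pages, struct page **pages)
{
	unsigned int nr_allocated = 0;

	if (!order) {
		/* Batch the order-0 allocations via the pcp lists first. */
		nr_allocated = alloc_pages_bulk_array_node(gfp, node,
				nr_small_pages, pages);

		/* Populate whatever the bulk allocator did not. */
		while (nr_allocated < nr_small_pages) {
			struct page *page = alloc_pages_node(node, gfp, 0);

			if (unlikely(!page))
				break;

			pages[nr_allocated++] = page;
		}

		return nr_allocated;
	}

	while (nr_allocated < nr_small_pages) {
		struct page *page;
		unsigned int i;

		/* Compound pages required for remap_vmalloc_page */
		page = alloc_pages_node(node, gfp | __GFP_COMP, order);
		if (unlikely(!page))
			break;

		for (i = 0; i < (1U << order); i++)
			pages[nr_allocated + i] = page + i;

		if (gfpflags_allow_blocking(gfp))
			cond_resched();

		nr_allocated += 1U << order;
	}

	return nr_allocated;
}

__vmalloc_area_node() would then only do something like:

	area->nr_pages = __vmalloc_area_alloc_pages(gfp_mask, node,
			page_order, nr_small_pages, area->pages);

and keep the existing warn_alloc()/goto fail path for the case when
area->nr_pages != nr_small_pages.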
> > +	if (vmap_pages_range(addr, addr + size, prot, area->pages, page_shift) < 0) {
>
> Another pointlessly long line.
>
Yep. Will fix it in a separate patch. Actually, checkpatch.pl also complains
when the text is split like below:

	warn_alloc(gfp_mask, NULL,
		"vmalloc size %lu allocation failure: "
		"page order %u allocation failed",
		area->nr_pages * PAGE_SIZE, page_order);

Thanks for the comments!

--
Vlad Rezki