From: Uladzislau Rezki
Date: Tue, 23 Mar 2021 13:04:36 +0100
To: Matthew Wilcox
Cc: Uladzislau Rezki, Andrew Morton, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Vlastimil Babka, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Nicholas Piggin
Subject: Re: [PATCH 2/2] mm/vmalloc: Use kvmalloc to allocate the table of pages
Message-ID: <20210323120436.GA1949@pc638.lan>
References: <20210322193820.2140045-1-willy@infradead.org>
 <20210322193820.2140045-2-willy@infradead.org>
 <20210322223619.GA56503@pc638.lan>
 <20210322230311.GY1719932@casper.infradead.org>
In-Reply-To: <20210322230311.GY1719932@casper.infradead.org>

On Mon, Mar 22, 2021 at 11:03:11PM +0000, Matthew Wilcox wrote:
> On Mon, Mar 22, 2021 at 11:36:19PM +0100, Uladzislau Rezki wrote:
> > On Mon, Mar 22, 2021 at 07:38:20PM +0000, Matthew Wilcox (Oracle) wrote:
> > > If we're trying to allocate 4MB of memory, the table will be 8KiB in size
> > > (1024 pointers * 8 bytes per pointer), which can usually be satisfied
> > > by a kmalloc (which is significantly faster).  Instead of changing this
> > > open-coded implementation, just use kvmalloc().
> > > 
> > > Signed-off-by: Matthew Wilcox (Oracle)
> > > ---
> > >  mm/vmalloc.c | 7 +------
> > >  1 file changed, 1 insertion(+), 6 deletions(-)
> > > 
> > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > > index 96444d64129a..32b640a84250 100644
> > > --- a/mm/vmalloc.c
> > > +++ b/mm/vmalloc.c
> > > @@ -2802,13 +2802,8 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
> > >  	gfp_mask |= __GFP_HIGHMEM;
> > >  
> > >  	/* Please note that the recursion is strictly bounded. */
> > > -	if (array_size > PAGE_SIZE) {
> > > -		pages = __vmalloc_node(array_size, 1, nested_gfp, node,
> > > +	pages = kvmalloc_node_caller(array_size, nested_gfp, node,
> > >  					area->caller);
> > > -	} else {
> > > -		pages = kmalloc_node(array_size, nested_gfp, node);
> > > -	}
> > > -
> > > 	if (!pages) {
> > > 		free_vm_area(area);
> > > 		return NULL;
> > > -- 
> > > 2.30.2
> > Makes sense to me. Though I expected a bigger difference:
> > 
> > # patch
> > single CPU, 4MB allocation, loops: 1000000 avg: 85293854 usec
> > 
> > # default
> > single CPU, 4MB allocation, loops: 1000000 avg: 89275857 usec
> 
> Well, 4.5% isn't something to leave on the table ... but yeah, I was
> expecting more in the 10-20% range.  It may be more significant if
> there's contention on the spinlocks (like if this crazy ksmbd is calling
> vmalloc(4MB) on multiple nodes simultaneously).
> 
Yep, it can be that simultaneous allocations will show even bigger
improvements because of lock contention. Anyway, there is an advantage
in switching to SLAB - 5% is also a win :)
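For readers following along, the win comes from kvmalloc()'s slab-first
behaviour. Below is a simplified sketch of that fallback idea only; it is
not the exact mm/util.c code nor the kvmalloc_node_caller() helper added
in patch 1/2, and the function name is made up for illustration:

#include <linux/gfp.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

/*
 * Simplified sketch of the kvmalloc-style fallback: try the slab
 * allocator first and only go through vmalloc when kmalloc cannot
 * satisfy the request.
 */
static void *kvmalloc_sketch(size_t size, gfp_t flags, int node)
{
	gfp_t kmalloc_flags = flags;
	void *p;

	/* For multi-page sizes, do not retry hard and do not warn on failure. */
	if (size > PAGE_SIZE)
		kmalloc_flags |= __GFP_NOWARN | __GFP_NORETRY;

	p = kmalloc_node(size, kmalloc_flags, node);
	if (p || size <= PAGE_SIZE)
		return p;

	/* Slab path failed; fall back to a page-granular vmalloc buffer. */
	return __vmalloc_node(size, 1, flags, node,
			      __builtin_return_address(0));
}

For an 8KiB page table the kmalloc_node() call above succeeds almost
always, so the vmalloc fallback (and its extra mapping work) is skipped
entirely.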
> I suspect the vast majority of the time is spent calling alloc_pages_node()
> 1024 times.  Have you looked at Mel's patch to do ... well, exactly what
> vmalloc() wants?
> 
-   97.37%     0.00%  vmalloc_test/0  [kernel.vmlinux]  [k] ret_from_fork
     ret_from_fork
     kthread
   - 0xffffffffc047373b
      - 52.67% 0xffffffffc047349f
           __vmalloc_node
         - __vmalloc_node_range
            - 45.25% __alloc_pages_nodemask
               - 37.59% get_page_from_freelist
                    4.34% __list_del_entry_valid
                    3.67% __list_add_valid
                    1.52% prep_new_page
                    1.20% check_preemption_disabled
              3.75% map_kernel_range_noflush
            - 0.64% kvmalloc_node_caller
                 __kmalloc_track_caller
                 memset_orig
      - 44.61% 0xffffffffc047348d
         - __vunmap
            - 35.56% free_unref_page
               - 22.48% free_pcppages_bulk
                  - 4.21% __mod_zone_page_state
                       2.78% check_preemption_disabled
                       0.80% __this_cpu_preempt_check
                    2.24% __list_del_entry_valid
                    1.84% __list_add_valid
               - 6.55% free_unref_page_commit
                    2.47% check_preemption_disabled
                    1.36% __list_add_valid
                 3.10% free_unref_page_prepare.part.88
                 0.72% free_pcp_prepare
            - 6.26% remove_vm_area
                 6.18% unmap_kernel_range_noflush
              2.31% __free_pages

__alloc_pages_nodemask() consumes a lot of cycles because it is called
once per page; like you mentioned, for a 4MB request it is invoked 1024
times!

> https://lore.kernel.org/linux-mm/20210322091845.16437-1-mgorman@techsingularity.net/
I saw it. It would be good to switch vmalloc to the bulk interface once
it is settled and mainlined. Apart from that, I find it also useful for
the kvfree_rcu() code in the context of page-cache refilling :)

> > One question. Should we care much about fragmentation? I mean
> > with the patch, allocations > 2MB will do a request to SLAB bigger
> > than PAGE_SIZE.
> 
> We're pretty good about allocating memory in larger chunks these days.
> Looking at my laptop's slabinfo,
> kmalloc-8k      219    232   8192    4    8 : tunables  0  0  0 : slabdata  58  58  0
> 
> That's using 8 pages per slab, so that's order-3 allocations.  There's a
> few more of those:
> 
> $ sudo grep '8 :' /proc/slabinfo | wc
>      42     672    4508
> 
> so I have confidence that kvmalloc() will manage to use kmalloc up to 16MB
> vmalloc allocations, and after that it'll tend to fall back to vmalloc.
> 
Reviewed-by: Uladzislau Rezki (Sony)

Thanks!

--
Vlad Rezki
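To make the bulk-interface idea above more concrete, here is a hedged
sketch of how the page array in __vmalloc_area_node() could be filled in
batches instead of with one alloc_pages_node() call per page. It assumes
an array-filling API shaped like the proposed alloc_pages_bulk_array()
(fills NULL slots and returns the number of populated entries); the exact
interface was still under discussion at the time, and the helper name is
illustrative:

#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Hedged sketch: populate all slots of @pages (assumed zero-initialized)
 * in bulk rather than one page at a time. Assumes an API shaped like the
 * proposed alloc_pages_bulk_array(), which returns the total number of
 * populated entries in the array.
 */
static int vmalloc_fill_pages_bulk(struct page **pages,
				   unsigned long nr_pages, gfp_t gfp)
{
	unsigned long populated = 0;

	while (populated < nr_pages) {
		unsigned long n = alloc_pages_bulk_array(gfp, nr_pages, pages);

		/* No forward progress means the allocator is out of memory. */
		if (n == populated)
			return -ENOMEM;	/* caller frees what was allocated */
		populated = n;
	}
	return 0;
}

With nr_pages = 1024 for a 4MB request, the fast path becomes a handful
of trips through the page allocator instead of 1024, which is where the
profile above spends most of its time.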