From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sat, 20 Jan 2024 12:55:10 +0000
From: Lorenzo Stoakes
To: Uladzislau Rezki
Cc: linux-mm@kvack.org, Andrew Morton, LKML, Baoquan He, Christoph Hellwig, Matthew Wilcox, "Liam R. Howlett", Dave Chinner, "Paul E. McKenney", Joel Fernandes, Oleksiy Avramchenko
Subject: Re: [PATCH v3 04/11] mm: vmalloc: Remove global vmap_area_root rb-tree
Message-ID: <2c318a40-9e0f-4d24-b5cc-e712f7b2c334@lucifer.local>
References: <20240102184633.748113-1-urezki@gmail.com> <20240102184633.748113-5-urezki@gmail.com> <63104f8e-2fe3-46b2-842c-f11f8bb4b336@lucifer.local>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
On Thu, Jan 18, 2024 at 02:15:31PM +0100, Uladzislau Rezki wrote:

[snip]

> > > +	struct rb_root root;
> > > +	struct list_head head;
> > > +	spinlock_t lock;
> > > +};
> > > +
> > > +static struct vmap_node {
> > > +	/* Bookkeeping data of this node. */
> > > +	struct rb_list busy;
> > > +} single;
> >
> > This may be a thing about encapsulation/naming or similar, but I'm a little
> > confused as to why the rb_list type is maintained as a field rather than
> > its fields embedded?
> >
> The "struct vmap_node" will be extended by the following patches in the
> series.
>

Yeah sorry I missed this, only realising after I sent...!

> > > +
> > > +static struct vmap_node *vmap_nodes = &single;
> > > +static __read_mostly unsigned int nr_vmap_nodes = 1;
> > > +static __read_mostly unsigned int vmap_zone_size = 1;
> >
> > It might be worth adding a comment here explaining that we're binding to a
> > single node for now to maintain existing behaviour (and a brief description
> > of what these values mean - for instance what unit vmap_zone_size is
> > expressed in?)
> >
> Right. Agree on it :)

Indeed :)

[snip]

> > > /* Look up the first VA which satisfies addr < va_end, NULL if none. */
> > > -static struct vmap_area *find_vmap_area_exceed_addr(unsigned long addr)
> > > +static struct vmap_area *
> > > +find_vmap_area_exceed_addr(unsigned long addr, struct rb_root *root)
> > > {
> > > 	struct vmap_area *va = NULL;
> > > -	struct rb_node *n = vmap_area_root.rb_node;
> > > +	struct rb_node *n = root->rb_node;
> > >
> > > 	addr = (unsigned long)kasan_reset_tag((void *)addr);
> > >
> > > @@ -1552,12 +1583,14 @@ __alloc_vmap_area(struct rb_root *root, struct list_head *head,
> > >  */
> > > static void free_vmap_area(struct vmap_area *va)
> > > {
> > > +	struct vmap_node *vn = addr_to_node(va->va_start);
> > > +
> >
> > I'm being nitty here, and while I know it's a vmalloc convention to use
> > 'va' and 'vm', perhaps we can break away from the super short variable name
> > convention and use 'vnode' or something for these values?
> >
> > I feel people might get confused between 'vm' and 'vn' for instance.
> >
> vnode, varea?

I think 'vm' and 'va' are fine, just scanning through easy to mistake 'vn'
and 'vm'. Obviously a little nitpicky! You could replace all but a bit
churny, so I think vn -> vnode works best imo.

[snip]

> > > struct vmap_area *find_vmap_area(unsigned long addr)
> > > {
> > > +	struct vmap_node *vn;
> > > 	struct vmap_area *va;
> > > +	int i, j;
> > >
> > > -	spin_lock(&vmap_area_lock);
> > > -	va = __find_vmap_area(addr, &vmap_area_root);
> > > -	spin_unlock(&vmap_area_lock);
> > > +	/*
> > > +	 * An addr_to_node_id(addr) converts an address to a node index
> > > +	 * where a VA is located. If VA spans several zones and passed
> > > +	 * addr is not the same as va->va_start, what is not common, we
> > > +	 * may need to scan an extra nodes. See an example:
> >
> > For my understanding when you say 'scan an extra nodes' do you mean scan
> > just 1 extra node, or multiple? If the former I'd replace this with 'may
> > need to scan an extra node' if the latter then 'may need to scan extra
> > nodes'.
> >
> > It's a nitty language thing, but also potentially changes the meaning of
> > this!
> >
> Typo, i should replace it to: scan extra nodes.

Thanks.

> > > +	 *
> > > +	 *      <--va-->
> > > +	 * -|-----|-----|-----|-----|-
> > > +	 *     1     2     0     1
> > > +	 *
> > > +	 * VA resides in node 1 whereas it spans 1 and 2. If passed
> > > +	 * addr is within a second node we should do extra work. We
> > > +	 * should mention that it is rare and is a corner case from
> > > +	 * the other hand it has to be covered.
> >
> > A very minor language style nit, but you've already said this is not
> > common, I don't think you need this 'We should mention...' bit. It's not a
> > big deal however!
> >
> No problem. We can remove it!

Thanks.

> > > +	 */
> > > +	i = j = addr_to_node_id(addr);
> > > +	do {
> > > +		vn = &vmap_nodes[i];
> > >
> > > -	return va;
> > > +		spin_lock(&vn->busy.lock);
> > > +		va = __find_vmap_area(addr, &vn->busy.root);
> > > +		spin_unlock(&vn->busy.lock);
> > > +
> > > +		if (va)
> > > +			return va;
> > > +	} while ((i = (i + 1) % nr_vmap_nodes) != j);
> >
> > If you comment above suggests that only 1 extra node might need to be
> > scanned, should we stop after one iteration?
> >
> Not really. Though we can improve it further to scan backward.

I think it'd be good to clarify in the comment above that the VA could span
more than 1 node then, as the diagram seems to imply only 1 (I think just
simply because of the example you were showing).
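As an aside, for anyone following along: the wraparound scan being discussed
can be modelled in userspace. This is a deliberately simplified sketch, not
the kernel code - all names and types here (struct va, nodes[], ZONE_SIZE)
are hypothetical stand-ins. It shows why an address falling in the *tail* of
a spanning VA maps to a later node, so the lookup has to walk forward with
wraparound to reach the node that actually owns the VA:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Toy model (NOT the kernel code): NR_NODES fixed-size zones, each
 * holding at most one "VA" that starts in that zone but may span into
 * the next. A zeroed entry (start == end) means "no VA here".
 */
#define NR_NODES  4
#define ZONE_SIZE 100ul

struct va {
	unsigned long start; /* inclusive */
	unsigned long end;   /* exclusive */
};

static struct va nodes[NR_NODES];

/* An address maps to the zone (node) it falls in. */
static unsigned int addr_to_node_id(unsigned long addr)
{
	return (addr / ZONE_SIZE) % NR_NODES;
}

/*
 * Mirrors the do/while in the patch: start at the node the address maps
 * to, then scan forward with wraparound until we either find a VA
 * containing addr or arrive back where we started.
 */
static struct va *find_va(unsigned long addr)
{
	unsigned int i, j;

	i = j = addr_to_node_id(addr);
	do {
		struct va *v = &nodes[i];

		if (v->start != v->end && addr >= v->start && addr < v->end)
			return v;
	} while ((i = (i + 1) % NR_NODES) != j);

	return NULL;
}
```

E.g. a VA [150, 250) is owned by node 1 but spans into zone 2; looking up
addr 220 starts at node 2 and only reaches node 1 after wrapping through
nodes 3 and 0 - which is also why scanning backward, as mentioned above,
would be the shorter path for this case.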
[snip]

> > > static struct vmap_area *find_unlink_vmap_area(unsigned long addr)
> > > {
> > > +	struct vmap_node *vn;
> > > 	struct vmap_area *va;
> > > +	int i, j;
> > >
> > > -	spin_lock(&vmap_area_lock);
> > > -	va = __find_vmap_area(addr, &vmap_area_root);
> > > -	if (va)
> > > -		unlink_va(va, &vmap_area_root);
> > > -	spin_unlock(&vmap_area_lock);
> > > +	i = j = addr_to_node_id(addr);
> > > +	do {
> > > +		vn = &vmap_nodes[i];
> > >
> > > -	return va;
> > > +		spin_lock(&vn->busy.lock);
> > > +		va = __find_vmap_area(addr, &vn->busy.root);
> > > +		if (va)
> > > +			unlink_va(va, &vn->busy.root);
> > > +		spin_unlock(&vn->busy.lock);
> > > +
> > > +		if (va)
> > > +			return va;
> > > +	} while ((i = (i + 1) % nr_vmap_nodes) != j);
> >
> > Maybe worth adding a comment saying to refer to the comment in
> > find_vmap_area() to see why this loop is necessary.
> >
> OK. We can do it to make it better for reading.

Thanks!

[snip]

> > > @@ -3728,8 +3804,11 @@ long vread_iter(struct iov_iter *iter, const char *addr, size_t count)
> >
> > Unrelated to your change but makes me feel a little unwell to see 'const
> > char *addr'! Can we change this at some point? Or maybe I can :)
> >
> You are welcome :)

Haha ;) yes I think I might tbh, I have noted it down.

> > >
> > > 	remains = count;
> > >
> > > -	spin_lock(&vmap_area_lock);
> > > -	va = find_vmap_area_exceed_addr((unsigned long)addr);
> > > +	/* Hooked to node_0 so far. */
> > > +	vn = addr_to_node(0);
> >
> > Why can't we use addr for this call? We already enforce the node-0 only
> > thing by setting nr_vmap_nodes to 1 right? And won't this be potentially
> > subtly wrong when we later increase this?
> >
> I used to have 0 here. But please note, it is changed by the next patch in
> this series.

Yeah sorry, again hadn't noticed this.
[snip]

> > > +		spin_lock(&vn->busy.lock);
> > > +		insert_vmap_area(vas[area], &vn->busy.root, &vn->busy.head);
> > > 		setup_vmalloc_vm_locked(vms[area], vas[area], VM_ALLOC,
> > > 				 pcpu_get_vm_areas);
> > > +		spin_unlock(&vn->busy.lock);
> >
> > Hmm, before we were locking/unlocking once before the loop, now we're
> > locking on each iteration, this seems inefficient.
> >
> > Seems like we need logic like:
> >
> > /* ... something to check nr_vms > 0 ... */
> > struct vmap_node *last_node = NULL;
> >
> > for (...) {
> > 	struct vmap_node *vnode = addr_to_node(vas[area]->va_start);
> >
> > 	if (vnode != last_node) {
> > 		spin_unlock(last_node->busy.lock);
> > 		spin_lock(vnode->busy.lock);
> > 		last_node = vnode;
> > 	}
> >
> > 	...
> > }
> >
> > if (last_node)
> > 	spin_unlock(last_node->busy.lock);
> >
> > To minimise the lock twiddling. What do you think?
> >
> This per-cpu-allocator prefetches several VA units per-cpu. I do not
> find it as critical because it is not a hot path for the per-cpu allocator.
> When its buffers are exhausted it does an extra prefetch. So it is not
> frequent.

OK, sure I mean this is simpler and more readable so if not a huge perf
concern then not a big deal.

> > >
> > > 	}
> > > -	spin_unlock(&vmap_area_lock);
> > >
> > > 	/*
> > > 	 * Mark allocated areas as accessible. Do it now as a best-effort
> > > @@ -4253,55 +4333,57 @@ bool vmalloc_dump_obj(void *object)
> > > {
> > > 	void *objp = (void *)PAGE_ALIGN((unsigned long)object);
> > > 	const void *caller;
> > > -	struct vm_struct *vm;
> > > 	struct vmap_area *va;
> > > +	struct vmap_node *vn;
> > > 	unsigned long addr;
> > > 	unsigned int nr_pages;
> > > +	bool success = false;
> > >
> > > -	if (!spin_trylock(&vmap_area_lock))
> > > -		return false;
> >
> > Nitpick on style for this, I really don't know why you are removing this
> > early exit? It's far neater to have a guard clause than to nest a whole
> > bunch of code below.
> >
> Hm... I can return back as it used to be.
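For what it's worth, the lock-coalescing pattern sketched above can be
spelled out as a compilable userspace sketch (pthread mutex standing in for
the busy spinlock; all names are hypothetical, not the kernel's - and note
the last_node NULL check moved before the unlock, which the pseudocode
above glossed over). The lock is only dropped and re-taken when a run of
consecutive areas switches node:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Toy stand-in for the kernel's per-node structure (names illustrative). */
struct vmap_node {
	pthread_mutex_t busy_lock;
	int insert_count; /* stand-in for the rb-tree insertion work */
};

/* Work done while holding the node's lock. */
static void insert_locked(struct vmap_node *vnode)
{
	vnode->insert_count++;
}

/*
 * Insert a batch of areas, each already mapped to some node. Each node's
 * lock is taken once per run of consecutive areas on that node, rather
 * than once per area.
 */
static void insert_batch(struct vmap_node **area_nodes, int nr)
{
	struct vmap_node *last_node = NULL;
	int area;

	for (area = 0; area < nr; area++) {
		struct vmap_node *vnode = area_nodes[area];

		if (vnode != last_node) {
			if (last_node)
				pthread_mutex_unlock(&last_node->busy_lock);
			pthread_mutex_lock(&vnode->busy_lock);
			last_node = vnode;
		}

		insert_locked(vnode);
	}

	if (last_node)
		pthread_mutex_unlock(&last_node->busy_lock);
}
```

With all areas on one node this degenerates to a single lock/unlock pair,
matching the old single-lock behaviour; the per-iteration version in the
patch takes the lock nr times regardless.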
> I do not have a strong opinion here.

Yeah that'd be ideal just for readability.

[snip the rest as broadly fairly trivial comment stuff on which we agree]

> Thank you for the review! I can fix the comments as separate patches if
> no objections.

Yes, overall it's style/comment improvement stuff nothing major, feel free
to send as follow-up patches. I don't want to hold anything up here so for
the rest, feel free to add:

Reviewed-by: Lorenzo Stoakes <lstoakes@gmail.com>

> --
> Uladzislau Rezki