From: Uladzislau Rezki <urezki@gmail.com>
Date: Tue, 3 Dec 2024 20:02:26 +0100
To: Kefeng Wang, zuoze
Cc: Kefeng Wang, zuoze, Matthew Wilcox, gustavoars@kernel.org,
 akpm@linux-foundation.org, linux-hardening@vger.kernel.org,
 linux-mm@kvack.org, keescook@chromium.org
Subject: Re: [PATCH -next] mm: usercopy: add a debugfs interface to bypass the vmalloc check.
Message-ID:
References: <57f9eca2-effc-3a9f-932b-fd37ae6d0f87@huawei.com>
 <92768fc4-4fe0-f74a-d61c-dde0eb64e2c0@huawei.com>
 <76995749-1c2e-4f78-9aac-a4bff4b8097f@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To:
On Tue, Dec 03, 2024 at 03:20:04PM +0100, Uladzislau Rezki wrote:
> On Tue, Dec 03, 2024 at 10:10:26PM +0800, Kefeng Wang wrote:
> >
> >
> > On 2024/12/3 21:51, Uladzislau Rezki wrote:
> > > On Tue, Dec 03, 2024 at 09:45:09PM +0800, Kefeng Wang wrote:
> > > >
> > > >
> > > > On 2024/12/3 21:39, Uladzislau Rezki wrote:
> > > > > On Tue, Dec 03, 2024 at 09:30:09PM +0800, Kefeng Wang wrote:
> > > > > >
> > > > > >
> > > > > > On 2024/12/3 21:10, zuoze wrote:
> > > > > > >
> > > > > > >
> > > > > > > On 2024/12/3 20:39, Uladzislau Rezki wrote:
> > > > > > > > On Tue, Dec 03, 2024 at 07:23:44PM +0800, zuoze wrote:
> > > > > > > > > We have implemented host-guest communication based on the TUN device
> > > > > > > > > using XSK[1]. The hardware is a Kunpeng 920 machine (ARM architecture),
> > > > > > > > > and the operating system is based on the 6.6 LTS version with kernel
> > > > > > > > > version 6.6. The specific stack for hotspot collection is as follows:
> > > > > > > > >
> > > > > > > > > -  100.00%     0.00%  vhost-12384  [unknown]      [k] 0000000000000000
> > > > > > > > >     - ret_from_fork
> > > > > > > > >        - 99.99% vhost_task_fn
> > > > > > > > >           - 99.98% 0xffffdc59f619876c
> > > > > > > > >              - 98.99% handle_rx_kick
> > > > > > > > >                 - 98.94% handle_rx
> > > > > > > > >                    - 94.92% tun_recvmsg
> > > > > > > > >                       - 94.76% tun_do_read
> > > > > > > > >                          - 94.62% tun_put_user_xdp_zc
> > > > > > > > >                             - 63.53% __check_object_size
> > > > > > > > >                                - 63.49% __check_object_size.part.0
> > > > > > > > >                                     find_vmap_area
> > > > > > > > >                             - 30.02% _copy_to_iter
> > > > > > > > >                                  __arch_copy_to_user
> > > > > > > > >                    - 2.27% get_rx_bufs
> > > > > > > > >                       - 2.12% vhost_get_vq_desc
> > > > > > > > >                            1.49% __arch_copy_from_user
> > > > > > > > >                    - 0.89% peek_head_len
> > > > > > > > >                         0.54% xsk_tx_peek_desc
> > > > > > > > >                    - 0.68% vhost_add_used_and_signal_n
> > > > > > > > >                       - 0.53% eventfd_signal
> > > > > > > > >                            eventfd_signal_mask
> > > > > > > > >              - 0.94% handle_tx_kick
> > > > > > > > >                 - 0.94% handle_tx
> > > > > > > > >                    - handle_tx_copy
> > > > > > > > >                       - 0.59% vhost_tx_batch.constprop.0
> > > > > > > > >                            0.52% tun_sendmsg
> > > > > > > > >
> > > > > > > > > It can be observed that most of the overhead is concentrated in the
> > > > > > > > > find_vmap_area function.
> > > > > > > > >
...
> > > Thank you. Then you have tons of copy_to_iter/copy_from_iter calls
> > > during your test case. Per each you need to find an area which might
> > > be really heavy.
> >
> > Exactly, no vmalloc check before 0aef499f3172 ("mm/usercopy: Detect vmalloc
> > overruns"), so no burden in find_vmap_area in old kernel.
> >
> Yep. It will slow down for sure.
>
> > >
> > > How many CPUs in a system you have?
> > >
> > 128 core
> OK. Just in case, do you see in a boot log something like:
>
> "Failed to allocate an array. Disable a node layer"
>
And if you do not see such a failing message, it means that the node layer
is fully up and running. Can you also test the patch below on your workload?

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 634162271c00..35b28be27cf4 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -896,7 +896,7 @@ static struct vmap_node {
  * is fully disabled. Later on, after vmap is initialized these
  * parameters are updated based on a system capacity.
  */
-static struct vmap_node *vmap_nodes = &single;
+static struct vmap_node **vmap_nodes;
 static __read_mostly unsigned int nr_vmap_nodes = 1;
 static __read_mostly unsigned int vmap_zone_size = 1;

@@ -909,13 +909,13 @@ addr_to_node_id(unsigned long addr)
 static inline struct vmap_node *
 addr_to_node(unsigned long addr)
 {
-	return &vmap_nodes[addr_to_node_id(addr)];
+	return vmap_nodes[addr_to_node_id(addr)];
 }

 static inline struct vmap_node *
 id_to_node(unsigned int id)
 {
-	return &vmap_nodes[id % nr_vmap_nodes];
+	return vmap_nodes[id % nr_vmap_nodes];
 }

 /*
@@ -1060,7 +1060,7 @@ find_vmap_area_exceed_addr_lock(unsigned long addr, struct vmap_area **va)
 repeat:
 	for (i = 0, va_start_lowest = 0; i < nr_vmap_nodes; i++) {
-		vn = &vmap_nodes[i];
+		vn = vmap_nodes[i];

 		spin_lock(&vn->busy.lock);
 		*va = __find_vmap_area_exceed_addr(addr, &vn->busy.root);

@@ -2240,7 +2240,7 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end,
 	purge_nodes = CPU_MASK_NONE;

 	for (i = 0; i < nr_vmap_nodes; i++) {
-		vn = &vmap_nodes[i];
+		vn = vmap_nodes[i];

 		INIT_LIST_HEAD(&vn->purge_list);
 		vn->skip_populate = full_pool_decay;
@@ -2272,7 +2272,7 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end,
 		nr_purge_helpers = clamp(nr_purge_helpers, 1U, nr_purge_nodes) - 1;

 		for_each_cpu(i, &purge_nodes) {
-			vn = &vmap_nodes[i];
+			vn = vmap_nodes[i];

 			if (nr_purge_helpers > 0) {
 				INIT_WORK(&vn->purge_work, purge_vmap_node);
@@ -2291,7 +2291,7 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end,
 		}

 		for_each_cpu(i, &purge_nodes) {
-			vn = &vmap_nodes[i];
+			vn = vmap_nodes[i];

 			if (vn->purge_work.func) {
 				flush_work(&vn->purge_work);
@@ -2397,7 +2397,7 @@ struct vmap_area *find_vmap_area(unsigned long addr)
 	 */
 	i = j = addr_to_node_id(addr);
 	do {
-		vn = &vmap_nodes[i];
+		vn = vmap_nodes[i];

 		spin_lock(&vn->busy.lock);
 		va = __find_vmap_area(addr, &vn->busy.root);
@@ -2421,7 +2421,7 @@ static struct vmap_area *find_unlink_vmap_area(unsigned long addr)
 	 */
 	i = j = addr_to_node_id(addr);
 	do {
-		vn = &vmap_nodes[i];
+		vn = vmap_nodes[i];

 		spin_lock(&vn->busy.lock);
 		va = __find_vmap_area(addr, &vn->busy.root);
@@ -4928,7 +4928,7 @@ static void show_purge_info(struct seq_file *m)
 	int i;

 	for (i = 0; i < nr_vmap_nodes; i++) {
-		vn = &vmap_nodes[i];
+		vn = vmap_nodes[i];

 		spin_lock(&vn->lazy.lock);
 		list_for_each_entry(va, &vn->lazy.head, list) {
@@ -4948,7 +4948,7 @@ static int vmalloc_info_show(struct seq_file *m, void *p)
 	int i;

 	for (i = 0; i < nr_vmap_nodes; i++) {
-		vn = &vmap_nodes[i];
+		vn = vmap_nodes[i];

 		spin_lock(&vn->busy.lock);
 		list_for_each_entry(va, &vn->busy.head, list) {
@@ -5069,6 +5069,7 @@ static void __init vmap_init_free_space(void)
 static void vmap_init_nodes(void)
 {
+	struct vmap_node **nodes;
 	struct vmap_node *vn;
 	int i, n;

@@ -5087,23 +5088,34 @@ static void vmap_init_nodes(void)
 	 * set of cores. Therefore a per-domain purging is supposed to
 	 * be added as well as a per-domain balancing.
 	 */
-	n = clamp_t(unsigned int, num_possible_cpus(), 1, 128);
+	n = 1024;

 	if (n > 1) {
-		vn = kmalloc_array(n, sizeof(*vn), GFP_NOWAIT | __GFP_NOWARN);
-		if (vn) {
+		nodes = kmalloc_array(n, sizeof(struct vmap_node **),
+			GFP_NOWAIT | __GFP_NOWARN | __GFP_ZERO);
+
+		if (nodes) {
+			for (i = 0; i < n; i++) {
+				nodes[i] = kmalloc(sizeof(struct vmap_node), GFP_NOWAIT | __GFP_ZERO);
+
+				if (!nodes[i])
+					break;
+			}
+
 			/* Node partition is 16 pages. */
 			vmap_zone_size = (1 << 4) * PAGE_SIZE;
-			nr_vmap_nodes = n;
-			vmap_nodes = vn;
+			nr_vmap_nodes = i;
+			vmap_nodes = nodes;
 		} else {
 			pr_err("Failed to allocate an array. Disable a node layer\n");
+			vmap_nodes[0] = &single;
+			nr_vmap_nodes = 1;
 		}
 	}
 #endif

 	for (n = 0; n < nr_vmap_nodes; n++) {
-		vn = &vmap_nodes[n];
+		vn = vmap_nodes[n];
 		vn->busy.root = RB_ROOT;
 		INIT_LIST_HEAD(&vn->busy.head);
 		spin_lock_init(&vn->busy.lock);
@@ -5129,7 +5141,7 @@ vmap_node_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
 	int i, j;

 	for (count = 0, i = 0; i < nr_vmap_nodes; i++) {
-		vn = &vmap_nodes[i];
+		vn = vmap_nodes[i];

 		for (j = 0; j < MAX_VA_SIZE_PAGES; j++)
 			count += READ_ONCE(vn->pool[j].len);
@@ -5144,7 +5156,7 @@ vmap_node_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 	int i;

 	for (i = 0; i < nr_vmap_nodes; i++)
-		decay_va_pool_node(&vmap_nodes[i], true);
+		decay_va_pool_node(vmap_nodes[i], true);

 	return SHRINK_STOP;
 }

It sets the number of nodes to 1024. It would be really appreciated to see
the perf delta with this patch, i.e. whether it improves things or not.

Thank you in advance.

--
Uladzislau Rezki