From mboxrd@z Thu Jan 1 00:00:00 1970
From: JaeJoon Jung <rgbi3307@gmail.com>
Date: Sun, 2 Jun 2024 18:05:49 +0900
Subject: Re: [PATCH] maple_tree: add mas_node_count() before going to slow_path in mas_wr_modify()
To: "Liam R. Howlett", Jung-JaeJoon, maple-tree@lists.infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20240601025536.25682-1-rgbi3307@naver.com>
Content-Type: text/plain; charset="UTF-8"

Hello, Liam.
Thank you very much for the detailed answer and explanation.

I tested this patch in user space, where the phenomenon reproduces
every time kmem_cache_alloc() is called to allocate a new node. I will
test it in more detail in kernel space, consult the notes from the
mailing-list thread you shared, and send the results once I have a
clearer analysis.

Thanks,
JaeJoon Jung

On Sun, 2 Jun 2024 at 11:41, Liam R. Howlett wrote:
>
> * Jung-JaeJoon [240531 22:55]:
> > From: Jung-JaeJoon
> >
> > If there are not enough nodes, mas_node_count() sets an error state
> > via mas_set_err() and returns control flow to the beginning.
> >
> > In the return flow, mas_nomem() checks the error state, allocates new
> > nodes, and resumes execution.
> >
> > In particular, if this happens in mas_split(), in the slow_path
> > section executed from mas_wr_modify(), unnecessary work is repeated,
> > causing a slowdown, as in the flow below:
> >
> > _begin:
> >   mas_wr_modify() --> if (new_end >= mt_slots[wr_mas->type]) --> goto slow_path
> > slow_path:
> >   --> mas_wr_bnode() --> mas_store_b_node() --> mas_commit_b_node() --> mas_split()
> >   --> mas_node_count() returns to _begin
> >
> > However, if mas_node_count() is executed before entering slow_path in
> > the above flow, execution efficiency improves because the nodes are
> > allocated without entering slow_path repeatedly.
>
> Thank you for your patch, but I have to NACK this change.
>
> You are trying to optimise the work done when we are out of memory,
> which is a very rare state. How did you check that this works?
>
> If we run out of memory, the code will kick back to mas_nomem() and
> may start running reclaim to free enough memory for the allocations.
> There is nothing we can do to make a meaningful change in the speed of
> execution at this point. IOW, the duplicate work is the least of our
> problems.
>
> This change has also separated the allocations from the reason we are
> allocating - which isn't really apparent in this change. We could put
> in a comment about why we are doing this, but the difference in
> execution speed when we are in a low-memory, probably reclaim-retry
> situation is not worth this complication.
>
> We also have a feature on the mailing list called "Store type" around
> changing how this works to make preallocations avoid duplicate work,
> and it is actively being worked on (as noted in the email to the
> list). [1] The key difference is that the store type feature will
> allow us to avoid unnecessary work that happens all the time for
> preallocations.
>
> [1] http://lists.infradead.org/pipermail/maple-tree/2023-December/003098.html
>
> Thanks,
> Liam
>
> >
> > Signed-off-by: JaeJoon Jung
> > ---
> >  lib/maple_tree.c | 7 ++++++-
> >  1 file changed, 6 insertions(+), 1 deletion(-)
> >
> > diff --git a/lib/maple_tree.c b/lib/maple_tree.c
> > index 2d7d27e6ae3c..8ffabd73619f 100644
> > --- a/lib/maple_tree.c
> > +++ b/lib/maple_tree.c
> > @@ -4176,8 +4176,13 @@ static inline void mas_wr_modify(struct ma_wr_state *wr_mas)
> >  	 * path.
> >  	 */
> >  	new_end = mas_wr_new_end(wr_mas);
> > -	if (new_end >= mt_slots[wr_mas->type])
> > +	if (new_end >= mt_slots[wr_mas->type]) {
> > +		mas->depth = mas_mt_height(mas);
> > +		mas_node_count(mas, 1 + mas->depth * 2);
> > +		if (mas_is_err(mas))
> > +			return;
> >  		goto slow_path;
> > +	}
> >
> >  	/* Attempt to append */
> >  	if (mas_wr_append(wr_mas, new_end))
> > --
> > 2.17.1
> >
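[Editor's note: context for readers following the thread.]

The user-space testing mentioned at the top of the reply presumably uses
the harness under tools/testing/radix-tree/, which builds lib/maple_tree.c
against stub allocators. A minimal sketch of the kind of exercise that
repeatedly grows the tree and can reach the split path (the tree name,
function name, and loop bounds here are illustrative, not taken from the
original test):

    static DEFINE_MTREE(mt);

    static void exercise_store(void)
    {
            unsigned long i;

            /*
             * Disjoint range stores keep adding entries, so the tree
             * keeps growing and nodes must keep being allocated.
             */
            for (i = 0; i < 1000; i++)
                    mtree_store_range(&mt, i * 10, i * 10 + 5,
                                      xa_mk_value(i), GFP_KERNEL);

            mtree_destroy(&mt);
    }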
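The "return to _begin" flow both messages describe is the caller-side
retry through mas_nomem(). A sketch of that pattern, loosely modelled on
mas_store_gfp() in lib/maple_tree.c (details vary across kernel
versions):

    int mas_store_gfp(struct ma_state *mas, void *entry, gfp_t gfp)
    {
            MA_WR_STATE(wr_mas, mas, entry);

    retry:
            /* May fail internally via mas_set_err(-ENOMEM). */
            mas_wr_store_entry(&wr_mas);

            /*
             * mas_nomem() returns true only when the state holds -ENOMEM
             * and the requested nodes were successfully allocated; the
             * whole walk is then redone -- the "_begin" in the flow above.
             */
            if (unlikely(mas_nomem(mas, gfp)))
                    goto retry;

            if (unlikely(mas_is_err(mas)))
                    return xa_err(mas->node);

            return 0;
    }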
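The preallocation path Liam contrasts this with already lets a caller pay
the allocation cost up front so the store itself cannot fail with
-ENOMEM. A hedged sketch of that API as of kernels around the time of
this thread (the signatures have changed before and may change again;
tree, index, last, and entry are placeholder names):

    MA_STATE(mas, &tree, index, last);

    /* Allocate worst-case nodes for storing entry over [index, last]. */
    ret = mas_preallocate(&mas, entry, GFP_KERNEL);
    if (ret)
            return ret;

    /* Typically done under the tree lock; cannot hit -ENOMEM here. */
    mas_store_prealloc(&mas, entry);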