From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <18dd8c07-5bb5-403b-8fda-b927c7761bd0@linux.alibaba.com>
Date: Tue, 10 Sep 2024 10:53:11 +0800
Subject: Re: [PATCH] mm: replace xa_get_order with xas_get_order where appropriate
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: Shakeel Butt, Andrew Morton
Cc: Matthew Wilcox, Hugh Dickins, Nhat Pham, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Meta kernel team, linux-fsdevel@vger.kernel.org
In-Reply-To: <20240906230512.124643-1-shakeel.butt@linux.dev>
References: <20240906230512.124643-1-shakeel.butt@linux.dev>
Content-Type: text/plain; charset=UTF-8; format=flowed

On 2024/9/7 07:05, Shakeel Butt wrote:
> The tracing of invalidation and truncation operations on large files
> showed that xa_get_order() is among the top functions where kernel
> spends a lot
> of CPUs. xa_get_order() needs to traverse the tree to reach
> the right node for a given index and then extract the order of the
> entry. However it seems like at many places it is being called within an
> already happening tree traversal where there is no need to do another
> traversal. Just use xas_get_order() at those places.
>
> Signed-off-by: Shakeel Butt

LGTM. Thanks.

Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>

> ---
>  mm/filemap.c | 6 +++---
>  mm/shmem.c   | 2 +-
>  2 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 070dee9791a9..7e3412941a8d 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -2112,7 +2112,7 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
>  			VM_BUG_ON_FOLIO(!folio_contains(folio, xas.xa_index),
>  					folio);
>  		} else {
> -			nr = 1 << xa_get_order(&mapping->i_pages, xas.xa_index);
> +			nr = 1 << xas_get_order(&xas);
>  			base = xas.xa_index & ~(nr - 1);
>  			/* Omit order>0 value which begins before the start */
>  			if (base < *start)
> @@ -3001,7 +3001,7 @@ static inline loff_t folio_seek_hole_data(struct xa_state *xas,
>  static inline size_t seek_folio_size(struct xa_state *xas, struct folio *folio)
>  {
>  	if (xa_is_value(folio))
> -		return PAGE_SIZE << xa_get_order(xas->xa, xas->xa_index);
> +		return PAGE_SIZE << xas_get_order(xas);
>  	return folio_size(folio);
>  }
>
> @@ -4297,7 +4297,7 @@ static void filemap_cachestat(struct address_space *mapping,
>  		if (xas_retry(&xas, folio))
>  			continue;
>
> -		order = xa_get_order(xas.xa, xas.xa_index);
> +		order = xas_get_order(&xas);
>  		nr_pages = 1 << order;
>  		folio_first_index = round_down(xas.xa_index, 1 << order);
>  		folio_last_index = folio_first_index + nr_pages - 1;
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 866d46d0c43d..4002c4f47d4d 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -893,7 +893,7 @@ unsigned long shmem_partial_swap_usage(struct address_space *mapping,
>  		if (xas_retry(&xas, page))
>  			continue;
>  		if (xa_is_value(page))
> -			swapped += 1 << xa_get_order(xas.xa, xas.xa_index);
> +			swapped += 1 << xas_get_order(&xas);
>  		if (xas.xa_index == max)
>  			break;
>  		if (need_resched()) {