Subject: Re: [PATCH 2/4] mm/shmem, swap: avoid redundant Xarray lookup during swapin
To: Kairui Song <ryncsn@gmail.com>, linux-mm@kvack.org
Cc: Andrew Morton, Hugh Dickins, Baolin Wang, Matthew Wilcox, Chris Li, Nhat Pham, Baoquan He, Barry Song, linux-kernel@vger.kernel.org
References: <20250617183503.10527-1-ryncsn@gmail.com> <20250617183503.10527-3-ryncsn@gmail.com>
From: Kemeng Shi <shikemeng@huaweicloud.com>
Message-ID: <17bdc50c-1b2c-bb3b-f828-bd9ce93ea086@huaweicloud.com>
Date: Wed, 18 Jun 2025 10:48:47 +0800
In-Reply-To: <20250617183503.10527-3-ryncsn@gmail.com>
on 6/18/2025 2:35 AM, Kairui Song wrote:
> From: Kairui Song <ryncsn@gmail.com>
>
> Currently shmem calls xa_get_order to get the swap radix entry order,
> requiring a full tree walk. This can be easily combined with the swap
> entry value checking (shmem_confirm_swap) to avoid the duplicated
> lookup, which should improve the performance.
>
> Signed-off-by: Kairui Song <ryncsn@gmail.com>
> ---
>  mm/shmem.c | 33 ++++++++++++++++++++++++---------
>  1 file changed, 24 insertions(+), 9 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 4e7ef343a29b..0ad49e57f736 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -505,15 +505,27 @@ static int shmem_replace_entry(struct address_space *mapping,
>
>  /*
>   * Sometimes, before we decide whether to proceed or to fail, we must check
> - * that an entry was not already brought back from swap by a racing thread.
> + * that an entry was not already brought back or split by a racing thread.
>   *
>   * Checking folio is not enough: by the time a swapcache folio is locked, it
>   * might be reused, and again be swapcache, using the same swap as before.
> + * Returns the swap entry's order if it still presents, else returns -1.
>   */
> -static bool shmem_confirm_swap(struct address_space *mapping,
> -			       pgoff_t index, swp_entry_t swap)
> +static int shmem_swap_check_entry(struct address_space *mapping, pgoff_t index,
> +				  swp_entry_t swap)
>  {
> -	return xa_load(&mapping->i_pages, index) == swp_to_radix_entry(swap);
> +	XA_STATE(xas, &mapping->i_pages, index);
> +	int ret = -1;
> +	void *entry;
> +
> +	rcu_read_lock();
> +	do {
> +		entry = xas_load(&xas);
> +		if (entry == swp_to_radix_entry(swap))
> +			ret = xas_get_order(&xas);
> +	} while (xas_retry(&xas, entry));
> +	rcu_read_unlock();
> +	return ret;
>  }
>
>  /*
> @@ -2256,16 +2268,20 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>  		return -EIO;
>
>  	si = get_swap_device(swap);
> -	if (!si) {
> -		if (!shmem_confirm_swap(mapping, index, swap))
> +	order = shmem_swap_check_entry(mapping, index, swap);
> +	if (unlikely(!si)) {
> +		if (order < 0)
>  			return -EEXIST;
>  		else
>  			return -EINVAL;
>  	}
> +	if (unlikely(order < 0)) {
> +		put_swap_device(si);
> +		return -EEXIST;
> +	}

Can we re-arrange the code block as follows:

	order = shmem_swap_check_entry(mapping, index, swap);
	if (unlikely(order < 0))
		return -EEXIST;

	si = get_swap_device(swap);
	if (!si) {
		return -EINVAL;
	...

>
>  	/* Look it up and read it in.. */
>  	folio = swap_cache_get_folio(swap, NULL, 0);
> -	order = xa_get_order(&mapping->i_pages, index);
>  	if (!folio) {
>  		int nr_pages = 1 << order;
>  		bool fallback_order0 = false;
> @@ -2415,7 +2431,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>  	*foliop = folio;
>  	return 0;
>  failed:
> -	if (!shmem_confirm_swap(mapping, index, swap))
> +	if (shmem_swap_check_entry(mapping, index, swap) < 0)
>  		error = -EEXIST;
>  	if (error == -EIO)
>  		shmem_set_folio_swapin_error(inode, index, folio, swap,
> @@ -2428,7 +2444,6 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>  		folio_put(folio);
>  	}
>  	put_swap_device(si);
> -
>  	return error;
>  }
>