From mboxrd@z Thu Jan  1 00:00:00 1970
From: Chengming Zhou
Date: Tue, 12 Mar 2024 13:10:47 +0800
Subject: Re: [PATCH v5] zswap: replace RB tree with xarray
To: Chris Li , Andrew Morton
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Yosry Ahmed , Nhat Pham , Johannes Weiner , "Matthew Wilcox (Oracle)" , Barry Song
Message-ID: <85d12179-41ef-4fe6-8c55-5f04b837b87e@bytedance.com>
In-Reply-To: <20240311-zswap-xarray-v5-1-a3031feb9c85@kernel.org>
References: <20240311-zswap-xarray-v5-1-a3031feb9c85@kernel.org>
Content-Type: text/plain; charset=UTF-8

On 2024/3/12 06:26, Chris Li wrote:
> A very deep RB tree requires rebalancing at times, which
> contributes to zswap fault latencies. The xarray does not
> need to perform tree rebalancing, so replacing the RB tree
> with an xarray gives a small performance gain.
>
> One small difference is that an xarray insert might fail with
> ENOMEM, while an RB tree insert does not allocate additional
> memory.
>
> The zswap_entry size shrinks a bit by removing the RB node,
> which has two pointers and a color field. The xarray stores
> the pointer in the xarray tree rather than in the zswap_entry,
> so every entry has one pointer from the xarray tree. Overall,
> switching to the xarray should save some memory if the swap
> entries are densely packed.
>
> Notice that zswap_rb_search and zswap_rb_insert are always
> followed by zswap_rb_erase, so use xa_erase and xa_store
> directly. That saves one tree lookup as well.
>
> Remove zswap_invalidate_entry, since there is no need to call
> zswap_rb_erase any more; use zswap_entry_free instead.
>
> The "struct zswap_tree" has been replaced by "struct xarray",
> and the tree spin lock is replaced by the xarray's internal lock.
>
> Kernel build test run 10 times for each version, averages:
> (memory.max=2GB, zswap shrinker and writeback enabled,
> one 50GB swapfile, 24 HT cores, 32 jobs)
>
>          mm-9a0181a3710eb    xarray v5
> user     3532.385            3535.658
> sys      536.231             530.083
> real     200.431             200.176
>
> ---
>
>
> Signed-off-by: Chris Li

Looks good to me.

Reviewed-by: Chengming Zhou

Thanks!

> ---
> Changes in v5:
> - Remove zswap_xa_insert(), call xa_store and xa_erase directly.
> - Remove zswap_reject_xarray_fail.
> - Link to v4: https://lore.kernel.org/r/20240304-zswap-xarray-v4-1-c4b45670cc30@kernel.org
>
> Changes in v4:
> - Remove zswap_xa_search_and_erase, use xa_erase directly.
> - Move charge of objcg after zswap_xa_insert.
> - Avoid erasing the old entry on the insert-failure error path.
> - Remove the not-needed swap_zswap_tree change.
> - Link to v3: https://lore.kernel.org/r/20240302-zswap-xarray-v3-1-5900252f2302@kernel.org
>
> Changes in v3:
> - Use xa_cmpxchg instead of zswap_xa_search_and_delete in zswap_writeback_entry.
> - Use xa_store in zswap_xa_insert directly. Reduce the scope of the spinlock.
> - Fix xa_store error handling for the same-value filled page case.
> - Link to v2: https://lore.kernel.org/r/20240229-zswap-xarray-v2-1-e50284dfcdb1@kernel.org
>
> Changes in v2:
> - Replace struct zswap_tree with struct xarray.
> - Remove the zswap_tree spinlock, use the xarray lock instead.
> - Fold zswap_rb_erase() into zswap_xa_search_and_delete() and zswap_xa_insert().
> - Delete zswap_invalidate_entry(), use zswap_entry_free() instead.
> - Link to v1: https://lore.kernel.org/r/20240117-zswap-xarray-v1-0-6daa86c08fae@kernel.org
> ---
>  mm/zswap.c | 166 +++++++++++++++----------------------------------------------
>  1 file changed, 41 insertions(+), 125 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 011e068eb355..4c3139583a6c 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -20,7 +20,6 @@
>  #include
>  #include
>  #include
> -#include
>  #include
>  #include
>  #include
> @@ -196,7 +195,6 @@ static struct {
>   * This structure contains the metadata for tracking a single compressed
>   * page within zswap.
>   *
> - * rbnode - links the entry into red-black tree for the appropriate swap type
>   * swpentry - associated swap entry, the offset indexes into the red-black tree
>   * length - the length in bytes of the compressed page data. Needed during
>   *          decompression. For a same value filled page length is 0, and both
> @@ -208,7 +206,6 @@ static struct {
>   * lru - handle to the pool's lru used to evict pages.
>   */
>  struct zswap_entry {
> -        struct rb_node rbnode;
>          swp_entry_t swpentry;
>          unsigned int length;
>          struct zswap_pool *pool;
> @@ -220,12 +217,7 @@ struct zswap_entry {
>          struct list_head lru;
>  };
>
> -struct zswap_tree {
> -        struct rb_root rbroot;
> -        spinlock_t lock;
> -};
> -
> -static struct zswap_tree *zswap_trees[MAX_SWAPFILES];
> +static struct xarray *zswap_trees[MAX_SWAPFILES];
>  static unsigned int nr_zswap_trees[MAX_SWAPFILES];
>
>  /* RCU-protected iteration */
> @@ -253,7 +245,7 @@ static bool zswap_has_pool;
>  * helpers and fwd declarations
>  **********************************/
>
> -static inline struct zswap_tree *swap_zswap_tree(swp_entry_t swp)
> +static inline struct xarray *swap_zswap_tree(swp_entry_t swp)
>  {
>          return &zswap_trees[swp_type(swp)][swp_offset(swp)
>                  >> SWAP_ADDRESS_SPACE_SHIFT];
> @@ -804,63 +796,6 @@ void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
>          spin_unlock(&zswap.shrink_lock);
>  }
>
> -/*********************************
> -* rbtree functions
> -**********************************/
> -static struct zswap_entry *zswap_rb_search(struct rb_root *root, pgoff_t offset)
> -{
> -        struct rb_node *node = root->rb_node;
> -        struct zswap_entry *entry;
> -        pgoff_t entry_offset;
> -
> -        while (node) {
> -                entry = rb_entry(node, struct zswap_entry, rbnode);
> -                entry_offset = swp_offset(entry->swpentry);
> -                if (entry_offset > offset)
> -                        node = node->rb_left;
> -                else if (entry_offset < offset)
> -                        node = node->rb_right;
> -                else
> -                        return entry;
> -        }
> -        return NULL;
> -}
> -
> -/*
> - * In the case that a entry with the same offset is found, a pointer to
> - * the existing entry is stored in dupentry and the function returns -EEXIST
> - */
> -static int zswap_rb_insert(struct rb_root *root, struct zswap_entry *entry,
> -                        struct zswap_entry **dupentry)
> -{
> -        struct rb_node **link = &root->rb_node, *parent = NULL;
> -        struct zswap_entry *myentry;
> -        pgoff_t myentry_offset, entry_offset = swp_offset(entry->swpentry);
> -
> -        while (*link) {
> -                parent = *link;
> -                myentry = rb_entry(parent, struct zswap_entry, rbnode);
> -                myentry_offset = swp_offset(myentry->swpentry);
> -                if (myentry_offset > entry_offset)
> -                        link = &(*link)->rb_left;
> -                else if (myentry_offset < entry_offset)
> -                        link = &(*link)->rb_right;
> -                else {
> -                        *dupentry = myentry;
> -                        return -EEXIST;
> -                }
> -        }
> -        rb_link_node(&entry->rbnode, parent, link);
> -        rb_insert_color(&entry->rbnode, root);
> -        return 0;
> -}
> -
> -static void zswap_rb_erase(struct rb_root *root, struct zswap_entry *entry)
> -{
> -        rb_erase(&entry->rbnode, root);
> -        RB_CLEAR_NODE(&entry->rbnode);
> -}
> -
>  /*********************************
>  * zswap entry functions
>  **********************************/
> @@ -872,7 +807,6 @@ static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp, int nid)
>          entry = kmem_cache_alloc_node(zswap_entry_cache, gfp, nid);
>          if (!entry)
>                  return NULL;
> -        RB_CLEAR_NODE(&entry->rbnode);
>          return entry;
>  }
>
> @@ -914,17 +848,6 @@ static void zswap_entry_free(struct zswap_entry *entry)
>          zswap_update_total_size();
>  }
>
> -/*
> - * The caller hold the tree lock and search the entry from the tree,
> - * so it must be on the tree, remove it from the tree and free it.
> - */
> -static void zswap_invalidate_entry(struct zswap_tree *tree,
> -                                   struct zswap_entry *entry)
> -{
> -        zswap_rb_erase(&tree->rbroot, entry);
> -        zswap_entry_free(entry);
> -}
> -
>  /*********************************
>  * compressed storage functions
>  **********************************/
> @@ -1113,7 +1036,8 @@ static void zswap_decompress(struct zswap_entry *entry, struct page *page)
>  static int zswap_writeback_entry(struct zswap_entry *entry,
>                                   swp_entry_t swpentry)
>  {
> -        struct zswap_tree *tree;
> +        struct xarray *tree;
> +        pgoff_t offset = swp_offset(swpentry);
>          struct folio *folio;
>          struct mempolicy *mpol;
>          bool folio_was_allocated;
> @@ -1150,19 +1074,13 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
>           * be dereferenced.
>           */
>          tree = swap_zswap_tree(swpentry);
> -        spin_lock(&tree->lock);
> -        if (zswap_rb_search(&tree->rbroot, swp_offset(swpentry)) != entry) {
> -                spin_unlock(&tree->lock);
> +        if (entry != xa_cmpxchg(tree, offset, entry, NULL, GFP_KERNEL)) {
>                  delete_from_swap_cache(folio);
>                  folio_unlock(folio);
>                  folio_put(folio);
>                  return -ENOMEM;
>          }
>
> -        /* Safe to deref entry after the entry is verified above. */
> -        zswap_rb_erase(&tree->rbroot, entry);
> -        spin_unlock(&tree->lock);
> -
>          zswap_decompress(entry, &folio->page);
>
>          count_vm_event(ZSWPWB);
> @@ -1471,8 +1389,8 @@ bool zswap_store(struct folio *folio)
>  {
>          swp_entry_t swp = folio->swap;
>          pgoff_t offset = swp_offset(swp);
> -        struct zswap_tree *tree = swap_zswap_tree(swp);
> -        struct zswap_entry *entry, *dupentry;
> +        struct xarray *tree = swap_zswap_tree(swp);
> +        struct zswap_entry *entry, *old;
>          struct obj_cgroup *objcg = NULL;
>          struct mem_cgroup *memcg = NULL;
>
> @@ -1555,28 +1473,32 @@ bool zswap_store(struct folio *folio)
>  insert_entry:
>          entry->swpentry = swp;
>          entry->objcg = objcg;
> -        if (objcg) {
> -                obj_cgroup_charge_zswap(objcg, entry->length);
> -                /* Account before objcg ref is moved to tree */
> -                count_objcg_event(objcg, ZSWPOUT);
> -        }
>
> -        /* map */
> -        spin_lock(&tree->lock);
>          /*
>           * The folio may have been dirtied again, invalidate the
>           * possibly stale entry before inserting the new entry.
>           */
> -        if (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST) {
> -                zswap_invalidate_entry(tree, dupentry);
> -                WARN_ON(zswap_rb_insert(&tree->rbroot, entry, &dupentry));
> +        old = xa_store(tree, offset, entry, GFP_KERNEL);
> +
> +        if (xa_is_err(old)) {
> +                if (xa_err(old) == -ENOMEM)
> +                        zswap_reject_alloc_fail++;
> +                goto store_failed;
> +        }
> +        if (old)
> +                zswap_entry_free(old);
> +
> +        if (objcg) {
> +                obj_cgroup_charge_zswap(objcg, entry->length);
> +                /* Account before objcg ref is moved to tree */
> +                count_objcg_event(objcg, ZSWPOUT);
>          }
> +
>          if (entry->length) {
>                  INIT_LIST_HEAD(&entry->lru);
>                  zswap_lru_add(&zswap.list_lru, entry);
>                  atomic_inc(&zswap.nr_stored);
>          }
> -        spin_unlock(&tree->lock);
>
>          /* update stats */
>          atomic_inc(&zswap_stored_pages);
> @@ -1585,6 +1507,12 @@ bool zswap_store(struct folio *folio)
>
>          return true;
>
> +store_failed:
> +        if (!entry->length) {
> +                atomic_dec(&zswap_same_filled_pages);
> +                goto freepage;
> +        }
> +        zpool_free(zswap_find_zpool(entry), entry->handle);
>  put_pool:
>          zswap_pool_put(entry->pool);
>  freepage:
> @@ -1598,11 +1526,9 @@ bool zswap_store(struct folio *folio)
>           * possibly stale entry which was previously stored at this offset.
>           * Otherwise, writeback could overwrite the new data in the swapfile.
>           */
> -        spin_lock(&tree->lock);
> -        entry = zswap_rb_search(&tree->rbroot, offset);
> +        entry = xa_erase(tree, offset);
>          if (entry)
> -                zswap_invalidate_entry(tree, entry);
> -        spin_unlock(&tree->lock);
> +                zswap_entry_free(entry);
>          return false;
>
>  shrink:
> @@ -1615,20 +1541,15 @@ bool zswap_load(struct folio *folio)
>          swp_entry_t swp = folio->swap;
>          pgoff_t offset = swp_offset(swp);
>          struct page *page = &folio->page;
> -        struct zswap_tree *tree = swap_zswap_tree(swp);
> +        struct xarray *tree = swap_zswap_tree(swp);
>          struct zswap_entry *entry;
>          u8 *dst;
>
>          VM_WARN_ON_ONCE(!folio_test_locked(folio));
>
> -        spin_lock(&tree->lock);
> -        entry = zswap_rb_search(&tree->rbroot, offset);
> -        if (!entry) {
> -                spin_unlock(&tree->lock);
> +        entry = xa_erase(tree, offset);
> +        if (!entry)
>                  return false;
> -        }
> -        zswap_rb_erase(&tree->rbroot, entry);
> -        spin_unlock(&tree->lock);
>
>          if (entry->length)
>                  zswap_decompress(entry, page);
> @@ -1652,19 +1573,17 @@ bool zswap_load(struct folio *folio)
>  void zswap_invalidate(swp_entry_t swp)
>  {
>          pgoff_t offset = swp_offset(swp);
> -        struct zswap_tree *tree = swap_zswap_tree(swp);
> +        struct xarray *tree = swap_zswap_tree(swp);
>          struct zswap_entry *entry;
>
> -        spin_lock(&tree->lock);
> -        entry = zswap_rb_search(&tree->rbroot, offset);
> +        entry = xa_erase(tree, offset);
>          if (entry)
> -                zswap_invalidate_entry(tree, entry);
> -        spin_unlock(&tree->lock);
> +                zswap_entry_free(entry);
>  }
>
>  int zswap_swapon(int type, unsigned long nr_pages)
>  {
> -        struct zswap_tree *trees, *tree;
> +        struct xarray *trees, *tree;
>          unsigned int nr, i;
>
>          nr = DIV_ROUND_UP(nr_pages, SWAP_ADDRESS_SPACE_PAGES);
> @@ -1674,11 +1593,8 @@ int zswap_swapon(int type, unsigned long nr_pages)
>                  return -ENOMEM;
>          }
>
> -        for (i = 0; i < nr; i++) {
> -                tree = trees + i;
> -                tree->rbroot = RB_ROOT;
> -                spin_lock_init(&tree->lock);
> -        }
> +        for (i = 0; i < nr; i++)
> +                xa_init(trees + i);
>
>          nr_zswap_trees[type] = nr;
>          zswap_trees[type] = trees;
> @@ -1687,7 +1603,7 @@ int zswap_swapon(int type, unsigned long nr_pages)
>
>  void zswap_swapoff(int type)
>  {
> -        struct zswap_tree *trees = zswap_trees[type];
> +        struct xarray *trees = zswap_trees[type];
>          unsigned int i;
>
>          if (!trees)
> @@ -1695,7 +1611,7 @@ void zswap_swapoff(int type)
>
>          /* try_to_unuse() invalidated all the entries already */
>          for (i = 0; i < nr_zswap_trees[type]; i++)
> -                WARN_ON_ONCE(!RB_EMPTY_ROOT(&trees[i].rbroot));
> +                WARN_ON_ONCE(!xa_empty(trees + i));
>
>          kvfree(trees);
>          nr_zswap_trees[type] = 0;
>
>
> ---
> base-commit: 9a0181a3710eba1f5c6d19eadcca888be3d54e4f
> change-id: 20240104-zswap-xarray-716260e541e3
>
> Best regards,
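
For readers less familiar with <linux/xarray.h>, the calling pattern the
patch relies on can be sketched as below. This is only an illustrative,
untested sketch, not code from the patch: the demo_* names are made up for
the example, while xa_store(), xa_erase(), xa_cmpxchg(), xa_is_err() and
xa_err() are the real xarray interfaces used by zswap_store(),
zswap_load()/zswap_invalidate() and zswap_writeback_entry() respectively.

/*
 * Illustrative sketch only (hypothetical demo_* names); shows the xarray
 * calling pattern that replaces the RB tree + per-tree spinlock.
 */
#include <linux/xarray.h>
#include <linux/slab.h>

static DEFINE_XARRAY(demo_tree);

static int demo_store(unsigned long index, void *new_entry)
{
        /*
         * xa_store() takes the xa_lock internally and returns the entry it
         * replaced, or an xa_err()-encoded pointer on allocation failure.
         */
        void *old = xa_store(&demo_tree, index, new_entry, GFP_KERNEL);

        if (xa_is_err(old))
                return xa_err(old);     /* e.g. -ENOMEM, unlike RB insert */
        if (old)
                kfree(old);             /* a stale entry was replaced */
        return 0;
}

static void *demo_load_and_erase(unsigned long index)
{
        /*
         * One call does lookup + removal, where the RB tree code needed
         * zswap_rb_search() followed by zswap_rb_erase() under the lock.
         */
        return xa_erase(&demo_tree, index);
}

static bool demo_erase_if_unchanged(unsigned long index, void *expected)
{
        /*
         * Writeback-style check: remove the slot only if it still holds the
         * entry we verified, mirroring the xa_cmpxchg() use in the patch.
         */
        return xa_cmpxchg(&demo_tree, index, expected, NULL,
                          GFP_KERNEL) == expected;
}

The last helper is why the writeback path no longer needs an explicit
search-then-erase under a tree lock: the compare-and-exchange both verifies
that the entry is still current and removes it in one atomic step.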