From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kairui Song <ryncsn@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Chris Li, Barry Song, Hugh Dickins, Yosry Ahmed,
 "Huang, Ying", Baoquan He, Nhat Pham, Johannes Weiner, Kalesh Singh,
 linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 5/7] mm, swap: use percpu cluster as allocation fast path
Date: Sat, 15 Feb 2025 01:57:07 +0800
Message-ID: <20250214175709.76029-6-ryncsn@gmail.com>
X-Mailer: git-send-email 2.48.1
In-Reply-To: <20250214175709.76029-1-ryncsn@gmail.com>
References: <20250214175709.76029-1-ryncsn@gmail.com>
Reply-To: Kairui Song <ryncsn@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Kairui Song

The current allocation workflow first traverses the plist with a global
lock held; after choosing a device, it uses the percpu cluster on that
swap device.
This commit moves the percpu cluster variable out of being tied to
individual swap devices, making it a global percpu variable that is
used directly as the allocation fast path. The global percpu cluster
variable will never point to an HDD device, and allocation on HDD
devices is still globally serialized.

This improves the allocator performance and prepares for removal of
the slot cache in later commits.

There shouldn't be much observable behavior change, except one thing:
this changes how swap device allocation rotation works. Currently,
each allocation rotates the plist, and because of the slot cache
(64 entries), swap devices of the same priority are rotated for every
64 entries consumed. High order allocations are different: they bypass
the slot cache, so the swap device is rotated for every 16K, 32K, or
up to 2M allocation.

The rotation rule was never clearly defined or documented; it has also
been changed several times without being mentioned.

After this commit, once the slot cache is gone in later commits, swap
device rotation will happen for every consumed cluster. Ideally,
non-HDD devices will be rotated once 2M of space has been consumed for
each order, which seems reasonable. HDD devices are rotated for every
allocation regardless of the allocation order, which should be OK and
is trivial.

Signed-off-by: Kairui Song
---
 include/linux/swap.h | 11 ++--
 mm/swapfile.c        | 120 +++++++++++++++++++++++++++----------------
 2 files changed, 79 insertions(+), 52 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 2fe91c293636..a8d84f22357e 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -284,12 +284,10 @@ enum swap_cluster_flags {
 #endif
 
 /*
- * We assign a cluster to each CPU, so each CPU can allocate swap entry from
- * its own cluster and swapout sequentially. The purpose is to optimize swapout
- * throughput.
+ * We keep using same cluster for rotating device so swapout will be sequential.
+ * The purpose is to optimize swapout throughput on rotating device.
  */
-struct percpu_cluster {
-	local_lock_t lock; /* Protect the percpu_cluster above */
+struct swap_sequential_cluster {
 	unsigned int next[SWAP_NR_ORDERS]; /* Likely next allocation offset */
 };
 
@@ -315,8 +313,7 @@ struct swap_info_struct {
 	atomic_long_t frag_cluster_nr[SWAP_NR_ORDERS];
 	unsigned int pages;		/* total of usable pages of swap */
 	atomic_long_t inuse_pages;	/* number of those currently in use */
-	struct percpu_cluster __percpu *percpu_cluster; /* per cpu's swap location */
-	struct percpu_cluster *global_cluster; /* Use one global cluster for rotating device */
+	struct swap_sequential_cluster *global_cluster; /* Use one global cluster for rotating device */
 	spinlock_t global_cluster_lock;	/* Serialize usage of global cluster */
 	struct rb_root swap_extent_root;/* root of the swap extent rbtree */
 	struct block_device *bdev;	/* swap device or bdev of swap file */
diff --git a/mm/swapfile.c b/mm/swapfile.c
index ae3bd0a862fc..791cd7ed5bdf 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -116,6 +116,18 @@ static atomic_t proc_poll_event = ATOMIC_INIT(0);
 
 atomic_t nr_rotate_swap = ATOMIC_INIT(0);
 
+struct percpu_swap_cluster {
+	struct swap_info_struct *si;
+	unsigned long offset[SWAP_NR_ORDERS];
+	local_lock_t lock;
+};
+
+static DEFINE_PER_CPU(struct percpu_swap_cluster, percpu_swap_cluster) = {
+	.si = NULL,
+	.offset = { SWAP_ENTRY_INVALID },
+	.lock = INIT_LOCAL_LOCK(),
+};
+
 static struct swap_info_struct *swap_type_to_swap_info(int type)
 {
 	if (type >= MAX_SWAPFILES)
@@ -548,7 +560,7 @@ static bool swap_do_scheduled_discard(struct swap_info_struct *si)
 		ci = list_first_entry(&si->discard_clusters, struct swap_cluster_info, list);
 		/*
 		 * Delete the cluster from list to prepare for discard, but keep
-		 * the CLUSTER_FLAG_DISCARD flag, there could be percpu_cluster
+		 * the CLUSTER_FLAG_DISCARD flag, percpu_swap_cluster could be
 		 * pointing to it, or ran into by relocate_cluster.
 		 */
 		list_del(&ci->list);
@@ -815,10 +827,12 @@ static unsigned int alloc_swap_scan_cluster(struct swap_info_struct *si,
 out:
 	relocate_cluster(si, ci);
 	unlock_cluster(ci);
-	if (si->flags & SWP_SOLIDSTATE)
-		__this_cpu_write(si->percpu_cluster->next[order], next);
-	else
+	if (si->flags & SWP_SOLIDSTATE) {
+		__this_cpu_write(percpu_swap_cluster.si, si);
+		__this_cpu_write(percpu_swap_cluster.offset[order], next);
+	} else {
 		si->global_cluster->next[order] = next;
+	}
 	return found;
 }
 
@@ -869,9 +883,8 @@ static void swap_reclaim_work(struct work_struct *work)
 }
 
 /*
- * Try to get swap entries with specified order from current cpu's swap entry
- * pool (a cluster). This might involve allocating a new cluster for current CPU
- * too.
+ * Try to allocate swap entries with specified order and try set a new
+ * cluster for current CPU too.
  */
 static unsigned long cluster_alloc_swap_entry(struct swap_info_struct *si, int order,
 					      unsigned char usage)
@@ -879,18 +892,12 @@ static unsigned long cluster_alloc_swap_entry(struct swap_info_struct *si, int o
 	struct swap_cluster_info *ci;
 	unsigned int offset, found = 0;
 
-	if (si->flags & SWP_SOLIDSTATE) {
-		/* Fast path using per CPU cluster */
-		local_lock(&si->percpu_cluster->lock);
-		offset = __this_cpu_read(si->percpu_cluster->next[order]);
-	} else {
+	if (!(si->flags & SWP_SOLIDSTATE)) {
 		/* Serialize HDD SWAP allocation for each device. */
 		spin_lock(&si->global_cluster_lock);
 		offset = si->global_cluster->next[order];
-	}
-
-	if (offset) {
 		ci = lock_cluster(si, offset);
+		/* Cluster could have been used by another order */
 		if (cluster_is_usable(ci, order)) {
 			if (cluster_is_empty(ci))
@@ -980,9 +987,7 @@ static unsigned long cluster_alloc_swap_entry(struct swap_info_struct *si, int o
 		}
 	}
 done:
-	if (si->flags & SWP_SOLIDSTATE)
-		local_unlock(&si->percpu_cluster->lock);
-	else
+	if (!(si->flags & SWP_SOLIDSTATE))
 		spin_unlock(&si->global_cluster_lock);
 	return found;
 }
@@ -1203,6 +1208,41 @@ static bool get_swap_device_info(struct swap_info_struct *si)
 	return true;
 }
 
+/*
+ * Fast path try to get swap entries with specified order from current
+ * CPU's swap entry pool (a cluster).
+ */
+static int swap_alloc_fast(swp_entry_t entries[],
+			   unsigned char usage,
+			   int order, int n_goal)
+{
+	struct swap_cluster_info *ci;
+	struct swap_info_struct *si;
+	unsigned int offset, found;
+	int n_ret = 0;
+
+	n_goal = min(n_goal, SWAP_BATCH);
+
+	si = __this_cpu_read(percpu_swap_cluster.si);
+	offset = __this_cpu_read(percpu_swap_cluster.offset[order]);
+	if (!si || !offset || !get_swap_device_info(si))
+		return 0;
+
+	while (offset) {
+		ci = lock_cluster(si, offset);
+		found = alloc_swap_scan_cluster(si, ci, offset, order, usage);
+		if (!found)
+			break;
+		entries[n_ret++] = swp_entry(si->type, found);
+		if (n_ret == n_goal)
+			break;
+		offset = __this_cpu_read(percpu_swap_cluster.offset[order]);
+	}
+
+	put_swap_device(si);
+	return n_ret;
+}
+
 int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_order)
 {
 	int order = swap_entry_order(entry_order);
@@ -1211,19 +1251,28 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_order)
 	int n_ret = 0;
 	int node;
 
+	/* Fast path using percpu cluster */
+	local_lock(&percpu_swap_cluster.lock);
+	n_ret = swap_alloc_fast(swp_entries,
+				SWAP_HAS_CACHE,
+				order, n_goal);
+	if (n_ret == n_goal)
+		goto out;
+
+	n_goal = min_t(int, n_goal - n_ret, SWAP_BATCH);
+	/* Rotate the device and switch to a new cluster */
 	spin_lock(&swap_avail_lock);
 start_over:
 	node = numa_node_id();
 	plist_for_each_entry_safe(si, next, &swap_avail_heads[node], avail_lists[node]) {
-		/* requeue si to after same-priority siblings */
 		plist_requeue(&si->avail_lists[node], &swap_avail_heads[node]);
 		spin_unlock(&swap_avail_lock);
 		if (get_swap_device_info(si)) {
-			n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
-					n_goal, swp_entries, order);
+			n_ret += scan_swap_map_slots(si, SWAP_HAS_CACHE, n_goal,
+					swp_entries + n_ret, order);
 			put_swap_device(si);
 			if (n_ret || size > 1)
-				goto check_out;
+				goto out;
 		}
 
 		spin_lock(&swap_avail_lock);
@@ -1241,12 +1290,10 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_order)
 			if (plist_node_empty(&next->avail_lists[node]))
 				goto start_over;
 	}
-
 	spin_unlock(&swap_avail_lock);
-
-check_out:
+out:
+	local_unlock(&percpu_swap_cluster.lock);
 	atomic_long_sub(n_ret * size, &nr_swap_pages);
-
 	return n_ret;
 }
 
@@ -2733,8 +2780,6 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 	arch_swap_invalidate_area(p->type);
 	zswap_swapoff(p->type);
 	mutex_unlock(&swapon_mutex);
-	free_percpu(p->percpu_cluster);
-	p->percpu_cluster = NULL;
 	kfree(p->global_cluster);
 	p->global_cluster = NULL;
 	vfree(swap_map);
@@ -3133,7 +3178,7 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
 	unsigned long nr_clusters = DIV_ROUND_UP(maxpages, SWAPFILE_CLUSTER);
 	struct swap_cluster_info *cluster_info;
 	unsigned long i, j, idx;
-	int cpu, err = -ENOMEM;
+	int err = -ENOMEM;
 
 	cluster_info = kvcalloc(nr_clusters, sizeof(*cluster_info), GFP_KERNEL);
 	if (!cluster_info)
@@ -3142,20 +3187,7 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
 	for (i = 0; i < nr_clusters; i++)
 		spin_lock_init(&cluster_info[i].lock);
 
-	if (si->flags & SWP_SOLIDSTATE) {
-		si->percpu_cluster = alloc_percpu(struct percpu_cluster);
-		if (!si->percpu_cluster)
-			goto err_free;
-
-		for_each_possible_cpu(cpu) {
-			struct percpu_cluster *cluster;
-
-			cluster = per_cpu_ptr(si->percpu_cluster, cpu);
-			for (i = 0; i < SWAP_NR_ORDERS; i++)
-				cluster->next[i] = SWAP_ENTRY_INVALID;
-			local_lock_init(&cluster->lock);
-		}
-	} else {
+	if (!(si->flags & SWP_SOLIDSTATE)) {
 		si->global_cluster = kmalloc(sizeof(*si->global_cluster),
 					     GFP_KERNEL);
 		if (!si->global_cluster)
@@ -3432,8 +3464,6 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 bad_swap_unlock_inode:
 	inode_unlock(inode);
 bad_swap:
-	free_percpu(si->percpu_cluster);
-	si->percpu_cluster = NULL;
 	kfree(si->global_cluster);
 	si->global_cluster = NULL;
 	inode = NULL;
-- 
2.48.1