From mboxrd@z Thu Jan 1 00:00:00 1970
From: Nhat Pham <nphamcs@gmail.com>
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, hughd@google.com,
	yosry.ahmed@linux.dev, mhocko@kernel.org, roman.gushchin@linux.dev,
	shakeel.butt@linux.dev, muchun.song@linux.dev, len.brown@intel.com,
	chengming.zhou@linux.dev, kasong@tencent.com, chrisl@kernel.org,
	huang.ying.caritas@gmail.com, ryan.roberts@arm.com,
	viro@zeniv.linux.org.uk, baohua@kernel.org, osalvador@suse.de,
	lorenzo.stoakes@oracle.com, christophe.leroy@csgroup.eu,
	pavel@kernel.org, kernel-team@meta.com, linux-kernel@vger.kernel.org,
	cgroups@vger.kernel.org, linux-pm@vger.kernel.org, peterx@redhat.com
Subject: [RFC PATCH v2 14/18] memcg: swap: only charge physical swap slots
Date: Tue, 29 Apr 2025 16:38:42 -0700
Message-ID: <20250429233848.3093350-15-nphamcs@gmail.com>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20250429233848.3093350-1-nphamcs@gmail.com>
References: <20250429233848.3093350-1-nphamcs@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Now that zswap and the zero-filled swap page optimization no longer take
up any physical swap space, we should not charge towards the swap usage
and limits of the memcg in these cases. We only record the memcg id on
virtual swap slot allocation, and defer physical swap charging (i.e.,
towards memory.swap.current) until the virtual swap slot is backed by an
actual physical swap slot (on zswap store failure fallback or zswap
writeback).
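
To make the record/charge split concrete, here is a minimal,
self-contained userspace sketch of the intended lifecycle. This is an
illustration only: record_swap(), charge_physical_swap() and
uncharge_physical_swap() are hypothetical stand-ins for
mem_cgroup_record_swap(), mem_cgroup_try_charge_swap() and
mem_cgroup_uncharge_swap(), and the two counters model the memcg's
memory.swap.current and memory.swap.max:

#include <stdbool.h>
#include <stdio.h>

static long swap_current;	/* models the memcg's memory.swap.current */
static const long swap_max = 4;	/* models memory.swap.max */

/* vswap_alloc() path: only the owning cgroup id is recorded; nothing
 * is charged against the swap limit yet. */
static void record_swap(void)
{
	/* stand-in for mem_cgroup_record_swap(): takes an id ref only */
}

/* vswap_alloc_swap_slot() path (zswap writeback, or fallback after a
 * zswap store failure): the first point where physical swap is
 * charged. */
static bool charge_physical_swap(long nr_pages)
{
	if (swap_current + nr_pages > swap_max)
		return false;	/* kernel raises MEMCG_SWAP_MAX/_FAIL */
	swap_current += nr_pages;
	return true;
}

/* release_backing() path: physical slots are freed, so uncharge. */
static void uncharge_physical_swap(long nr_pages)
{
	swap_current -= nr_pages;
}

int main(void)
{
	record_swap();			/* folio swapped out into zswap */
	printf("after zswap store: %ld\n", swap_current);	/* 0 */

	if (charge_physical_swap(1))	/* page written back to disk */
		printf("after writeback:   %ld\n", swap_current);	/* 1 */

	uncharge_physical_swap(1);	/* physical slot freed */
	printf("after slot free:   %ld\n", swap_current);	/* 0 */
	return 0;
}

Under the old scheme, the charge would already happen at record_swap()
time, i.e., even for pages that only ever live in zswap or are
zero-filled.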
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
 include/linux/swap.h |  17 ++++++++
 mm/memcontrol.c      | 102 ++++++++++++++++++++++++++++++++++---------
 mm/vswap.c           |  43 ++++++++----------
 3 files changed, 118 insertions(+), 44 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 9c92a982d546..a65b22de4cdd 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -690,6 +690,23 @@ static inline void folio_throttle_swaprate(struct folio *folio, gfp_t gfp)
 
 #if defined(CONFIG_MEMCG) && defined(CONFIG_SWAP)
 void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry);
+
+void __mem_cgroup_record_swap(struct folio *folio, swp_entry_t entry);
+static inline void mem_cgroup_record_swap(struct folio *folio,
+		swp_entry_t entry)
+{
+	if (!mem_cgroup_disabled())
+		__mem_cgroup_record_swap(folio, entry);
+}
+
+void __mem_cgroup_unrecord_swap(swp_entry_t entry, unsigned int nr_pages);
+static inline void mem_cgroup_unrecord_swap(swp_entry_t entry,
+		unsigned int nr_pages)
+{
+	if (!mem_cgroup_disabled())
+		__mem_cgroup_unrecord_swap(entry, nr_pages);
+}
+
 int __mem_cgroup_try_charge_swap(struct folio *folio, swp_entry_t entry);
 static inline int mem_cgroup_try_charge_swap(struct folio *folio,
 		swp_entry_t entry)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 126b2d0e6aaa..c6bee12f2016 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5020,6 +5020,46 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
 	css_put(&memcg->css);
 }
 
+/**
+ * __mem_cgroup_record_swap - record the folio's cgroup for the swap entries.
+ * @folio: folio being swapped out.
+ * @entry: the first swap entry in the range.
+ *
+ * In the virtual swap implementation, we only record the folio's cgroup
+ * for the virtual swap slots on their allocation. We will only charge
+ * physical swap slots towards the cgroup's swap usage, i.e. when physical swap
+ * slots are allocated for zswap writeback or fallback from zswap store
+ * failure.
+ */
+void __mem_cgroup_record_swap(struct folio *folio, swp_entry_t entry)
+{
+	unsigned int nr_pages = folio_nr_pages(folio);
+	struct mem_cgroup *memcg;
+
+	memcg = folio_memcg(folio);
+
+	VM_WARN_ON_ONCE_FOLIO(!memcg, folio);
+	if (!memcg)
+		return;
+
+	memcg = mem_cgroup_id_get_online(memcg);
+	if (nr_pages > 1)
+		mem_cgroup_id_get_many(memcg, nr_pages - 1);
+	swap_cgroup_record(folio, mem_cgroup_id(memcg), entry);
+}
+
+void __mem_cgroup_unrecord_swap(swp_entry_t entry, unsigned int nr_pages)
+{
+	unsigned short id = swap_cgroup_clear(entry, nr_pages);
+	struct mem_cgroup *memcg;
+
+	rcu_read_lock();
+	memcg = mem_cgroup_from_id(id);
+	if (memcg)
+		mem_cgroup_id_put_many(memcg, nr_pages);
+	rcu_read_unlock();
+}
+
 /**
  * __mem_cgroup_try_charge_swap - try charging swap space for a folio
  * @folio: folio being added to swap
@@ -5038,34 +5078,47 @@ int __mem_cgroup_try_charge_swap(struct folio *folio, swp_entry_t entry)
 	if (do_memsw_account())
 		return 0;
 
-	memcg = folio_memcg(folio);
+	if (IS_ENABLED(CONFIG_VIRTUAL_SWAP)) {
+		/*
+		 * In the virtual swap implementation, we already record the cgroup
+		 * on virtual swap allocation. Note that the virtual swap slot holds
+		 * a reference to memcg, so this lookup should be safe.
+		 */
+		rcu_read_lock();
+		memcg = mem_cgroup_from_id(lookup_swap_cgroup_id(entry));
+		rcu_read_unlock();
+	} else {
+		memcg = folio_memcg(folio);
 
-	VM_WARN_ON_ONCE_FOLIO(!memcg, folio);
-	if (!memcg)
-		return 0;
+		VM_WARN_ON_ONCE_FOLIO(!memcg, folio);
+		if (!memcg)
+			return 0;
 
-	if (!entry.val) {
-		memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
-		return 0;
-	}
+		if (!entry.val) {
+			memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
+			return 0;
+		}
 
-	memcg = mem_cgroup_id_get_online(memcg);
+		memcg = mem_cgroup_id_get_online(memcg);
+	}
 
 	if (!mem_cgroup_is_root(memcg) &&
 	    !page_counter_try_charge(&memcg->swap, nr_pages, &counter)) {
 		memcg_memory_event(memcg, MEMCG_SWAP_MAX);
 		memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
-		mem_cgroup_id_put(memcg);
+		if (!IS_ENABLED(CONFIG_VIRTUAL_SWAP))
+			mem_cgroup_id_put(memcg);
 		return -ENOMEM;
 	}
 
-	/* Get references for the tail pages, too */
-	if (nr_pages > 1)
-		mem_cgroup_id_get_many(memcg, nr_pages - 1);
+	if (!IS_ENABLED(CONFIG_VIRTUAL_SWAP)) {
+		/* Get references for the tail pages, too */
+		if (nr_pages > 1)
+			mem_cgroup_id_get_many(memcg, nr_pages - 1);
+		swap_cgroup_record(folio, mem_cgroup_id(memcg), entry);
+	}
 
 	mod_memcg_state(memcg, MEMCG_SWAP, nr_pages);
-	swap_cgroup_record(folio, mem_cgroup_id(memcg), entry);
-
 	return 0;
 }
 
@@ -5079,7 +5132,11 @@ void __mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
 	struct mem_cgroup *memcg;
 	unsigned short id;
 
-	id = swap_cgroup_clear(entry, nr_pages);
+	if (IS_ENABLED(CONFIG_VIRTUAL_SWAP))
+		id = lookup_swap_cgroup_id(entry);
+	else
+		id = swap_cgroup_clear(entry, nr_pages);
+
 	rcu_read_lock();
 	memcg = mem_cgroup_from_id(id);
 	if (memcg) {
@@ -5090,7 +5147,8 @@ void __mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
 			page_counter_uncharge(&memcg->swap, nr_pages);
 		}
 		mod_memcg_state(memcg, MEMCG_SWAP, -nr_pages);
-		mem_cgroup_id_put_many(memcg, nr_pages);
+		if (!IS_ENABLED(CONFIG_VIRTUAL_SWAP))
+			mem_cgroup_id_put_many(memcg, nr_pages);
 	}
 	rcu_read_unlock();
 }
@@ -5099,7 +5157,7 @@ static bool mem_cgroup_may_zswap(struct mem_cgroup *original_memcg);
 
 long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg)
 {
-	long nr_swap_pages, nr_zswap_pages = 0;
+	long nr_swap_pages;
 
 	/*
 	 * If swap is virtualized and zswap is enabled, we can still use zswap even
@@ -5108,10 +5166,14 @@ long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg)
 	if (IS_ENABLED(CONFIG_VIRTUAL_SWAP) && zswap_is_enabled() &&
 	    (mem_cgroup_disabled() || do_memsw_account() ||
 	     mem_cgroup_may_zswap(memcg))) {
-		nr_zswap_pages = PAGE_COUNTER_MAX;
+		/*
+		 * No need to check swap cgroup limits, since zswap is not charged
+		 * towards swap consumption.
+		 */
+		return PAGE_COUNTER_MAX;
 	}
 
-	nr_swap_pages = max_t(long, nr_zswap_pages, get_nr_swap_pages());
+	nr_swap_pages = get_nr_swap_pages();
 	if (mem_cgroup_disabled() || do_memsw_account())
 		return nr_swap_pages;
 	for (; !mem_cgroup_is_root(memcg); memcg = parent_mem_cgroup(memcg))
diff --git a/mm/vswap.c b/mm/vswap.c
index a42d346b7e93..c51ff5c54480 100644
--- a/mm/vswap.c
+++ b/mm/vswap.c
@@ -341,6 +341,7 @@ static inline void release_backing(swp_entry_t entry, int nr)
 			swap_slot_free_nr(slot, nr);
 			swap_slot_put_swap_info(si);
 		}
+		mem_cgroup_uncharge_swap(entry, nr);
 	}
 }
 
@@ -360,7 +361,7 @@ static void vswap_free(swp_entry_t entry)
 	virt_clear_shadow_from_swap_cache(entry);
 	release_backing(entry, 1);
-	mem_cgroup_uncharge_swap(entry, 1);
+	mem_cgroup_unrecord_swap(entry, 1);
 	/* erase forward mapping and release the virtual slot for reallocation */
 	release_vswap_slot(entry.val);
 	kfree_rcu(desc, rcu);
@@ -378,27 +379,13 @@ swp_entry_t folio_alloc_swap(struct folio *folio)
 {
 	swp_entry_t entry;
 	struct swp_desc *desc;
-	int i, nr = folio_nr_pages(folio);
+	int nr = folio_nr_pages(folio);
 
 	entry = vswap_alloc(nr);
 	if (!entry.val)
 		return entry;
 
-	/*
-	 * XXX: for now, we charge towards the memory cgroup's swap limit on virtual
-	 * swap slots allocation. This will be changed soon - we will only charge on
-	 * physical swap slots allocation.
-	 */
-	if (mem_cgroup_try_charge_swap(folio, entry)) {
-		for (i = 0; i < nr; i++) {
-			vswap_free(entry);
-			entry.val++;
-		}
-		atomic_add(nr, &vswap_alloc_reject);
-		entry.val = 0;
-		return entry;
-	}
-
+	mem_cgroup_record_swap(folio, entry);
 	XA_STATE(xas, &vswap_map, entry.val);
 
 	rcu_read_lock();
@@ -440,6 +427,9 @@ swp_slot_t vswap_alloc_swap_slot(struct folio *folio)
 	if (!slot.val)
 		return slot;
 
+	if (mem_cgroup_try_charge_swap(folio, entry))
+		goto free_phys_swap;
+
 	/* establish the vrtual <-> physical swap slots linkages. */
 	for (i = 0; i < nr; i++) {
 		err = xa_insert(&vswap_rmap, slot.val + i,
@@ -448,13 +438,7 @@ swp_slot_t vswap_alloc_swap_slot(struct folio *folio)
 		if (err) {
 			while (--i >= 0)
 				xa_erase(&vswap_rmap, slot.val + i);
-			/*
-			 * We have not updated the backing type of the virtual swap slot.
-			 * Simply free up the physical swap slots here!
-			 */
-			swap_slot_free_nr(slot, nr);
-			slot.val = 0;
-			return slot;
+			goto uncharge;
 		}
 	}
 
@@ -491,6 +475,17 @@ swp_slot_t vswap_alloc_swap_slot(struct folio *folio)
 	}
 	rcu_read_unlock();
 	return slot;
+
+uncharge:
+	mem_cgroup_uncharge_swap(entry, nr);
+free_phys_swap:
+	/*
+	 * We have not updated the backing type of the virtual swap slot.
+	 * Simply free up the physical swap slots here!
+	 */
+	swap_slot_free_nr(slot, nr);
+	slot.val = 0;
+	return slot;
 }
 
 /**
-- 
2.47.1