From mboxrd@z Thu Jan  1 00:00:00 1970
From: Chengming Zhou <chengming.zhou@linux.dev>
Date: Tue, 20 Feb 2024 11:36:59 +0000
Subject: [PATCH RESEND 2/3] mm/zsmalloc: remove migrate_write_lock_nested()
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Message-Id: <20240219-b4-szmalloc-migrate-v1-2-fc21039bed7b@linux.dev>
References: <20240219-b4-szmalloc-migrate-v1-0-fc21039bed7b@linux.dev>
In-Reply-To: <20240219-b4-szmalloc-migrate-v1-0-fc21039bed7b@linux.dev>
To: hannes@cmpxchg.org, Sergey Senozhatsky, Minchan Kim, Andrew Morton,
 nphamcs@gmail.com, yosryahmed@google.com
Cc: linux-mm@kvack.org, Chengming Zhou, linux-kernel@vger.kernel.org
From: Chengming Zhou <chengming.zhou@linux.dev>

The migrate write lock protects against the race between zspage migration
and users mapping the zspage's objects. We only need to lock out the map
users of the src zspage; the dst zspage is safe to map concurrently, since
we only need to do obj_malloc() from it. So the migrate_write_lock_nested()
use case can be removed.

While we are at it, clean up __zs_compact() by moving putback_zspage()
outside of the migrate write lock section: since we hold the pool lock, no
malloc or free users can come in.

Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
---
 mm/zsmalloc.c | 22 +++++-----------------
 1 file changed, 5 insertions(+), 17 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 64d5533fa5d8..f2ae7d4c6f21 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -279,7 +279,6 @@ static void migrate_lock_init(struct zspage *zspage);
 static void migrate_read_lock(struct zspage *zspage);
 static void migrate_read_unlock(struct zspage *zspage);
 static void migrate_write_lock(struct zspage *zspage);
-static void migrate_write_lock_nested(struct zspage *zspage);
 static void migrate_write_unlock(struct zspage *zspage);
 
 #ifdef CONFIG_COMPACTION
@@ -1727,11 +1726,6 @@ static void migrate_write_lock(struct zspage *zspage)
 	write_lock(&zspage->lock);
 }
 
-static void migrate_write_lock_nested(struct zspage *zspage)
-{
-	write_lock_nested(&zspage->lock, SINGLE_DEPTH_NESTING);
-}
-
 static void migrate_write_unlock(struct zspage *zspage)
 {
 	write_unlock(&zspage->lock);
@@ -2003,19 +1997,17 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 			dst_zspage = isolate_dst_zspage(class);
 			if (!dst_zspage)
 				break;
-			migrate_write_lock(dst_zspage);
 		}
 
 		src_zspage = isolate_src_zspage(class);
 		if (!src_zspage)
 			break;
 
-		migrate_write_lock_nested(src_zspage);
-
+		migrate_write_lock(src_zspage);
 		migrate_zspage(pool, src_zspage, dst_zspage);
-		fg = putback_zspage(class, src_zspage);
 		migrate_write_unlock(src_zspage);
+		fg = putback_zspage(class, src_zspage);
 
 		if (fg == ZS_INUSE_RATIO_0) {
 			free_zspage(pool, class, src_zspage);
 			pages_freed += class->pages_per_zspage;
@@ -2025,7 +2017,6 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 		if (get_fullness_group(class, dst_zspage) == ZS_INUSE_RATIO_100
 		    || spin_is_contended(&pool->lock)) {
 			putback_zspage(class, dst_zspage);
-			migrate_write_unlock(dst_zspage);
 			dst_zspage = NULL;
 
 			spin_unlock(&pool->lock);
@@ -2034,15 +2025,12 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 		}
 	}
 
-	if (src_zspage) {
+	if (src_zspage)
 		putback_zspage(class, src_zspage);
-		migrate_write_unlock(src_zspage);
-	}
 
-	if (dst_zspage) {
+	if (dst_zspage)
 		putback_zspage(class, dst_zspage);
-		migrate_write_unlock(dst_zspage);
-	}
+
 	spin_unlock(&pool->lock);
 
 	return pages_freed;

-- 
b4 0.10.1