From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Andrew Morton
Cc: Minchan Kim, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Sergey Senozhatsky, Yosry Ahmed
Subject: [PATCHv4 12/17] zsmalloc: factor out pool locking helpers
Date: Fri, 31 Jan 2025 18:06:11 +0900
Message-ID: <20250131090658.3386285-13-senozhatsky@chromium.org>
In-Reply-To: <20250131090658.3386285-1-senozhatsky@chromium.org>
References: <20250131090658.3386285-1-senozhatsky@chromium.org>
We currently have a mix of migrate_{read,write}_lock() helpers
that lock zspages, but it's zs_pool that actually has a
->migrate_lock, access to which is open-coded. Factor out pool
migrate locking into helpers; the zspage migration locking API
will be renamed to reduce confusion.

It's worth mentioning that zsmalloc locks synchronize not only
migration, but also compaction.

Signed-off-by: Sergey Senozhatsky
Cc: Yosry Ahmed
---
 mm/zsmalloc.c | 69 +++++++++++++++++++++++++++++++++++----------------
 1 file changed, 47 insertions(+), 22 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 817626a351f8..c129596ab960 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -18,7 +18,7 @@
 /*
  * lock ordering:
  *	page_lock
- *	pool->migrate_lock
+ *	pool->lock
  *	class->lock
  *	zspage->lock
  */
@@ -224,10 +224,35 @@ struct zs_pool {
 	struct work_struct free_work;
 #endif
 	/* protect page/zspage migration */
-	rwlock_t migrate_lock;
+	rwlock_t lock;
 	atomic_t compaction_in_progress;
 };
 
+static void pool_write_unlock(struct zs_pool *pool)
+{
+	write_unlock(&pool->lock);
+}
+
+static void pool_write_lock(struct zs_pool *pool)
+{
+	write_lock(&pool->lock);
+}
+
+static void pool_read_unlock(struct zs_pool *pool)
+{
+	read_unlock(&pool->lock);
+}
+
+static void pool_read_lock(struct zs_pool *pool)
+{
+	read_lock(&pool->lock);
+}
+
+static bool pool_lock_is_contended(struct zs_pool *pool)
+{
+	return rwlock_is_contended(&pool->lock);
+}
+
 static inline void zpdesc_set_first(struct zpdesc *zpdesc)
 {
 	SetPagePrivate(zpdesc_page(zpdesc));
@@ -290,7 +315,7 @@ static bool ZsHugePage(struct zspage *zspage)
 	return zspage->huge;
 }
 
-static void migrate_lock_init(struct zspage *zspage);
+static void lock_init(struct zspage *zspage);
 static void migrate_read_lock(struct zspage *zspage);
 static void migrate_read_unlock(struct zspage *zspage);
 static void migrate_write_lock(struct zspage *zspage);
@@ -992,7 +1017,7 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 		return NULL;
 
 	zspage->magic = ZSPAGE_MAGIC;
-	migrate_lock_init(zspage);
+	lock_init(zspage);
 
 	for (i = 0; i < class->pages_per_zspage; i++) {
 		struct zpdesc *zpdesc;
@@ -1206,7 +1231,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	BUG_ON(in_interrupt());
 
 	/* It guarantees it can get zspage from handle safely */
-	read_lock(&pool->migrate_lock);
+	pool_read_lock(pool);
 	obj = handle_to_obj(handle);
 	obj_to_location(obj, &zpdesc, &obj_idx);
 	zspage = get_zspage(zpdesc);
@@ -1218,7 +1243,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	 * which is smaller granularity.
 	 */
 	migrate_read_lock(zspage);
-	read_unlock(&pool->migrate_lock);
+	pool_read_unlock(pool);
 
 	class = zspage_class(pool, zspage);
 	off = offset_in_page(class->size * obj_idx);
@@ -1450,16 +1475,16 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 		return;
 
 	/*
-	 * The pool->migrate_lock protects the race with zpage's migration
+	 * The pool->lock protects the race with zpage's migration
 	 * so it's safe to get the page from handle.
 	 */
-	read_lock(&pool->migrate_lock);
+	pool_read_lock(pool);
 	obj = handle_to_obj(handle);
 	obj_to_zpdesc(obj, &f_zpdesc);
 	zspage = get_zspage(f_zpdesc);
 	class = zspage_class(pool, zspage);
 	spin_lock(&class->lock);
-	read_unlock(&pool->migrate_lock);
+	pool_read_unlock(pool);
 
 	class_stat_sub(class, ZS_OBJS_INUSE, 1);
 	obj_free(class->size, obj);
@@ -1703,7 +1728,7 @@ static void lock_zspage(struct zspage *zspage)
 }
 #endif /* CONFIG_COMPACTION */
 
-static void migrate_lock_init(struct zspage *zspage)
+static void lock_init(struct zspage *zspage)
 {
 	rwlock_init(&zspage->lock);
 }
@@ -1793,10 +1818,10 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	pool = zspage->pool;
 
 	/*
-	 * The pool migrate_lock protects the race between zpage migration
+	 * The pool lock protects the race between zpage migration
 	 * and zs_free.
 	 */
-	write_lock(&pool->migrate_lock);
+	pool_write_lock(pool);
 	class = zspage_class(pool, zspage);
 
 	/*
@@ -1833,7 +1858,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	 * Since we complete the data copy and set up new zspage structure,
 	 * it's okay to release migration_lock.
 	 */
-	write_unlock(&pool->migrate_lock);
+	pool_write_unlock(pool);
 	spin_unlock(&class->lock);
 	migrate_write_unlock(zspage);
 
@@ -1956,7 +1981,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 	 * protect the race between zpage migration and zs_free
 	 * as well as zpage allocation/free
 	 */
-	write_lock(&pool->migrate_lock);
+	pool_write_lock(pool);
 	spin_lock(&class->lock);
 	while (zs_can_compact(class)) {
 		int fg;
@@ -1983,14 +2008,14 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 		src_zspage = NULL;
 
 		if (get_fullness_group(class, dst_zspage) == ZS_INUSE_RATIO_100
-		    || rwlock_is_contended(&pool->migrate_lock)) {
+		    || pool_lock_is_contended(pool)) {
 			putback_zspage(class, dst_zspage);
 			dst_zspage = NULL;
 
 			spin_unlock(&class->lock);
-			write_unlock(&pool->migrate_lock);
+			pool_write_unlock(pool);
 			cond_resched();
-			write_lock(&pool->migrate_lock);
+			pool_write_lock(pool);
 			spin_lock(&class->lock);
 		}
 	}
@@ -2002,7 +2027,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 		putback_zspage(class, dst_zspage);
 
 	spin_unlock(&class->lock);
-	write_unlock(&pool->migrate_lock);
+	pool_write_unlock(pool);
 
 	return pages_freed;
 }
@@ -2014,10 +2039,10 @@ unsigned long zs_compact(struct zs_pool *pool)
 	unsigned long pages_freed = 0;
 
 	/*
-	 * Pool compaction is performed under pool->migrate_lock so it is basically
+	 * Pool compaction is performed under pool->lock so it is basically
 	 * single-threaded. Having more than one thread in __zs_compact()
-	 * will increase pool->migrate_lock contention, which will impact other
-	 * zsmalloc operations that need pool->migrate_lock.
+	 * will increase pool->lock contention, which will impact other
+	 * zsmalloc operations that need pool->lock.
 	 */
	if (atomic_xchg(&pool->compaction_in_progress, 1))
 		return 0;
@@ -2139,7 +2164,7 @@ struct zs_pool *zs_create_pool(const char *name)
 		return NULL;
 
 	init_deferred_free(pool);
-	rwlock_init(&pool->migrate_lock);
+	rwlock_init(&pool->lock);
 	atomic_set(&pool->compaction_in_progress, 0);
 
 	pool->name = kstrdup(name, GFP_KERNEL);
-- 
2.48.1.362.g079036d154-goog
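
P.S. [editorial note] For anyone who wants to poke at the locking pattern
outside the kernel tree, below is a minimal userspace sketch of the same
idea, using POSIX rwlocks as a stand-in for the kernel rwlock API. It is
an illustration only: struct pool, compact_step() and every other name
in it are hypothetical, not part of zsmalloc; the helpers in the patch
above are the authoritative version.

#include <pthread.h>
#include <sched.h>

struct pool {
	pthread_rwlock_t lock;	/* stand-in for zs_pool->lock */
};

/*
 * The point of the patch in one place: every lock/unlock of the pool
 * lock goes through named helpers instead of being open-coded at each
 * call site, so the lock can later be renamed or swapped for another
 * primitive without touching any caller.
 */
static void pool_read_lock(struct pool *pool)
{
	pthread_rwlock_rdlock(&pool->lock);
}

static void pool_read_unlock(struct pool *pool)
{
	pthread_rwlock_unlock(&pool->lock);
}

static void pool_write_lock(struct pool *pool)
{
	pthread_rwlock_wrlock(&pool->lock);
}

static void pool_write_unlock(struct pool *pool)
{
	pthread_rwlock_unlock(&pool->lock);
}

/*
 * Rough analogue of the __zs_compact() loop shape: do a step of work
 * under the write lock, then drop the lock and yield so that readers
 * (the zs_map_object()/zs_free() paths) are not starved. pthreads has
 * no rwlock_is_contended(), so this sketch backs off unconditionally
 * where the kernel code checks pool_lock_is_contended().
 */
static void compact_step(struct pool *pool)
{
	pool_write_lock(pool);
	/* ... migrate objects between zspages here ... */
	pool_write_unlock(pool);
	sched_yield();		/* userspace stand-in for cond_resched() */
}

int main(void)
{
	struct pool pool;

	pthread_rwlock_init(&pool.lock, NULL);

	pool_read_lock(&pool);	/* e.g. the zs_map_object() path */
	pool_read_unlock(&pool);

	compact_step(&pool);	/* e.g. the zs_compact() path */

	pthread_rwlock_destroy(&pool.lock);
	return 0;
}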