From: Hillf Danton
To: Sergey Senozhatsky
Cc: Yosry Ahmed, Kairui Song, Minchan Kim, Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v5 12/18] zsmalloc: make zspage lock preemptible
Date: Thu, 13 Feb 2025 19:32:47 +0800
Message-ID: <20250213113248.2225-1-hdanton@sina.com>
In-Reply-To: <20250212063153.179231-13-senozhatsky@chromium.org>
References: <20250212063153.179231-1-senozhatsky@chromium.org>

On Wed, 12 Feb 2025 15:27:10 +0900 Sergey Senozhatsky
> Switch over from rwlock_t to an atomic_t variable that takes a negative
> value when the page is under migration, or positive values when the
> page is used by zsmalloc users (object map, etc.). Using a per-zspage
> rwsem is a little too memory heavy; a simple atomic_t should suffice.
>
> The zspage lock is a leaf lock for zs_map_object(), where it's
> read-acquired. Since this lock now permits preemption, extra care needs
> to be taken when it is write-acquired - all writers grab it in atomic
> context, so they cannot spin and wait for a (potentially preempted)
> reader to unlock the zspage. There are only two writers at this moment -
> migration and compaction. In both cases we use write-try-lock and bail
> out if the zspage is read-locked. Writers, on the other hand, never get
> preempted, so readers can spin waiting for the writer to unlock the
> zspage.
>
> With this we can implement a preemptible object mapping.
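
For illustration only (not part of the patch), the caller-side pattern
described above looks roughly like the sketch below; everything except
the zspage_* lock helpers is made up for the example:

	/* reader side (e.g. object mapping): may be preempted under the lock */
	static void reader_side_sketch(struct zspage *zspage)
	{
		zspage_read_lock(zspage);
		/* ... access objects, possibly sleeping/being preempted ... */
		zspage_read_unlock(zspage);
	}

	/*
	 * writer side (migration/compaction): runs in atomic context, so it
	 * must not spin on a possibly preempted reader - it try-locks and
	 * bails out instead.
	 */
	static int writer_side_sketch(struct zspage *zspage)
	{
		if (!zspage_try_write_lock(zspage))
			return -EAGAIN;	/* zspage is read-locked, caller retries later */

		/* ... migrate/compact objects in the zspage ... */

		zspage_write_unlock(zspage);
		return 0;
	}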
>
> Signed-off-by: Sergey Senozhatsky
> Cc: Yosry Ahmed
> ---
>  mm/zsmalloc.c | 183 +++++++++++++++++++++++++++++++++++---------------
>  1 file changed, 128 insertions(+), 55 deletions(-)
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index c82c24b8e6a4..80261bb78cf8 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -226,6 +226,9 @@ struct zs_pool {
>  	/* protect page/zspage migration */
>  	rwlock_t lock;
>  	atomic_t compaction_in_progress;
> +#ifdef CONFIG_DEBUG_LOCK_ALLOC
> +	struct lock_class_key lockdep_key;
> +#endif
>  };
>
>  static void pool_write_unlock(struct zs_pool *pool)
> @@ -292,6 +295,9 @@ static inline void free_zpdesc(struct zpdesc *zpdesc)
>  	__free_page(page);
>  }
>
> +#define ZS_PAGE_UNLOCKED	0
> +#define ZS_PAGE_WRLOCKED	-1
> +
>  struct zspage {
>  	struct {
>  		unsigned int huge:HUGE_BITS;
> @@ -304,7 +310,11 @@ struct zspage {
>  	struct zpdesc *first_zpdesc;
>  	struct list_head list; /* fullness list */
>  	struct zs_pool *pool;
> -	rwlock_t lock;
> +	atomic_t lock;
> +
> +#ifdef CONFIG_DEBUG_LOCK_ALLOC
> +	struct lockdep_map lockdep_map;
> +#endif
>  };
>
>  struct mapping_area {
> @@ -314,6 +324,88 @@ struct mapping_area {
>  	enum zs_mapmode vm_mm;	/* mapping mode */
>  };
>
> +static void zspage_lock_init(struct zspage *zspage)
> +{
> +#ifdef CONFIG_DEBUG_LOCK_ALLOC
> +	lockdep_init_map(&zspage->lockdep_map, "zsmalloc-page",
> +			 &zspage->pool->lockdep_key, 0);
> +#endif
> +
> +	atomic_set(&zspage->lock, ZS_PAGE_UNLOCKED);
> +}
> +
> +/*
> + * zspage locking rules:
> + *
> + * 1) writer-lock is exclusive
> + *
> + * 2) writer-lock owner cannot sleep
> + *
> + * 3) writer-lock owner cannot spin waiting for the lock
> + *    - caller (e.g. compaction and migration) must check return value and
> + *      handle locking failures
> + *    - there is only TRY variant of writer-lock function
> + *
> + * 4) reader-lock owners (multiple) can sleep
> + *
> + * 5) reader-lock owners can spin waiting for the lock, in any context
> + *    - existing readers (even preempted ones) don't block new readers
> + *    - writer-lock owners never sleep, always unlock at some point
> + */
> +static void zspage_read_lock(struct zspage *zspage)
> +{
> +	atomic_t *lock = &zspage->lock;
> +	int old = atomic_read_acquire(lock);
> +
> +#ifdef CONFIG_DEBUG_LOCK_ALLOC
> +	rwsem_acquire_read(&zspage->lockdep_map, 0, 0, _RET_IP_);
> +#endif
> +
> +	do {
> +		if (old == ZS_PAGE_WRLOCKED) {
> +			cpu_relax();
> +			old = atomic_read_acquire(lock);
> +			continue;
> +		}
> +	} while (!atomic_try_cmpxchg_acquire(lock, &old, old + 1));

Given mcs_spinlock, inventing another spinlock in 2025 does not sound
like a good idea. See below for a spinlock-based version.
> +}
> +
> +static void zspage_read_unlock(struct zspage *zspage)
> +{
> +#ifdef CONFIG_DEBUG_LOCK_ALLOC
> +	rwsem_release(&zspage->lockdep_map, _RET_IP_);
> +#endif
> +	atomic_dec_return_release(&zspage->lock);
> +}
> +
> +static __must_check bool zspage_try_write_lock(struct zspage *zspage)
> +{
> +	atomic_t *lock = &zspage->lock;
> +	int old = ZS_PAGE_UNLOCKED;
> +
> +	WARN_ON_ONCE(preemptible());
> +
> +	preempt_disable();
> +	if (atomic_try_cmpxchg_acquire(lock, &old, ZS_PAGE_WRLOCKED)) {
> +#ifdef CONFIG_DEBUG_LOCK_ALLOC
> +		rwsem_acquire(&zspage->lockdep_map, 0, 1, _RET_IP_);
> +#endif
> +		return true;
> +	}
> +
> +	preempt_enable();
> +	return false;
> +}
> +
> +static void zspage_write_unlock(struct zspage *zspage)
> +{
> +#ifdef CONFIG_DEBUG_LOCK_ALLOC
> +	rwsem_release(&zspage->lockdep_map, _RET_IP_);
> +#endif
> +	atomic_set_release(&zspage->lock, ZS_PAGE_UNLOCKED);
> +	preempt_enable();
> +}

struct zspage_lock {
	spinlock_t lock;
	int cnt;
	struct lockdep_map lockdep_map;
};

static __must_check bool zspage_write_trylock(struct zspage_lock *zl)
{
	spin_lock(&zl->lock);
	if (zl->cnt == ZS_PAGE_UNLOCKED) {
		// zl->cnt = ZS_PAGE_WRLOCKED;
		rwsem_acquire(&zl->lockdep_map, 0, 1, _RET_IP_);
		return true;
	}

	spin_unlock(&zl->lock);
	return false;
}

static void zspage_write_unlock(struct zspage_lock *zl)
{
	rwsem_release(&zl->lockdep_map, _RET_IP_);
	spin_unlock(&zl->lock);
}

static void zspage_read_lock(struct zspage_lock *zl)
{
	rwsem_acquire_read(&zl->lockdep_map, 0, 0, _RET_IP_);

	spin_lock(&zl->lock);
	zl->cnt++;
	spin_unlock(&zl->lock);
}

static void zspage_read_unlock(struct zspage_lock *zl)
{
	rwsem_release(&zl->lockdep_map, _RET_IP_);

	spin_lock(&zl->lock);
	zl->cnt--;
	spin_unlock(&zl->lock);
}
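
In this version the writer returns from zspage_write_trylock() with
zl->lock still held and only drops it in zspage_write_unlock(), while
readers take the spinlock just long enough to bump the counter, so a
preempted reader does not block other readers. For completeness, a
matching init helper (not part of the sketch above; the name and the use
of a static lock class key instead of the per-pool key from the patch
are only illustrative, lockdep ifdefs omitted) could look like:

	static void zspage_lock_init(struct zspage_lock *zl)
	{
		static struct lock_class_key key;	/* illustrative; the patch uses a per-pool key */

		lockdep_init_map(&zl->lockdep_map, "zsmalloc-page", &key, 0);
		spin_lock_init(&zl->lock);
		zl->cnt = ZS_PAGE_UNLOCKED;
	}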