Message-ID: <294cf54b-cdf5-4441-9924-67d934e54883@linux.dev>
Date: Sun, 3 Dec 2023 19:47:18 +0800
Subject: Re: [PATCH v5 7/9] slub: Optimize deactivate_slab()
To: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: vbabka@suse.cz, cl@linux.com, penberg@kernel.org, rientjes@google.com,
 iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, roman.gushchin@linux.dev,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, Chengming Zhou
References: <20231102032330.1036151-1-chengming.zhou@linux.dev>
 <20231102032330.1036151-8-chengming.zhou@linux.dev>
From: Chengming Zhou

On 2023/12/3 19:19, Hyeonggon Yoo wrote:
> On Sun, Dec 3, 2023 at 7:26 PM Chengming Zhou wrote:
>>
>> On 2023/12/3 17:23, Hyeonggon Yoo wrote:
>>> On Thu, Nov 2, 2023 at 12:25 PM wrote:
>>>>
>>>> From: Chengming Zhou
>>>>
>>>> Since the introduction of unfrozen slabs on the cpu partial list, we don't
>>>> need to synchronize the slab frozen state under the node list_lock.
>>>>
>>>> The caller of deactivate_slab() and the caller of __slab_free() won't
>>>> manipulate the slab list concurrently.
>>>>
>>>> So we can take the node list_lock in the last stage, only if we really need
>>>> to manipulate the slab list in this path.
>>>>
>>>> Signed-off-by: Chengming Zhou
>>>> Reviewed-by: Vlastimil Babka
>>>> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>>>> ---
>>>>  mm/slub.c | 79 ++++++++++++++++++-------------------------------------
>>>>  1 file changed, 26 insertions(+), 53 deletions(-)
>>>>
>>>> diff --git a/mm/slub.c b/mm/slub.c
>>>> index bcb5b2c4e213..d137468fe4b9 100644
>>>> --- a/mm/slub.c
>>>> +++ b/mm/slub.c
>>>> @@ -2468,10 +2468,8 @@ static void init_kmem_cache_cpus(struct kmem_cache *s)
>>>>  static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
>>>>                              void *freelist)
>>>>  {
>>>> -        enum slab_modes { M_NONE, M_PARTIAL, M_FREE, M_FULL_NOLIST };
>>>>          struct kmem_cache_node *n = get_node(s, slab_nid(slab));
>>>>          int free_delta = 0;
>>>> -        enum slab_modes mode = M_NONE;
>>>>          void *nextfree, *freelist_iter, *freelist_tail;
>>>>          int tail = DEACTIVATE_TO_HEAD;
>>>>          unsigned long flags = 0;
>>>> @@ -2509,65 +2507,40 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
>>>>          /*
>>>>           * Stage two: Unfreeze the slab while splicing the per-cpu
>>>>           * freelist to the head of slab's freelist.
>>>> -         *
>>>> -         * Ensure that the slab is unfrozen while the list presence
>>>> -         * reflects the actual number of objects during unfreeze.
>>>> -         *
>>>> -         * We first perform cmpxchg holding lock and insert to list
>>>> -         * when it succeed. If there is mismatch then the slab is not
>>>> -         * unfrozen and number of objects in the slab may have changed.
>>>> -         * Then release lock and retry cmpxchg again.
>>>>           */
>>>> -redo:
>>>> -
>>>> -        old.freelist = READ_ONCE(slab->freelist);
>>>> -        old.counters = READ_ONCE(slab->counters);
>>>> -        VM_BUG_ON(!old.frozen);
>>>> -
>>>> -        /* Determine target state of the slab */
>>>> -        new.counters = old.counters;
>>>> -        if (freelist_tail) {
>>>> -                new.inuse -= free_delta;
>>>> -                set_freepointer(s, freelist_tail, old.freelist);
>>>> -                new.freelist = freelist;
>>>> -        } else
>>>> -                new.freelist = old.freelist;
>>>> -
>>>> -        new.frozen = 0;
>>>> +        do {
>>>> +                old.freelist = READ_ONCE(slab->freelist);
>>>> +                old.counters = READ_ONCE(slab->counters);
>>>> +                VM_BUG_ON(!old.frozen);
>>>> +
>>>> +                /* Determine target state of the slab */
>>>> +                new.counters = old.counters;
>>>> +                new.frozen = 0;
>>>> +                if (freelist_tail) {
>>>> +                        new.inuse -= free_delta;
>>>> +                        set_freepointer(s, freelist_tail, old.freelist);
>>>> +                        new.freelist = freelist;
>>>> +                } else {
>>>> +                        new.freelist = old.freelist;
>>>> +                }
>>>> +        } while (!slab_update_freelist(s, slab,
>>>> +                                old.freelist, old.counters,
>>>> +                                new.freelist, new.counters,
>>>> +                                "unfreezing slab"));
>>>>
>>>> +        /*
>>>> +         * Stage three: Manipulate the slab list based on the updated state.
>>>> +         */
>>>
>>> deactivate_slab() might inadvertently put empty slabs into the partial list, like:
>>>
>>> deactivate_slab()               __slab_free()
>>> cmpxchg(), slab's not empty
>>>                                 cmpxchg(), slab's empty
>>>                                 and unfrozen
>>
>> Hi,
>>
>> Sorry, but I don't get how __slab_free() can see the slab as empty here,
>> since the slab is not empty from the deactivate_slab() path, and it can't be
>> used by any other CPU at that time?
>
> The scenario is that CPU B previously allocated an object from slab X, but
> put it into the node partial list, and then CPU A has taken slab X as its cpu slab.
>
> While slab X is CPU A's cpu slab, when CPU B frees an object from slab X,
> it puts the object into slab X's freelist using cmpxchg.
>
> Let's say in CPU A the deactivation path performs the cmpxchg when X.inuse was 1,
> and then CPU B frees (__slab_free()) an object to slab X's freelist using cmpxchg,
> _before_ slab X is put into the partial list by CPU A.
>
> Then CPU A thinks it's not empty and puts it into the partial list, but due to
> CPU B the slab has become empty.
>
> Maybe I am confused; in that case please tell me I'm wrong :)
>

Ah, you're right! I mixed up the slab being "empty" with being "full". :)

Yes, in this case the "empty" slab would be put onto the node partial list,
and that should be fine in the real world, as you noted earlier.

Thanks!
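
P.S. To make the conclusion above concrete, the list manipulation that ends up
under the node list_lock after the cmpxchg loop looks roughly like the sketch
below. This is a paraphrase for illustration, not the literal hunk from this
patch: since new.inuse and new.freelist were sampled by the cmpxchg, a slab
that a racing __slab_free() empties right afterwards can still be treated as
a partial slab here, which is exactly the benign case discussed above.

        /*
         * Rough sketch of "stage three" (illustrative paraphrase): act on the
         * slab state that the cmpxchg observed.
         */
        if (!new.inuse && n->nr_partial >= s->min_partial) {
                /* Observed empty and enough partial slabs are cached: free it. */
                stat(s, DEACTIVATE_EMPTY);
                discard_slab(s, slab);
                stat(s, FREE_SLAB);
        } else if (new.freelist) {
                /*
                 * Observed free objects: put the slab on the node partial list.
                 * A racing __slab_free() may have emptied it by now, so an
                 * "empty" slab can land here, which is harmless.
                 */
                spin_lock_irqsave(&n->list_lock, flags);
                add_partial(n, slab, tail);
                spin_unlock_irqrestore(&n->list_lock, flags);
                stat(s, tail);
        } else {
                /* No free objects at all: the slab is full, no list to join. */
                stat(s, DEACTIVATE_FULL);
        }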