From: Muchun Song <songmuchun@bytedance.com>
To: Waiman Long
Cc: Roman Gushchin, Andrew Morton, Linux Memory Management List, LKML
Subject: Re: [PATCH-mm v3] mm/list_lru: Optimize memcg_reparent_list_lru_node()
Date: Wed, 30 Mar 2022 14:38:31 +0800
In-Reply-To: <07be89ad-e355-69b9-6e36-07beaebf2d8b@redhat.com>
References: <20220309144000.1470138-1-longman@redhat.com> <2263666d-5eef-b1fe-d5e3-b166a3185263@redhat.com> <07be89ad-e355-69b9-6e36-07beaebf2d8b@redhat.com>

On Wed, Mar 30, 2022 at 5:53 AM Waiman Long wrote:
>
> On 3/28/22 21:15, Muchun Song wrote:
> > On Tue, Mar 29, 2022 at 3:12 AM Roman Gushchin wrote:
> >> On Sun, Mar 27, 2022 at 08:57:15PM -0400, Waiman Long wrote:
> >>> On 3/22/22 22:12, Muchun Song wrote:
> >>>> On Wed, Mar 23, 2022 at 9:55 AM Waiman Long wrote:
> >>>>> On 3/22/22 21:06, Muchun Song wrote:
> >>>>>> On Wed, Mar 9, 2022 at 10:40 PM Waiman Long wrote:
> >>>>>>> Since commit 2c80cd57c743 ("mm/list_lru.c: fix list_lru_count_node()
> >>>>>>> to be race free"), we are tracking the total number of lru
> >>>>>>> entries in a list_lru_node in its nr_items field. In the case of
> >>>>>>> memcg_reparent_list_lru_node(), there is nothing to be done if nr_items
> >>>>>>> is 0. We don't even need to take the nlru->lock as no new lru entry
> >>>>>>> could be added by a racing list_lru_add() to the draining src_idx memcg
> >>>>>>> at this point.
> >>>>>> Hi Waiman,
> >>>>>>
> >>>>>> Sorry for the late reply. Quick question: what if there is an inflight
> >>>>>> list_lru_add()? How about the following race?
> >>>>>>
> >>>>>> CPU0:                               CPU1:
> >>>>>> list_lru_add()
> >>>>>>     spin_lock(&nlru->lock)
> >>>>>>     l = list_lru_from_kmem(memcg)
> >>>>>>                                     memcg_reparent_objcgs(memcg)
> >>>>>>                                     memcg_reparent_list_lrus(memcg)
> >>>>>>                                         memcg_reparent_list_lru()
> >>>>>>                                             memcg_reparent_list_lru_node()
> >>>>>>                                                 if (!READ_ONCE(nlru->nr_items))
> >>>>>>                                                     // Miss reparenting
> >>>>>>                                                     return
> >>>>>>     // Assume 0->1
> >>>>>>     l->nr_items++
> >>>>>>     // Assume 0->1
> >>>>>>     nlru->nr_items++
> >>>>>>
> >>>>>> IIUC, we use nlru->lock to serialise this scenario.
> >>>>> I guess this race is theoretically possible but very unlikely since it
> >>>>> means a very long pause between list_lru_from_kmem() and the increment
> >>>>> of nr_items.
> >>>> It is more possible in a VM.
> >>>>
> >>>>> How about the following changes to make sure that this race can't happen?
> >>>>>
> >>>>> diff --git a/mm/list_lru.c b/mm/list_lru.c
> >>>>> index c669d87001a6..c31a0a8ad4e7 100644
> >>>>> --- a/mm/list_lru.c
> >>>>> +++ b/mm/list_lru.c
> >>>>> @@ -395,9 +395,10 @@ static void memcg_reparent_list_lru_node(struct list_lru *lru, int nid,
> >>>>>          struct list_lru_one *src, *dst;
> >>>>>
> >>>>>          /*
> >>>>> -        * If there is no lru entry in this nlru, we can skip it immediately.
> >>>>> +        * If there is no lru entry in this nlru and the nlru->lock is free,
> >>>>> +        * we can skip it immediately.
> >>>>>          */
> >>>>> -        if (!READ_ONCE(nlru->nr_items))
> >>>>> +        if (!READ_ONCE(nlru->nr_items) && !spin_is_locked(&nlru->lock))
> >>>> I think we also should insert a smp_rmb() between those two loads.
> >>> Thinking about this some more, I believe that adding the spin_is_locked()
> >>> check will be enough for x86. However, that will likely not be enough for
> >>> arches with more relaxed memory semantics. So the safest way to avoid this
> >>> possible race is to move the check to within the lock critical section,
> >>> though that comes with a slightly higher overhead for the 0 nr_items case.
> >>> I will send out a patch to correct that. Thanks for bringing this possible
> >>> race to my attention.
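For concreteness, here is a minimal sketch of that check-under-the-lock
variant. This is only an illustration of the idea, not the actual posted
patch: the splice logic is elided as a comment, and the names and locking
follow the quoted code.

    static void memcg_reparent_list_lru_node(struct list_lru *lru, int nid,
                                             int src_idx, struct mem_cgroup *dst_memcg)
    {
            struct list_lru_node *nlru = &lru->node[nid];
            struct list_lru_one *src, *dst;

            /*
             * Do the nr_items check under nlru->lock instead of before it.
             * A racing list_lru_add() then either completes its increment
             * before we take the lock, so we see nr_items != 0 and reparent
             * the new entry, or it runs after we release the lock, in which
             * case list_lru_from_kmem() returns the already-reparented list.
             */
            spin_lock_irq(&nlru->lock);
            if (nlru->nr_items) {
                    /*
                     * ... splice the src_idx lists into dst_memcg's lists,
                     * as in the existing reparenting code ...
                     */
            }
            spin_unlock_irq(&nlru->lock);
    }

The trade-off is one uncontended lock/unlock even when nr_items is 0, which
is the slightly higher overhead mentioned above.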
> >> Yes, I think it's not enough:
> > I think it may be enough if we insert a smp_rmb() between those two loads.
> >
> >> CPU0                                    CPU1
> >> READ_ONCE(&nlru->nr_items) -> 0
> >>                                         spin_lock(&nlru->lock);
> >>                                         nlru->nr_items++;
> >                                          ^^^
> >                                          |||
> >                                          The nlru here is not the
> >                                          same as the one in CPU0,
> >                                          since CPU0 has done the
> >                                          memcg reparenting. Then
> >                                          CPU0 will not miss the nlru
> >                                          reparenting. If I am wrong, please
> >                                          correct me. Thanks.
> >>                                         spin_unlock(&nlru->lock);
> >> && !spin_is_locked(&nlru->lock) -> 0
>
> I just realized that there is another lock/unlock pair in
> memcg_reparent_objcgs():
>
> memcg_reparent_objcgs()
>     spin_lock_irq()
>     memcg reparenting
>     spin_unlock_irq()
>     percpu_ref_kill()
>         spin_lock_irqsave()
>         ...
>         spin_unlock_irqrestore()
>
> This lock/unlock pair in percpu_ref_kill() will stop the reordering of
> the reads/writes before the memcg reparenting. Now I think just adding a
> spin_is_locked() check with smp_rmb() should be enough. However, I would
> like to change the ordering like this:
>
>     if (!spin_is_locked(&nlru->lock)) {
>         smp_rmb();
>         if (!READ_ONCE(nlru->nr_items))
>             return;
>     }

Does the following race still exist?

CPU0:                                   CPU1:
spin_is_locked(&nlru->lock)
                                        list_lru_add()
                                            spin_lock(&nlru->lock)
                                            l = list_lru_from_kmem(memcg)
memcg_reparent_objcgs(memcg)
memcg_reparent_list_lrus(memcg)
    memcg_reparent_list_lru()
        memcg_reparent_list_lru_node()
            if (!READ_ONCE(nlru->nr_items))
                // Miss reparenting
                return
                                            // Assume 0->1
                                            l->nr_items++
                                            // Assume 0->1
                                            nlru->nr_items++

> Otherwise, we will have the problem:
>
>     list_lru_add()
>         spin_lock(&nlru->lock)
>         l = list_lru_from_kmem(memcg)
>         READ_ONCE(nlru->nr_items);
>         // Assume 0->1
>         l->nr_items++
>         // Assume 0->1
>         nlru->nr_items++
>         spin_unlock(&nlru->lock)
>     spin_is_locked()

You are right.

> If spin_is_locked() is before spin_lock() in list_lru_add(),
> list_lru_from_kmem() is guaranteed to get the reparented memcg, and so
> the entry won't be added to the reparented lru.
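To recap the ordering argument for the record (my own summary of this
thread, in the same style as the diagrams above, not text from the patch):

    CPU0 (reparenting):                     CPU1 (list_lru_add()):
    spin_is_locked(&nlru->lock) -> false
                                            spin_lock(&nlru->lock)
                                            l = list_lru_from_kmem(memcg)
                                            // memcg was already reparented
                                            // before CPU0's check, so the
                                            // entry goes to the parent's
                                            // list and the skipped
                                            // reparenting misses nothing

And in the other interleaving, where CPU1's spin_unlock() happens before
CPU0 samples the lock, the smp_rmb() after the spin_is_locked() check
orders the two loads, so once CPU0 observes the unlocked lock word it must
also observe CPU1's nr_items increment and the skip path is not taken.
Either way the spin_is_locked()-first ordering is safe.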