From: Muchun Song
Date: Wed, 23 Mar 2022 10:12:32 +0800
Subject: Re: [PATCH-mm v3] mm/list_lru: Optimize memcg_reparent_list_lru_node()
To: Waiman Long
Cc: Andrew Morton, Linux Memory Management List, LKML, Roman Gushchin
References: <20220309144000.1470138-1-longman@redhat.com> <2263666d-5eef-b1fe-d5e3-b166a3185263@redhat.com>
In-Reply-To: <2263666d-5eef-b1fe-d5e3-b166a3185263@redhat.com>
Content-Type: text/plain; charset="UTF-8"
On Wed, Mar 23, 2022 at 9:55 AM Waiman Long wrote:
>
> On 3/22/22 21:06, Muchun Song wrote:
> > On Wed, Mar 9, 2022 at 10:40 PM Waiman Long wrote:
> >> Since commit 2c80cd57c743 ("mm/list_lru.c: fix list_lru_count_node()
> >> to be race free"), we are tracking the total number of lru
> >> entries in a list_lru_node in its nr_items field. In the case of
> >> memcg_reparent_list_lru_node(), there is nothing to be done if nr_items
> >> is 0. We don't even need to take the nlru->lock as no new lru entry
> >> could be added by a racing list_lru_add() to the draining src_idx memcg
> >> at this point.
> > Hi Waiman,
> >
> > Sorry for the late reply. Quick question: what if there is an inflight
> > list_lru_add()? How about the following race?
> >
> > CPU0:                                   CPU1:
> > list_lru_add()
> >     spin_lock(&nlru->lock)
> >     l = list_lru_from_kmem(memcg)
> >                                         memcg_reparent_objcgs(memcg)
> >                                         memcg_reparent_list_lrus(memcg)
> >                                             memcg_reparent_list_lru()
> >                                                 memcg_reparent_list_lru_node()
> >                                                     if (!READ_ONCE(nlru->nr_items))
> >                                                         // Miss reparenting
> >                                                         return
> >     // Assume 0->1
> >     l->nr_items++
> >     // Assume 0->1
> >     nlru->nr_items++
> >
> > IIUC, we use nlru->lock to serialise this scenario.
>
> I guess this race is theoretically possible but very unlikely since it
> means a very long pause between list_lru_from_kmem() and the increment
> of nr_items. It is more possible in a VM.
>
> How about the following changes to make sure that this race can't happen?
>
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index c669d87001a6..c31a0a8ad4e7 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -395,9 +395,10 @@ static void memcg_reparent_list_lru_node(struct list_lru *lru, int nid,
>         struct list_lru_one *src, *dst;
>
>         /*
> -        * If there is no lru entry in this nlru, we can skip it immediately.
> +        * If there is no lru entry in this nlru and the nlru->lock is free,
> +        * we can skip it immediately.
>          */
> -       if (!READ_ONCE(nlru->nr_items))
> +       if (!READ_ONCE(nlru->nr_items) && !spin_is_locked(&nlru->lock))

I think we also should insert a smp_rmb() between those two loads.

Thanks.
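
For reference, a minimal (untested) sketch of how the check could look once that
smp_rmb() is added. C gives no way to place a barrier between the two operands
of "&&", so the condition would likely have to be split into two loads with the
barrier in between; the comment wording and exact placement below are my
assumption based on this thread, not the actual patch:

	/*
	 * Skip this nlru only when it has no lru entries and nobody is
	 * currently holding nlru->lock (i.e. no inflight list_lru_add()).
	 *
	 * The condition is split so that an smp_rmb() can order the
	 * nr_items load before the lock-state load.
	 */
	if (!READ_ONCE(nlru->nr_items)) {
		smp_rmb();
		if (!spin_is_locked(&nlru->lock))
			return;
	}

Whether this ordering alone is sufficient, or whether taking nlru->lock is
still needed, is exactly the question under discussion in this thread.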