From mboxrd@z Thu Jan  1 00:00:00 1970
From: Yu Zhao <yuzhao@google.com>
Date: Fri, 25 Oct 2024 21:58:45 -0600
Subject: Re: [PATCH v1 6/6] memcg-v1: remove memcg move locking code
To: Shakeel Butt
Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Roman Gushchin,
 Muchun Song, Hugh Dickins, Yosry Ahmed, linux-mm@kvack.org,
 cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-doc@vger.kernel.org,
 Meta kernel team
In-Reply-To: <20241025012304.2473312-7-shakeel.butt@linux.dev>
References: <20241025012304.2473312-1-shakeel.butt@linux.dev>
 <20241025012304.2473312-7-shakeel.butt@linux.dev>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
On Thu, Oct 24, 2024 at 7:23 PM Shakeel Butt wrote:
>
> The memcg v1 charge move feature has been deprecated. All the places
> using the memcg move lock have stopped using it, as they no longer
> need the protection. Let's proceed to remove all the locking code
> related to charge moving.
>
> Signed-off-by: Shakeel Butt
> ---
>
> Changes since RFC:
> - Remove the memcg move locking in separate patches.
>
>  include/linux/memcontrol.h | 54 -------------------------
>  mm/filemap.c               |  1 -
>  mm/memcontrol-v1.c         | 82 --------------------------------
>  mm/memcontrol.c            |  5 ---
>  mm/rmap.c                  |  1 -
>  5 files changed, 143 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 798db70b0a30..932534291ca2 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -299,20 +299,10 @@ struct mem_cgroup {
>         /* For oom notifier event fd */
>         struct list_head oom_notify;
>
> -       /* taken only while moving_account > 0 */
> -       spinlock_t move_lock;
> -       unsigned long move_lock_flags;
> -
>         /* Legacy tcp memory accounting */
>         bool tcpmem_active;
>         int tcpmem_pressure;
>
> -       /*
> -        * set > 0 if pages under this cgroup are moving to other cgroup.
> -        */
> -       atomic_t moving_account;
> -       struct task_struct *move_lock_task;
> -
>         /* List of events which userspace want to receive */
>         struct list_head event_list;
>         spinlock_t event_list_lock;
> @@ -428,9 +418,7 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
>   *
>   * - the folio lock
>   * - LRU isolation
> - * - folio_memcg_lock()
>   * - exclusive reference
> - * - mem_cgroup_trylock_pages()
>   *
>   * For a kmem folio a caller should hold an rcu read lock to protect memcg
>   * associated with a kmem folio from being released.
> @@ -499,9 +487,7 @@ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)

I think you missed folio_memcg_rcu(). (I don't think
workingset_activation() needs it, since its only caller must hold a
refcnt on the folio.)

>   *
>   * - the folio lock
>   * - LRU isolation
> - * - lock_folio_memcg()
>   * - exclusive reference
> - * - mem_cgroup_trylock_pages()
>   *
>   * For a kmem folio a caller should hold an rcu read lock to protect memcg
>   * associated with a kmem folio from being released.
> @@ -1873,26 +1859,6 @@ static inline bool task_in_memcg_oom(struct task_struct *p)
>         return p->memcg_in_oom;
>  }
>
> -void folio_memcg_lock(struct folio *folio);
> -void folio_memcg_unlock(struct folio *folio);
> -
> -/* try to stablize folio_memcg() for all the pages in a memcg */
> -static inline bool mem_cgroup_trylock_pages(struct mem_cgroup *memcg)
> -{
> -       rcu_read_lock();
> -
> -       if (mem_cgroup_disabled() || !atomic_read(&memcg->moving_account))
> -               return true;
> -
> -       rcu_read_unlock();
> -       return false;
> -}
> -
> -static inline void mem_cgroup_unlock_pages(void)
> -{
> -       rcu_read_unlock();
> -}
> -
>  static inline void mem_cgroup_enter_user_fault(void)
>  {
>         WARN_ON(current->in_user_fault);
> @@ -1914,26 +1880,6 @@ unsigned long memcg1_soft_limit_reclaim(pg_data_t *pgdat, int order,
>         return 0;
>  }
>
> -static inline void folio_memcg_lock(struct folio *folio)
> -{
> -}
> -
> -static inline void folio_memcg_unlock(struct folio *folio)
> -{
> -}
> -
> -static inline bool mem_cgroup_trylock_pages(struct mem_cgroup *memcg)
> -{
> -       /* to match folio_memcg_rcu() */
> -       rcu_read_lock();
> -       return true;
> -}
> -
> -static inline void mem_cgroup_unlock_pages(void)
> -{
> -       rcu_read_unlock();
> -}
> -
>  static inline bool task_in_memcg_oom(struct task_struct *p)
>  {
>         return false;
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 630a1c431ea1..e582a1545d2a 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -119,7 +119,6 @@
>   *    ->i_pages lock          (folio_remove_rmap_pte->set_page_dirty)
>   *      bdi.wb->list_lock     (folio_remove_rmap_pte->set_page_dirty)
>   *        ->inode->i_lock     (folio_remove_rmap_pte->set_page_dirty)
> - *        ->memcg->move_lock  (folio_remove_rmap_pte->folio_memcg_lock)
>   *    bdi.wb->list_lock       (zap_pte_range->set_page_dirty)
>   *      ->inode->i_lock       (zap_pte_range->set_page_dirty)
>   *      ->private_lock        (zap_pte_range->block_dirty_folio)
> diff --git a/mm/memcontrol-v1.c b/mm/memcontrol-v1.c
> index 9c0fba8c8a83..539ceefa9d2d 100644
> --- a/mm/memcontrol-v1.c
> +++ b/mm/memcontrol-v1.c
> @@ -401,87 +401,6 @@ unsigned long memcg1_soft_limit_reclaim(pg_data_t *pgdat, int order,
>         return nr_reclaimed;
>  }
>
> -/**
> - * folio_memcg_lock - Bind a folio to its memcg.
> - * @folio: The folio.
> - *
> - * This function prevents unlocked LRU folios from being moved to
> - * another cgroup.
> - *
> - * It ensures lifetime of the bound memcg. The caller is responsible
> - * for the lifetime of the folio.
> - */
> -void folio_memcg_lock(struct folio *folio)
> -{
> -       struct mem_cgroup *memcg;
> -       unsigned long flags;
> -
> -       /*
> -        * The RCU lock is held throughout the transaction. The fast
> -        * path can get away without acquiring the memcg->move_lock
> -        * because page moving starts with an RCU grace period.
> -        */
> -       rcu_read_lock();
> -
> -       if (mem_cgroup_disabled())
> -               return;
> -again:
> -       memcg = folio_memcg(folio);
> -       if (unlikely(!memcg))
> -               return;
> -
> -#ifdef CONFIG_PROVE_LOCKING
> -       local_irq_save(flags);
> -       might_lock(&memcg->move_lock);
> -       local_irq_restore(flags);
> -#endif
> -
> -       if (atomic_read(&memcg->moving_account) <= 0)
> -               return;
> -
> -       spin_lock_irqsave(&memcg->move_lock, flags);
> -       if (memcg != folio_memcg(folio)) {
> -               spin_unlock_irqrestore(&memcg->move_lock, flags);
> -               goto again;
> -       }
> -
> -       /*
> -        * When charge migration first begins, we can have multiple
> -        * critical sections holding the fast-path RCU lock and one
> -        * holding the slowpath move_lock. Track the task who has the
> -        * move_lock for folio_memcg_unlock().
> -        */
> -       memcg->move_lock_task = current;
> -       memcg->move_lock_flags = flags;
> -}
> -
> -static void __folio_memcg_unlock(struct mem_cgroup *memcg)
> -{
> -       if (memcg && memcg->move_lock_task == current) {
> -               unsigned long flags = memcg->move_lock_flags;
> -
> -               memcg->move_lock_task = NULL;
> -               memcg->move_lock_flags = 0;
> -
> -               spin_unlock_irqrestore(&memcg->move_lock, flags);
> -       }
> -
> -       rcu_read_unlock();
> -}
> -
> -/**
> - * folio_memcg_unlock - Release the binding between a folio and its memcg.
> - * @folio: The folio.
> - *
> - * This releases the binding created by folio_memcg_lock(). This does
> - * not change the accounting of this folio to its memcg, but it does
> - * permit others to change it.
> - */
> -void folio_memcg_unlock(struct folio *folio)
> -{
> -       __folio_memcg_unlock(folio_memcg(folio));
> -}
> -
>  static u64 mem_cgroup_move_charge_read(struct cgroup_subsys_state *css,
>                                        struct cftype *cft)
>  {
> @@ -1189,7 +1108,6 @@ void memcg1_memcg_init(struct mem_cgroup *memcg)
>  {
>         INIT_LIST_HEAD(&memcg->oom_notify);
>         mutex_init(&memcg->thresholds_lock);
> -       spin_lock_init(&memcg->move_lock);
>         INIT_LIST_HEAD(&memcg->event_list);
>         spin_lock_init(&memcg->event_list_lock);
>  }
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 94279b9c766a..3c223aaeb6af 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1189,7 +1189,6 @@ void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
>   * These functions are safe to use under any of the following conditions:
>   * - folio locked
>   * - folio_test_lru false
> - * - folio_memcg_lock()
>   * - folio frozen (refcount of 0)
>   *
>   * Return: The lruvec this folio is on with its lock held.
> @@ -1211,7 +1210,6 @@ struct lruvec *folio_lruvec_lock(struct folio *folio)
>   * These functions are safe to use under any of the following conditions:
>   * - folio locked
>   * - folio_test_lru false
> - * - folio_memcg_lock()
>   * - folio frozen (refcount of 0)
>   *
>   * Return: The lruvec this folio is on with its lock held and interrupts
> @@ -1235,7 +1233,6 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
>   * These functions are safe to use under any of the following conditions:
>   * - folio locked
>   * - folio_test_lru false
> - * - folio_memcg_lock()
>   * - folio frozen (refcount of 0)
>   *
>   * Return: The lruvec this folio is on with its lock held and interrupts
> @@ -2375,9 +2372,7 @@ static void commit_charge(struct folio *folio, struct mem_cgroup *memcg)
>          *
>          * - the page lock
>          * - LRU isolation
> -        * - folio_memcg_lock()
>          * - exclusive reference
> -        * - mem_cgroup_trylock_pages()
>          */
>         folio->memcg_data = (unsigned long)memcg;
>  }
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 4785a693857a..c6c4d4ea29a7 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -32,7 +32,6 @@
>   *               swap_lock (in swap_duplicate, swap_info_get)
>   *                 mmlist_lock (in mmput, drain_mmlist and others)
>   *                 mapping->private_lock (in block_dirty_folio)
> - *                   folio_lock_memcg move_lock (in block_dirty_folio)
>   *                   i_pages lock (widely used)
>   *                     lruvec->lru_lock (in folio_lruvec_lock_irq)
>   *                       inode->i_lock (in set_page_dirty's __mark_inode_dirty)
> --
> 2.43.5
>