Date: Fri, 25 Oct 2024 08:59:50 +0200
From: Michal Hocko <mhocko@suse.com>
To: Shakeel Butt
Cc: Andrew Morton, Johannes Weiner, Roman Gushchin, Muchun Song,
	Hugh Dickins, Yosry Ahmed, linux-mm@kvack.org, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-doc@vger.kernel.org, Meta kernel team
Subject: Re: [PATCH v1 6/6] memcg-v1: remove memcg move locking code
Message-ID: 
References: <20241025012304.2473312-1-shakeel.butt@linux.dev>
 <20241025012304.2473312-7-shakeel.butt@linux.dev>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20241025012304.2473312-7-shakeel.butt@linux.dev>

On Thu 24-10-24 18:23:03, Shakeel Butt wrote:
> The memcg v1's charge move feature has been deprecated. All the places
> using the memcg move lock, have stopped using it as they don't need the
> protection any more. Let's proceed to remove all the locking code
> related to charge moving.
> 
> Signed-off-by: Shakeel Butt

Thank you for restructuring this. Having all callers gone by now
certainly makes this much safer and easier to review.

Acked-by: Michal Hocko

> ---
> 
> Changes since RFC:
> - Remove the memcg move locking in separate patches.
> 
>  include/linux/memcontrol.h | 54 -------------------------
>  mm/filemap.c               |  1 -
>  mm/memcontrol-v1.c         | 82 --------------------------------------
>  mm/memcontrol.c            |  5 ---
>  mm/rmap.c                  |  1 -
>  5 files changed, 143 deletions(-)
> 
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 798db70b0a30..932534291ca2 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -299,20 +299,10 @@ struct mem_cgroup {
>  	/* For oom notifier event fd */
>  	struct list_head oom_notify;
>  
> -	/* taken only while moving_account > 0 */
> -	spinlock_t move_lock;
> -	unsigned long move_lock_flags;
> -
>  	/* Legacy tcp memory accounting */
>  	bool tcpmem_active;
>  	int tcpmem_pressure;
>  
> -	/*
> -	 * set > 0 if pages under this cgroup are moving to other cgroup.
> -	 */
> -	atomic_t moving_account;
> -	struct task_struct *move_lock_task;
> -
>  	/* List of events which userspace want to receive */
>  	struct list_head event_list;
>  	spinlock_t event_list_lock;
> @@ -428,9 +418,7 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
>   *
>   * - the folio lock
>   * - LRU isolation
> - * - folio_memcg_lock()
>   * - exclusive reference
> - * - mem_cgroup_trylock_pages()
>   *
>   * For a kmem folio a caller should hold an rcu read lock to protect memcg
>   * associated with a kmem folio from being released.
> @@ -499,9 +487,7 @@ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
>   *
>   * - the folio lock
>   * - LRU isolation
> - * - lock_folio_memcg()
>   * - exclusive reference
> - * - mem_cgroup_trylock_pages()
>   *
>   * For a kmem folio a caller should hold an rcu read lock to protect memcg
>   * associated with a kmem folio from being released.
> @@ -1873,26 +1859,6 @@ static inline bool task_in_memcg_oom(struct task_struct *p)
>  	return p->memcg_in_oom;
>  }
>  
> -void folio_memcg_lock(struct folio *folio);
> -void folio_memcg_unlock(struct folio *folio);
> -
> -/* try to stablize folio_memcg() for all the pages in a memcg */
> -static inline bool mem_cgroup_trylock_pages(struct mem_cgroup *memcg)
> -{
> -	rcu_read_lock();
> -
> -	if (mem_cgroup_disabled() || !atomic_read(&memcg->moving_account))
> -		return true;
> -
> -	rcu_read_unlock();
> -	return false;
> -}
> -
> -static inline void mem_cgroup_unlock_pages(void)
> -{
> -	rcu_read_unlock();
> -}
> -
>  static inline void mem_cgroup_enter_user_fault(void)
>  {
>  	WARN_ON(current->in_user_fault);
> @@ -1914,26 +1880,6 @@ unsigned long memcg1_soft_limit_reclaim(pg_data_t *pgdat, int order,
>  	return 0;
>  }
>  
> -static inline void folio_memcg_lock(struct folio *folio)
> -{
> -}
> -
> -static inline void folio_memcg_unlock(struct folio *folio)
> -{
> -}
> -
> -static inline bool mem_cgroup_trylock_pages(struct mem_cgroup *memcg)
> -{
> -	/* to match folio_memcg_rcu() */
> -	rcu_read_lock();
> -	return true;
> -}
> -
> -static inline void mem_cgroup_unlock_pages(void)
> -{
> -	rcu_read_unlock();
> -}
> -
>  static inline bool task_in_memcg_oom(struct task_struct *p)
>  {
>  	return false;
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 630a1c431ea1..e582a1545d2a 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -119,7 +119,6 @@
>   * ->i_pages lock (folio_remove_rmap_pte->set_page_dirty)
>   * bdi.wb->list_lock (folio_remove_rmap_pte->set_page_dirty)
>   * ->inode->i_lock (folio_remove_rmap_pte->set_page_dirty)
> - * ->memcg->move_lock (folio_remove_rmap_pte->folio_memcg_lock)
>   * bdi.wb->list_lock (zap_pte_range->set_page_dirty)
>   * ->inode->i_lock (zap_pte_range->set_page_dirty)
>   * ->private_lock (zap_pte_range->block_dirty_folio)
> diff --git a/mm/memcontrol-v1.c b/mm/memcontrol-v1.c
> index 9c0fba8c8a83..539ceefa9d2d 100644
> --- a/mm/memcontrol-v1.c
> +++ b/mm/memcontrol-v1.c
> @@ -401,87 +401,6 @@ unsigned long memcg1_soft_limit_reclaim(pg_data_t *pgdat, int order,
>  	return nr_reclaimed;
>  }
>  
> -/**
> - * folio_memcg_lock - Bind a folio to its memcg.
> - * @folio: The folio.
> - *
> - * This function prevents unlocked LRU folios from being moved to
> - * another cgroup.
> - *
> - * It ensures lifetime of the bound memcg. The caller is responsible
> - * for the lifetime of the folio.
> - */
> -void folio_memcg_lock(struct folio *folio)
> -{
> -	struct mem_cgroup *memcg;
> -	unsigned long flags;
> -
> -	/*
> -	 * The RCU lock is held throughout the transaction. The fast
> -	 * path can get away without acquiring the memcg->move_lock
> -	 * because page moving starts with an RCU grace period.
> -	 */
> -	rcu_read_lock();
> -
> -	if (mem_cgroup_disabled())
> -		return;
> -again:
> -	memcg = folio_memcg(folio);
> -	if (unlikely(!memcg))
> -		return;
> -
> -#ifdef CONFIG_PROVE_LOCKING
> -	local_irq_save(flags);
> -	might_lock(&memcg->move_lock);
> -	local_irq_restore(flags);
> -#endif
> -
> -	if (atomic_read(&memcg->moving_account) <= 0)
> -		return;
> -
> -	spin_lock_irqsave(&memcg->move_lock, flags);
> -	if (memcg != folio_memcg(folio)) {
> -		spin_unlock_irqrestore(&memcg->move_lock, flags);
> -		goto again;
> -	}
> -
> -	/*
> -	 * When charge migration first begins, we can have multiple
> -	 * critical sections holding the fast-path RCU lock and one
> -	 * holding the slowpath move_lock. Track the task who has the
> -	 * move_lock for folio_memcg_unlock().
> -	 */
> -	memcg->move_lock_task = current;
> -	memcg->move_lock_flags = flags;
> -}
> -
> -static void __folio_memcg_unlock(struct mem_cgroup *memcg)
> -{
> -	if (memcg && memcg->move_lock_task == current) {
> -		unsigned long flags = memcg->move_lock_flags;
> -
> -		memcg->move_lock_task = NULL;
> -		memcg->move_lock_flags = 0;
> -
> -		spin_unlock_irqrestore(&memcg->move_lock, flags);
> -	}
> -
> -	rcu_read_unlock();
> -}
> -
> -/**
> - * folio_memcg_unlock - Release the binding between a folio and its memcg.
> - * @folio: The folio.
> - *
> - * This releases the binding created by folio_memcg_lock(). This does
> - * not change the accounting of this folio to its memcg, but it does
> - * permit others to change it.
> - */
> -void folio_memcg_unlock(struct folio *folio)
> -{
> -	__folio_memcg_unlock(folio_memcg(folio));
> -}
> -
>  static u64 mem_cgroup_move_charge_read(struct cgroup_subsys_state *css,
>  				       struct cftype *cft)
> @@ -1189,7 +1108,6 @@ void memcg1_memcg_init(struct mem_cgroup *memcg)
>  {
>  	INIT_LIST_HEAD(&memcg->oom_notify);
>  	mutex_init(&memcg->thresholds_lock);
> -	spin_lock_init(&memcg->move_lock);
>  	INIT_LIST_HEAD(&memcg->event_list);
>  	spin_lock_init(&memcg->event_list_lock);
>  }
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 94279b9c766a..3c223aaeb6af 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1189,7 +1189,6 @@ void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
>   * These functions are safe to use under any of the following conditions:
>   * - folio locked
>   * - folio_test_lru false
> - * - folio_memcg_lock()
>   * - folio frozen (refcount of 0)
>   *
>   * Return: The lruvec this folio is on with its lock held.
> @@ -1211,7 +1210,6 @@ struct lruvec *folio_lruvec_lock(struct folio *folio)
>   * These functions are safe to use under any of the following conditions:
>   * - folio locked
>   * - folio_test_lru false
> - * - folio_memcg_lock()
>   * - folio frozen (refcount of 0)
>   *
>   * Return: The lruvec this folio is on with its lock held and interrupts
> @@ -1235,7 +1233,6 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
>   * These functions are safe to use under any of the following conditions:
>   * - folio locked
>   * - folio_test_lru false
> - * - folio_memcg_lock()
>   * - folio frozen (refcount of 0)
>   *
>   * Return: The lruvec this folio is on with its lock held and interrupts
> @@ -2375,9 +2372,7 @@ static void commit_charge(struct folio *folio, struct mem_cgroup *memcg)
>   *
>   * - the page lock
>   * - LRU isolation
> - * - folio_memcg_lock()
>   * - exclusive reference
> - * - mem_cgroup_trylock_pages()
>   */
>  	folio->memcg_data = (unsigned long)memcg;
>  }
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 4785a693857a..c6c4d4ea29a7 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -32,7 +32,6 @@
>   * swap_lock (in swap_duplicate, swap_info_get)
>   * mmlist_lock (in mmput, drain_mmlist and others)
>   * mapping->private_lock (in block_dirty_folio)
> - * folio_lock_memcg move_lock (in block_dirty_folio)
>   * i_pages lock (widely used)
>   * lruvec->lru_lock (in folio_lruvec_lock_irq)
>   * inode->i_lock (in set_page_dirty's __mark_inode_dirty)
> -- 
> 2.43.5

-- 
Michal Hocko
SUSE Labs