Date: Mon, 21 Feb 2022 16:19:52 +0100
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
To: Michal Hocko
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, Andrew Morton, Johannes Weiner, Michal Koutný, Peter Zijlstra, Thomas Gleixner, Vladimir Davydov, Waiman Long, kernel test robot
Subject: Re: [PATCH v3 5/5] mm/memcg: Protect memcg_stock with a local_lock_t
References: <20220217094802.3644569-1-bigeasy@linutronix.de> <20220217094802.3644569-6-bigeasy@linutronix.de>

On 2022-02-21 15:46:30 [+0100], Michal Hocko wrote:
> On Thu 17-02-22 10:48:02, Sebastian Andrzej Siewior wrote:
> [...]
> > @@ -2266,7 +2273,6 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
> >  	 * as well as workers from this path always operate on the local
> >  	 * per-cpu data. CPU up doesn't touch memcg_stock at all.
> >  	 */
> > -	curcpu = get_cpu();
> 
> Could you make this a separate patch?

Sure.

> >  	for_each_online_cpu(cpu) {
> >  		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
> >  		struct mem_cgroup *memcg;
> > @@ -2282,14 +2288,9 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
> >  		rcu_read_unlock();
> > 
> >  		if (flush &&
> > -		    !test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
> > -			if (cpu == curcpu)
> > -				drain_local_stock(&stock->work);
> > -			else
> > -				schedule_work_on(cpu, &stock->work);
> > -		}
> > +		    !test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags))
> > +			schedule_work_on(cpu, &stock->work);
> 
> Maybe I am missing something, but on !PREEMPT kernels there is nothing
> really guaranteeing that the worker runs, so there should be a
> cond_resched() after the mutex is unlocked. I do not think we want to
> rely on callers to be aware of this subtlety.

There is no such guarantee on PREEMPT kernels either. The worker will be
made runnable and put on the CPU once the scheduler sees fit, and other
workers queued earlier may take precedence.
But I was not aware that the worker _needs_ to run before we return. We
might get migrated right after put_cpu(), so I did not consider this
important.
Should we make a best-effort attempt and wait for the worker on the
current CPU?

> An alternative would be to split out __drain_local_stock which doesn't
> do local_lock.

But isn't the section in drain_local_stock() left unprotected then?

Sebastian
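
For reference, a minimal sketch of the split being discussed: an unlocked
__drain_local_stock() helper plus a wrapper that takes the local lock. The
helper name follows the suggestion above; the stock_lock member and the
exact body are assumptions based on this series, not the actual
mm/memcontrol.c code.

/*
 * Sketch only: assumes memcg_stock_pcp gained a local_lock_t member
 * (here called stock_lock) earlier in this series. The helper does not
 * take the lock itself, so any caller outside the work item has to
 * provide the protection -- which is exactly the question raised above.
 */
static void __drain_local_stock(struct memcg_stock_pcp *stock)
{
	drain_stock(stock);
	clear_bit(FLUSHING_CACHED_CHARGE, &stock->flags);
}

static void drain_local_stock(struct work_struct *dummy)
{
	struct memcg_stock_pcp *stock;
	unsigned long flags;

	/* Work-item path: take the local lock, then drain via the helper. */
	local_lock_irqsave(&memcg_stock.stock_lock, flags);
	stock = this_cpu_ptr(&memcg_stock);
	__drain_local_stock(stock);
	local_unlock_irqrestore(&memcg_stock.stock_lock, flags);
}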