From: Hillf Danton
To: Frederic Weisbecker
Cc: Marcelo Tosatti, atomlin@atomlin.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v13 3/6] mm/vmstat: manage per-CPU stats from CPU context when NOHZ full
Date: Wed, 11 Jan 2023 07:58:22 +0800
Message-Id: <20230110235822.456-1-hdanton@sina.com>
In-Reply-To: 
References: <20230110151901.402-1-hdanton@sina.com>

On 10 Jan 2023 17:12:22 +0100 Frederic Weisbecker wrote:
> On Tue, Jan 10, 2023 at 11:19:01PM +0800, Hillf Danton wrote:
> > On Tue, 10 Jan 2023 08:50:28 -0300 Marcelo Tosatti wrote:
> > > On Tue, Jan 10, 2023 at 10:43:56AM +0800, Hillf Danton wrote:
> > > > On 9 Jan 2023 11:12:49 -0300 Marcelo Tosatti wrote:
> > > > >
> > > > > Yes, but if you do not return to userspace, then the per-CPU vm
> > > > > statistics can be dirty indefinitely.
> > > >
> > > > Could you specify the reasons for failing to return to userspace,
> > > > given that it is undesired interference for the shepherd to queue work
> > > > on the isolated CPUs.
> > >
> > > Any system call that takes longer than the threshold to sync vmstats.
> >
> > Which ones?
> >
> > If schedule() occurs during a syscall, because of acquiring a mutex for
> > instance, then anything on the isolated runqueue, including the workqueue
> > worker the shepherd wakes up, can burn CPU cycles without producing
> > undesired interference.
>
> The above confuses me. How would other tasks help with syscalls that take
> too long to service?

On one hand, given no scheduling in userspace, there is no chance for other
tasks to interfere after returning to userspace. On the other hand, a
schedule() during a syscall is the right time to sync vmstats, for example.
But no vmstats can be updated without the work queued by the shepherd.

In a nutshell, no interference can happen without scheduling, and how the
work is queued does not matter. So the current shepherd behavior is
preferred.

> > > Or a long running kernel thread, for example:
> >
> > It is a buggyyyy example.
> >
> > > https://stackoverflow.com/questions/65111483/long-running-kthread-and-synchronize-net
>
> I can imagine a CPU spending most of its time processing networking packets
> through interrupts/softirq within ksoftirqd/NAPI while another CPU processes
> these packets in userspace.
>
> In this case the CPU handling the kernel part can theoretically never go to
> idle/user. nohz_full isn't optimized toward such a job but there is nothing
> to prevent it from doing such a job.

A simple FIFO task launched by an administrator can get a CPU out of the
scheduler's control for a week, regardless of isolation.
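
Purely as an illustration of that last point (not part of the patch series;
the CPU number and RT priority below are arbitrary assumptions), a pinned
SCHED_FIFO busy loop is all it takes:

/*
 * Illustration only, not from the patch under discussion: a pinned
 * SCHED_FIFO busy loop.  CPU 3 and priority 10 are arbitrary assumptions.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t set;
	struct sched_param sp = { .sched_priority = 10 };

	/* Pin to the (assumed) isolated CPU. */
	CPU_ZERO(&set);
	CPU_SET(3, &set);
	if (sched_setaffinity(0, sizeof(set), &set))
		perror("sched_setaffinity");

	/* Requires root or CAP_SYS_NICE. */
	if (sched_setscheduler(0, SCHED_FIFO, &sp)) {
		perror("sched_setscheduler");
		return 1;
	}

	/*
	 * Never blocks, never yields: CFS tasks on this CPU, including any
	 * kworker the vmstat shepherd queues work on, only get whatever
	 * slice RT throttling (sched_rt_runtime_us) leaves them, if any.
	 */
	for (;;)
		;
}

With default RT throttling, CFS still gets a small slice of each period; with
throttling disabled, the CPU stays out of reach for as long as the task runs.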