Date: Sat, 22 Apr 2023 22:10:02 -0300
From: Marcelo Tosatti <mtosatti@redhat.com>
To: Michal Hocko
Cc: Frederic Weisbecker, Andrew Morton, Christoph Lameter, Aaron Tomlin,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org, Russell King,
	Huacai Chen, Heiko Carstens, x86@kernel.org, Vlastimil Babka
Subject: Re: [PATCH v7 00/13] fold per-CPU vmstats remotely
References: <20230320180332.102837832@redhat.com>
 <20230418150200.027528c155853fea8e4f58b2@linux-foundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

On Thu, Apr 20, 2023 at 10:40:25AM +0200, Michal Hocko wrote:
> On Wed 19-04-23 13:35:12, Marcelo Tosatti wrote:
> [...]
> > This is a burden for application writers and for system configuration.
> 
> Yes. And I find it reasonable to expect that burden put there as there
> are non-trivial requirements for those workloads anyway. It is not an
> out-of-the-box thing, right?

See below.

> > Or it could be done automatically (from outside of the application).
> >
> > Which is what is described and implemented here:
> >
> > https://lore.kernel.org/lkml/20220204173537.429902988@fedora.localdomain/
> >
> > "Task isolation is divided in two main steps: configuration and
> > activation.
> >
> > Each step can be performed by an external tool or the latency
> > sensitive application itself. util-linux contains the "chisol" tool
> > for this purpose."
> 
> I cannot say I would be a fan of prctl interfaces in general but I do
> agree with the overall idea of forcing a quiescent state on a set of
> CPUs. This has been avoided with success so far.
> 
> But not only that, the second thing is:
> 
> > "> Another important point is this: if an application dirties
> > > its own per-CPU vmstat cache, while performing a system call,
> >
> > Or while handling a VM-exit from a vCPU.
> 
> Do you have any specific examples on this?

Interrupt handling freeing a page:
handle_access_fault (ARM64) /
handle_changed_spte_acc_track (x86) ->
  kvm_set_pfn_accessed ->
    kvm_set_page_accessed ->
      mark_page_accessed ->
        folio_mark_accessed ->
          folio_activate ->
            folio_activate_fn ->
              lruvec_add_folio ->
                update_lru_size ->
                  __update_lru_size ->
                    __mod_zone_page_state(&pgdat->node_zones[zid],
                                          NR_ZONE_LRU_BASE + lru, nr_pages);

The other option would be to _FORBID_ use of __mod_zone_page_state in
certain code sections.

> > These are, in my mind, sufficient reasons to discard the "flush per-cpu
> > caches" idea. This is also why I chose to abandon the prctl interface
> > patchset.
> > > and a vmstat sync event is triggered on a different CPU, you'd have to:
> > > 1) Wait for that CPU to return to userspace and sync its stats
> > > (unfeasible).
> > > 2) Queue work to execute on that CPU (undesirable, as that causes
> > > an interruption).
> > > 3) Remotely sync the vmstat for that CPU."
> > So the only option is to remotely sync vmstat for the CPU
> > (unless you have a better suggestion).
> 
> `echo 1 > /proc/sys/vm/stat_refresh' achieves essentially the same
> without any kernel changes.

It is unsuitable: you'd have to guarantee that, by the time the write()
system call to that file returns, there has been no other
mod_zone_page_state call. For example, no interrupt or exception may
free or allocate a page through rmqueue (which updates the
NR_FREE_PAGES counter), and bounce_end_io must not be called (since it
calls dec_zone_page_state). It has been used internally as a
workaround, but it is not reliable.

> But let me repeat, this is not just about vmstats. Just have a look at
> other queue_work_on users. You do not want to hand-pick each and every
> one and do so in the future as well.

The ones that are problematic have been getting fixed for some time now.
For example:

    commit 2de79ee27fdb52626ac4ac48ec6d8d52ba6f9047
    Author: Paolo Abeni
    Date:   Thu Sep 10 23:33:18 2020 +0200

        net: try to avoid unneeded backlog flush

        flush_all_backlogs() may cause deadlock on systems
        running processes with FIFO scheduling policy.

        The above is critical in -RT scenarios, where user-space
        specifically ensure no network activity is scheduled on
        the CPU running the mentioned FIFO process, but still get
        stuck.

        This commit tries to address the problem checking the
        backlog status on the remote CPUs before scheduling the
        flush operation. If the backlog is empty, we can skip it.

        v1 -> v2:
         - explicitly clear flushed cpu mask - Eric

        Signed-off-by: Paolo Abeni
        Signed-off-by: David S. Miller

And it has been a normal process so far.

I think what needs to be done is to avoid new queue_work_on() users
from being introduced in the tree (the number of existing ones is
finite and can therefore be fixed).