From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 3 Nov 2022 17:14:07 +0000
Subject: Re: [PATCH] mm: convert mm's rss stats into percpu_counter
From: Shakeel Butt
To: Marek Szyprowski
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Message-ID: <20221103171407.ydubp43x7tzahriq@google.com>
References: <20221024052841.3291983-1-shakeelb@google.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
On Wed, Nov 02, 2022 at 10:09:57PM +0100, Marek Szyprowski wrote:
> Hi
>
> On 24.10.2022 07:28, Shakeel Butt wrote:
> > Currently mm_struct maintains rss_stats which are updated on the page
> > fault and unmapping codepaths. For the page fault codepath the updates
> > are cached per thread, with a batch of TASK_RSS_EVENTS_THRESH, which is
> > 64. The reason for the caching is performance for multithreaded
> > applications; otherwise the rss_stats updates may become a hotspot for
> > such applications.
> >
> > However, this optimization comes at the cost of an error margin in the
> > rss stats. The rss_stats for applications with a large number of threads
> > can be very skewed. At worst the error margin is (nr_threads * 64), and
> > we have a lot of applications with hundreds of threads, so the error
> > margin can be very high. Internally we had to reduce
> > TASK_RSS_EVENTS_THRESH to 32.
> >
> > Recently we started seeing unbounded errors in the rss_stats for
> > specific applications which use TCP receive zerocopy (rx0cp). It seems
> > like the vm_insert_pages() codepath does not sync rss_stats at all.
> >
> > This patch converts the rss_stats into percpu_counters, which changes
> > the error margin from (nr_threads * 64) to approximately (nr_cpus ^ 2).
> > In addition, this conversion enables us to get accurate stats in
> > situations where accuracy is more important than the cpu cost, though
> > this patch does not make such tradeoffs.
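
As an illustration of the caching described above, the per-thread batching
can be modelled as a tiny userspace C program (RSS_EVENTS_THRESH, struct
task and task_add_page() below are simplified stand-ins for the kernel's
per-task caching, not the actual mm code):

#include <stdio.h>

#define RSS_EVENTS_THRESH 64	/* stand-in for TASK_RSS_EVENTS_THRESH */

static long shared_rss;		/* models the mm-wide counter */

struct task {
	long cached;		/* per-thread updates not yet folded in */
};

/* Account one freshly mapped page; flush to the shared counter per batch. */
static void task_add_page(struct task *t)
{
	if (++t->cached >= RSS_EVENTS_THRESH) {
		shared_rss += t->cached;
		t->cached = 0;
	}
}

int main(void)
{
	static struct task tasks[100];
	long mapped = 0;
	int i, j;

	/* 100 threads each fault in 63 pages: none of it is flushed yet. */
	for (i = 0; i < 100; i++)
		for (j = 0; j < 63; j++, mapped++)
			task_add_page(&tasks[i]);

	printf("pages mapped: %ld, rss reported: %ld\n", mapped, shared_rss);
	return 0;
}

With 100 threads each one event short of the batch, the program reports
"pages mapped: 6300, rss reported: 0" -- the (nr_threads * 64) worst case
described above.
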
> >
> > Signed-off-by: Shakeel Butt
>
> This patch landed recently in linux-next as commit d59f19a7a068 ("mm:
> convert mm's rss stats into percpu_counter"). Unfortunately it causes a
> regression on my test systems. I've noticed that it triggers a 'BUG: Bad
> rss-counter state' warning from time to time for random processes. This
> is somehow related to CPU hot-plug and/or system suspend/resume. The
> easiest way to reproduce this issue (although not always) on my test
> systems (ARM or ARM64 based) is to run the following commands:
>
> root@target:~# for i in /sys/devices/system/cpu/cpu[1-9]; do echo 0 >$i/online;
> BUG: Bad rss-counter state mm:f04c7160 type:MM_FILEPAGES val:1
> BUG: Bad rss-counter state mm:50f1f502 type:MM_FILEPAGES val:2
> BUG: Bad rss-counter state mm:50f1f502 type:MM_ANONPAGES val:15
> BUG: Bad rss-counter state mm:63660fd0 type:MM_FILEPAGES val:2
> BUG: Bad rss-counter state mm:63660fd0 type:MM_ANONPAGES val:15
>
> Let me know if I can help debugging this somehow or testing a fix.
>

Hi Marek,

Thanks for the report. It seems like there is a race between
for_each_online_cpu() in __percpu_counter_sum() and
percpu_counter_cpu_dead()/cpu-offlining. Normally this race is fine for
percpu_counter users, but check_mm() is not happy with it (a toy userspace
model of the race window is sketched after the patch). Can you please try
the following patch:


From: Shakeel Butt
Date: Thu, 3 Nov 2022 06:05:13 +0000
Subject: [PATCH] mm: percpu_counter: use race-free percpu_counter sum interface

percpu_counter_sum() can race with cpu offlining. Add a new interface
which does not race with it and use that for check_mm().
---
 include/linux/percpu_counter.h | 11 +++++++++++
 kernel/fork.c                  |  2 +-
 lib/percpu_counter.c           | 24 ++++++++++++++++++------
 3 files changed, 30 insertions(+), 7 deletions(-)

diff --git a/include/linux/percpu_counter.h b/include/linux/percpu_counter.h
index bde6c4c1f405..3070c1043acf 100644
--- a/include/linux/percpu_counter.h
+++ b/include/linux/percpu_counter.h
@@ -45,6 +45,7 @@ void percpu_counter_set(struct percpu_counter *fbc, s64 amount);
 void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount,
 			      s32 batch);
 s64 __percpu_counter_sum(struct percpu_counter *fbc);
+s64 __percpu_counter_sum_all(struct percpu_counter *fbc);
 int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch);
 void percpu_counter_sync(struct percpu_counter *fbc);
 
@@ -85,6 +86,11 @@ static inline s64 percpu_counter_sum(struct percpu_counter *fbc)
 	return __percpu_counter_sum(fbc);
 }
 
+static inline s64 percpu_counter_sum_all(struct percpu_counter *fbc)
+{
+	return __percpu_counter_sum_all(fbc);
+}
+
 static inline s64 percpu_counter_read(struct percpu_counter *fbc)
 {
 	return fbc->count;
@@ -193,6 +199,11 @@ static inline s64 percpu_counter_sum(struct percpu_counter *fbc)
 	return percpu_counter_read(fbc);
 }
 
+static inline s64 percpu_counter_sum_all(struct percpu_counter *fbc)
+{
+	return percpu_counter_read(fbc);
+}
+
 static inline bool percpu_counter_initialized(struct percpu_counter *fbc)
 {
 	return true;
diff --git a/kernel/fork.c b/kernel/fork.c
index 9c32f593ef11..7d6f510cf397 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -756,7 +756,7 @@ static void check_mm(struct mm_struct *mm)
 			 "Please make sure 'struct resident_page_types[]' is updated as well");
 
 	for (i = 0; i < NR_MM_COUNTERS; i++) {
-		long x = percpu_counter_sum(&mm->rss_stat[i]);
+		long x = percpu_counter_sum_all(&mm->rss_stat[i]);
 
 		if (unlikely(x))
 			pr_alert("BUG: Bad rss-counter state mm:%p type:%s val:%ld\n",
diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index ed610b75dc32..f26a1a5df399 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -117,11 +117,8 @@ void percpu_counter_sync(struct percpu_counter *fbc)
 }
 EXPORT_SYMBOL(percpu_counter_sync);
 
-/*
- * Add up all the per-cpu counts, return the result.  This is a more accurate
- * but much slower version of percpu_counter_read_positive()
- */
-s64 __percpu_counter_sum(struct percpu_counter *fbc)
+static s64 __percpu_counter_sum_mask(struct percpu_counter *fbc,
+				     const struct cpumask *cpu_mask)
 {
 	s64 ret;
 	int cpu;
@@ -129,15 +126,30 @@ s64 __percpu_counter_sum(struct percpu_counter *fbc)
 
 	raw_spin_lock_irqsave(&fbc->lock, flags);
 	ret = fbc->count;
-	for_each_online_cpu(cpu) {
+	for_each_cpu(cpu, cpu_mask) {
 		s32 *pcount = per_cpu_ptr(fbc->counters, cpu);
 		ret += *pcount;
 	}
 	raw_spin_unlock_irqrestore(&fbc->lock, flags);
 	return ret;
 }
+
+/*
+ * Add up all the per-cpu counts, return the result.  This is a more accurate
+ * but much slower version of percpu_counter_read_positive()
+ */
+s64 __percpu_counter_sum(struct percpu_counter *fbc)
+{
+	return __percpu_counter_sum_mask(fbc, cpu_online_mask);
+}
 EXPORT_SYMBOL(__percpu_counter_sum);
 
+s64 __percpu_counter_sum_all(struct percpu_counter *fbc)
+{
+	return __percpu_counter_sum_mask(fbc, cpu_possible_mask);
+}
+EXPORT_SYMBOL(__percpu_counter_sum_all);
+
 int __percpu_counter_init(struct percpu_counter *fbc, s64 amount, gfp_t gfp,
 			  struct lock_class_key *key)
 {
--
2.38.1.431.g37b22c650d-goog
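
For reference, here is the toy userspace model of the race window mentioned
above (an illustration only -- sum_mask(), cpu_online[] and the values are
made-up stand-ins, not the kernel's percpu_counter implementation). The idea
is that a CPU can drop out of the online mask while its per-cpu delta has not
yet been folded into fbc->count by percpu_counter_cpu_dead(), so an
online-only walk misreports the total while a walk over all possible CPUs
still sees it:

#include <stdio.h>
#include <stdbool.h>

#define NR_CPUS 4

static long fbc_count;			/* models fbc->count */
static long pcpu[NR_CPUS];		/* models fbc->counters */
static bool cpu_online[NR_CPUS] = { true, true, true, true };

/* Models __percpu_counter_sum_mask(): sum over a chosen set of CPUs. */
static long sum_mask(bool only_online)
{
	long ret = fbc_count;
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		if (!only_online || cpu_online[cpu])
			ret += pcpu[cpu];
	return ret;
}

int main(void)
{
	/* Earlier batched flushes left the central count at 15 ... */
	fbc_count = 15;
	/* ... CPU 3 still holds the balancing -15 delta and goes offline
	 * before the hotplug callback folds it back into the count. */
	pcpu[3] = -15;
	cpu_online[3] = false;

	printf("online CPUs only:  %ld\n", sum_mask(true));  /* 15: bogus state */
	printf("all possible CPUs: %ld\n", sum_mask(false)); /*  0: true value */
	return 0;
}

The online-only sum reports 15 even though everything has been accounted for,
which matches the "Bad rss-counter state ... val:15" lines in the report,
while the possible-mask walk used by percpu_counter_sum_all() returns 0.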