From: Yosry Ahmed <yosryahmed@google.com>
Date: Wed, 24 Jul 2024 16:13:17 -0700
Subject: Re: [PATCH v2 1/5] mm: memcg: don't call propagate_protected_usage() needlessly
To: Roman Gushchin
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner, Michal Hocko, Shakeel Butt, Muchun Song
In-Reply-To: <20240724202103.1210065-2-roman.gushchin@linux.dev>
References: <20240724202103.1210065-1-roman.gushchin@linux.dev> <20240724202103.1210065-2-roman.gushchin@linux.dev>

On Wed, Jul 24, 2024 at 1:21 PM Roman
Gushchin wrote:
>
> Memory protection (min/low) requires a constant tracking of
> protected memory usage. propagate_protected_usage() is called
> on each page counters update and does a number of operations
> even in cases when the actual memory protection functionality
> is not supported (e.g. hugetlb cgroups or memcg swap counters).
>
> It's obviously inefficient and leads to a waste of CPU cycles.
> It can be addressed by calling propagate_protected_usage() only
> for the counters which do support memory guarantees. As of now
> it's only memcg->memory - the unified memory memcg counter.
>
> Signed-off-by: Roman Gushchin
> ---
>  include/linux/page_counter.h |  8 +++++++-
>  mm/hugetlb_cgroup.c          |  4 ++--
>  mm/memcontrol.c              | 16 ++++++++--------
>  mm/page_counter.c            | 16 +++++++++++++---
>  4 files changed, 30 insertions(+), 14 deletions(-)
>
> diff --git a/include/linux/page_counter.h b/include/linux/page_counter.h
> index 860f313182e7..b31fd5b208aa 100644
> --- a/include/linux/page_counter.h
> +++ b/include/linux/page_counter.h
> @@ -32,6 +32,7 @@ struct page_counter {
>  	/* Keep all the read most fields in a separete cacheline. */
>  	CACHELINE_PADDING(_pad2_);
>
> +	bool protection_support;
>  	unsigned long min;
>  	unsigned long low;
>  	unsigned long high;
> @@ -45,12 +46,17 @@ struct page_counter {
>  #define PAGE_COUNTER_MAX (LONG_MAX / PAGE_SIZE)
>  #endif
>
> +/*
> + * Protection is supported only for the first counter (with id 0).
> + */
>  static inline void page_counter_init(struct page_counter *counter,
> -				     struct page_counter *parent)
> +				     struct page_counter *parent,
> +				     bool protection_support)

Would it be better to make this an internal helper (e.g.
__page_counter_init()), and add another API function that passes in
protection_support=true, for example:

static inline void page_counter_init_protected(..)
{
	__page_counter_init(.., true);
}

This will get rid of the naked booleans at the callsites of
page_counter_init(), which are more difficult to interpret. It will
also reduce the diff, as we only need to change the page_counter_init()
calls for memcg->memory.

WDYT?

>  {
>  	atomic_long_set(&counter->usage, 0);
>  	counter->max = PAGE_COUNTER_MAX;
>  	counter->parent = parent;
> +	counter->protection_support = protection_support;
>  }
[..]
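
For reference, a rough sketch of how the suggested split could look,
filling in the wrapper names floated above (this is only an illustration
of the idea, not code taken from the patch):

/* Internal helper; callers should use one of the wrappers below. */
static inline void __page_counter_init(struct page_counter *counter,
				       struct page_counter *parent,
				       bool protection_support)
{
	atomic_long_set(&counter->usage, 0);
	counter->max = PAGE_COUNTER_MAX;
	counter->parent = parent;
	counter->protection_support = protection_support;
}

/* Plain counters (e.g. hugetlb cgroup, memcg swap) keep the old call. */
static inline void page_counter_init(struct page_counter *counter,
				     struct page_counter *parent)
{
	__page_counter_init(counter, parent, false);
}

/* Counters that support min/low protection, i.e. memcg->memory. */
static inline void page_counter_init_protected(struct page_counter *counter,
					       struct page_counter *parent)
{
	__page_counter_init(counter, parent, true);
}

With a split like this, existing page_counter_init() call sites would
stay untouched and only the memcg->memory initialization would move to
the protected variant.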