From: Mina Almasry <almasrymina@google.com>
Date: Tue, 13 Dec 2022 11:53:42 -0800
Subject: Re: [PATCH v3] mm: Add nodes= arg to memory.reclaim
To: Johannes Weiner
Cc: Michal Hocko, Tejun Heo, Zefan Li, Jonathan Corbet, Roman Gushchin,
 Shakeel Butt, Muchun Song, Andrew Morton, Huang Ying, Yang Shi,
 Yosry Ahmed, weixugc@google.com, fvdl@google.com, bagasdotme@gmail.com,
 cgroups@vger.kernel.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
References: <20221202223533.1785418-1-almasrymina@google.com>
On Tue, Dec 13, 2022 at 7:58 AM Johannes Weiner wrote:
>
> On Tue, Dec 13, 2022 at 09:33:24AM +0100, Michal Hocko wrote:
> > I do recognize your need to control the demotion but I argue that it
> > is a bad idea to rely on an implicit behavior of the memory reclaim
> > and an interface which is _documented_ to primarily _reclaim_ memory.
>
> I think memory.reclaim should demote as part of page aging. What I'd
> like to avoid is *having* to manually control the aging component in
> the interface (e.g. making memory.reclaim *only* reclaim, and
> *requiring* a coordinated use of memory.demote to ensure progress.)
>
> > Really, consider that the current demotion implementation will
> > change in the future and based on a newly added heuristic memory
> > reclaim or compression would be preferred over migration to a
> > different tier. This might completely break your current assumptions
> > and break your usecase which relies on an implicit demotion
> > behavior. Do you see that as a potential problem at all? What shall
> > we do in that case? Special-case memory.reclaim behavior?
>
> Shouldn't that be derived from the distance properties in the tier
> configuration?
>
> I.e. if local compression is faster than demoting to a slower node, we
> should maybe have a separate tier for that. Ignoring proactive reclaim
> or demotion commands for a second: on that node, global memory
> pressure should always compress first, while the oldest pages from the
> compression cache should demote to the other node(s) - until they
> eventually get swapped out.
>
> However fine-grained we make proactive reclaim control over these
> stages, it should at least be possible for the user to request the
> default behavior that global pressure follows, without jumping through
> hoops or requiring the coordinated use of multiple knobs. So IMO there
> is an argument for having a singular knob that requests comprehensive
> aging and reclaiming across the configured hierarchy.
>
> As far as explicit control over the individual stages goes - no idea
> if you would call the compression stage demotion or reclaim. The
> distinction still does not make much sense to me, since reclaim is
> just another form of demotion. Sure, page faults have a different
> access latency than dax to slower memory. But you could also have 3
> tiers of memory where the difference between tier 1 and 2 is much
> smaller than the difference between 2 and 3, and you might want to
> apply different demotion rates between them as well.
>
> The other argument is that demotion does not free cgroup memory,
> whereas reclaim does.
> But with multiple memory tiers of vastly different performance, isn't
> there also an argument for granting cgroups different shares of each
> memory tier? So that a higher-priority group has access to a bigger
> share of the fastest memory, and lower-prio cgroups are relegated to
> lower tiers. If we split those pools, then "demotion" will actually
> free memory in a cgroup.
>

I would also like to say I implemented something in line with that in
[1]. In that patch, pages demoted from inside the nodemask to outside
the nodemask count as 'reclaimed'. This, in my mind, is a very generic
solution to the 'should demoted pages count as reclaim?' problem, and
it will work in all scenarios as long as the nodemask passed to
shrink_folio_list() is set correctly by the call stack.

> This is why I liked adding a nodes= argument to memory.reclaim the
> best. It doesn't encode a distinction that may not last for long.
>
> The problem comes from how to interpret the input argument and the
> return value, right? Could we solve this by requiring the passed
> nodes= to all be of the same memory tier? Then there is no confusion
> around what is requested and what the return value means.
>

I feel like I arrived at a better solution in [1], where pages demoted
from inside of the nodemask to outside of it count as reclaimed and the
rest don't. But yes, I think we could also solve this by explicitly
checking that the nodes= arg all comes from the same tier.

> And if no nodes are passed, it means reclaim (from the lowest memory
> tier) X pages and demote as needed, then return the reclaimed pages.
>
> > Now to your specific usecase. If there is a need to do a memory
> > distribution balancing then fine but this should be a well-defined
> > interface. E.g. is there a need to not only control demotion but
> > promotions as well?
> > I haven't heard anybody requesting that so far but I can easily
> > imagine that, like outsourcing the memory reclaim to the userspace,
> > someone might want to do the same thing with the numa balancing
> > because $REASONS. Should that ever happen, I am pretty sure hooking
> > into memory.reclaim is not really a great idea.
>
> Should this ever happen, it would seem fair for that to be a separate
> knob anyway, no? One knob to move the pipeline in one direction
> (aging), one knob to move it the other way.

[1] https://lore.kernel.org/linux-mm/20221206023406.3182800-1-almasrymina@google.com/
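For what it's worth, the accounting rule I proposed can be sketched in a
few lines of pseudo-Python. The function and field names below are mine
for illustration, not the actual patch's:

```python
# Sketch of the proposed accounting: a page demoted from a node inside
# the reclaim nodemask to a node outside it counts as reclaimed, since
# it left the set of nodes the caller asked to shrink. A freed page
# always counts. Names are illustrative, not from the real patch.

def count_reclaimed(outcomes, nodemask):
    """outcomes: list of (src_node, dst_node) per shrunk page, where
    dst_node is None if the page was freed rather than demoted."""
    reclaimed = 0
    for src, dst in outcomes:
        if src not in nodemask:
            continue            # not a node we were asked to shrink
        if dst is None or dst not in nodemask:
            reclaimed += 1      # freed, or demoted out of the mask
    return reclaimed

# Reclaim targeting nodes {0, 1}; node 2 is a slower tier outside it.
outcomes = [
    (0, None),  # freed: counts as reclaimed
    (0, 2),     # demoted out of the mask: counts as reclaimed
    (1, 0),     # demoted within the mask: does not count
]
print(count_reclaimed(outcomes, {0, 1}))  # -> 2
```

With no nodes= argument the mask would cover all nodes, so only freed
pages count, which matches the existing memory.reclaim semantics.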