From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 22 Oct 2019 14:42:41 +0200
From: Michal Hocko
To: Hillf Danton
Cc: linux-mm, Andrew Morton, linux-kernel, Johannes Weiner,
	Shakeel Butt, Minchan Kim, Mel Gorman, Vladimir Davydov, Jan Kara
Subject: Re: [RFC v1] mm: add page preemption
Message-ID: <20191022124241.GM9379@dhcp22.suse.cz>
References: <20191020134304.11700-1-hdanton@sina.com>
 <20191022121439.7164-1-hdanton@sina.com>
In-Reply-To: <20191022121439.7164-1-hdanton@sina.com>

On Tue 22-10-19 20:14:39, Hillf Danton wrote:
> 
> On Mon, 21 Oct 2019 14:27:28 +0200 Michal Hocko wrote:
[...]
> > Why do we care, which workloads would benefit, and by how much?
> 
> Page preemption, disabled by default, should be turned on by those
> who wish the performance of their workloads to survive memory
> pressure to a certain extent.

I am sorry, but this doesn't tell me anything. How come not all
workloads would fit that description?

> The number of pp users is supposed to be near the number of people
> who change the nice value of their apps to -1 or higher at least once
> a week; fewer than vi users among the UK's undergraduates.
> 
> > And last but not least, why doesn't the existing infrastructure
> > help (e.g. if you have clearly defined workloads with different
> > memory consumption requirements, then why don't you use memory
> > cgroups to reflect the priority)?
> 
> Good question :)
> 
> Though pp is implemented by preventing any task from reclaiming as
> many pages as possible from other tasks that are higher in priority,
> it is trying to introduce prio into page reclaiming, to add a
> feature.
> Page and memcg are different objects after all; pp is being added at
> the page granularity. It should be an option available in
> environments without memcg enabled.

So do you actually want to establish LRUs per priority? Why is using
memcgs not an option? They are the main facility for partitioning
reclaimable memory in the first place. You should really focus on
explaining much more thoroughly why a much finer-grained control is
needed.

> What is way different from the protections offered by memory cgroups
> is that pages protected by memcg min/low can't be reclaimed
> regardless of memory pressure. Such a guarantee is not available
> under pp, which only suggests an extra factor to consider when
> deactivating lru pages.

Well, the low limit can be breached if there is no eligible memcg left
to reclaim from. That means you can already shape some sort of
priority by setting the low limit.

[...]

> What was added on the reclaimer side is
> 
> 1, kswapd sets pgdat->kswapd_prio, the switch between the page
>    reclaimer and allocators in terms of prio, to the lowest value
>    before taking a nap.
> 
> 2, any allocator is able to wake up the reclaimer because of the
>    lowest prio, and kswapd starts reclaiming pages using the waker's
>    prio.
> 
> 3, if an allocator comes while kswapd is active, its prio is checked;
>    this is a no-op if kswapd is higher in prio, otherwise the switch
>    is updated with the higher prio.
> 
> 4, every time kswapd raises sc.priority (which starts at
>    DEF_PRIORITY), it checks for a pending update of the switch, and
>    kswapd's prio steps up if there is one; thus its prio never steps
>    down, nor is there prio inversion.
> 
> 5, goto 1 when kswapd finishes its work.

What about direct reclaim? What if pages of a lower priority are hard
to reclaim? Do you want a process of a higher priority to stall longer
just because it has to wait for those lower priority pages?
-- 
Michal Hocko
SUSE Labs