Date: Wed, 03 Jun 2020 16:02:11 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, alex.shi@linux.alibaba.com, bsingharora@gmail.com,
 guro@fb.com, hannes@cmpxchg.org, hughd@google.com, iamjoonsoo.kim@lge.com,
 kirill@shutemov.name, linux-mm@kvack.org, mhocko@suse.com,
 mm-commits@vger.kernel.org, shakeelb@google.com, torvalds@linux-foundation.org
Subject: [patch 097/131] mm: memcontrol: prepare swap controller setup for integration
Message-ID: <20200603230211.LhC1jsO1A%akpm@linux-foundation.org>
In-Reply-To: <20200603155549.e041363450869eaae4c7f05b@linux-foundation.org>

From: Johannes Weiner
Subject: mm: memcontrol: prepare swap controller setup for integration

A few cleanups to streamline the swap controller setup:

- Replace the do_swap_account flag with cgroup_memory_noswap.  This
  brings it in line with other functionality that is usually available
  unless explicitly opted out of - nosocket, nokmem.

- Remove the really_do_swap_account flag that stores the boot option
  and is later used to switch the do_swap_account.  It's not clear why
  this indirection is/was necessary.  Use do_swap_account directly.

- Minor coding style polishing

Link: http://lkml.kernel.org/r/20200508183105.225460-15-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner
Reviewed-by: Joonsoo Kim
Cc: Alex Shi
Cc: Hugh Dickins
Cc: "Kirill A. Shutemov"
Cc: Michal Hocko
Cc: Roman Gushchin
Cc: Shakeel Butt
Cc: Balbir Singh
Signed-off-by: Andrew Morton
---

 include/linux/memcontrol.h |    2 -
 mm/memcontrol.c            |   59 ++++++++++++++++-------------------
 mm/swap_cgroup.c           |    4 +-
 3 files changed, 31 insertions(+), 34 deletions(-)

--- a/include/linux/memcontrol.h~mm-memcontrol-prepare-swap-controller-setup-for-integration
+++ a/include/linux/memcontrol.h
@@ -558,7 +558,7 @@ struct mem_cgroup *mem_cgroup_get_oom_gr
 void mem_cgroup_print_oom_group(struct mem_cgroup *memcg);
 
 #ifdef CONFIG_MEMCG_SWAP
-extern int do_swap_account;
+extern bool cgroup_memory_noswap;
 #endif
 
 struct mem_cgroup *lock_page_memcg(struct page *page);
--- a/mm/memcontrol.c~mm-memcontrol-prepare-swap-controller-setup-for-integration
+++ a/mm/memcontrol.c
@@ -83,10 +83,14 @@ static bool cgroup_memory_nokmem;
 
 /* Whether the swap controller is active */
 #ifdef CONFIG_MEMCG_SWAP
-int do_swap_account __read_mostly;
+#ifdef CONFIG_MEMCG_SWAP_ENABLED
+bool cgroup_memory_noswap __read_mostly;
 #else
-#define do_swap_account		0
-#endif
+bool cgroup_memory_noswap __read_mostly = 1;
+#endif /* CONFIG_MEMCG_SWAP_ENABLED */
+#else
+#define cgroup_memory_noswap		1
+#endif /* CONFIG_MEMCG_SWAP */
 
 #ifdef CONFIG_CGROUP_WRITEBACK
 static DECLARE_WAIT_QUEUE_HEAD(memcg_cgwb_frn_waitq);
@@ -95,7 +99,7 @@ static DECLARE_WAIT_QUEUE_HEAD(memcg_cgw
 /* Whether legacy memory+swap accounting is active */
 static bool do_memsw_account(void)
 {
-	return !cgroup_subsys_on_dfl(memory_cgrp_subsys) && do_swap_account;
+	return !cgroup_subsys_on_dfl(memory_cgrp_subsys) && !cgroup_memory_noswap;
 }
 
 #define THRESHOLDS_EVENTS_TARGET 128
@@ -6528,18 +6532,19 @@ int mem_cgroup_charge(struct page *page,
 		/*
 		 * Every swap fault against a single page tries to charge the
 		 * page, bail as early as possible.  shmem_unuse() encounters
-		 * already charged pages, too.  The USED bit is protected by
-		 * the page lock, which serializes swap cache removal, which
+		 * already charged pages, too.  page->mem_cgroup is protected
+		 * by the page lock, which serializes swap cache removal, which
 		 * in turn serializes uncharging.
 		 */
 		VM_BUG_ON_PAGE(!PageLocked(page), page);
 		if (compound_head(page)->mem_cgroup)
 			goto out;
 
-		if (do_swap_account) {
+		if (!cgroup_memory_noswap) {
 			swp_entry_t ent = { .val = page_private(page), };
-			unsigned short id = lookup_swap_cgroup_id(ent);
+			unsigned short id;
 
+			id = lookup_swap_cgroup_id(ent);
 			rcu_read_lock();
 			memcg = mem_cgroup_from_id(id);
 			if (memcg && !css_tryget_online(&memcg->css))
@@ -7012,7 +7017,7 @@ int mem_cgroup_try_charge_swap(struct pa
 	struct mem_cgroup *memcg;
 	unsigned short oldid;
 
-	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) || !do_swap_account)
+	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) || cgroup_memory_noswap)
 		return 0;
 
 	memcg = page->mem_cgroup;
@@ -7056,7 +7061,7 @@ void mem_cgroup_uncharge_swap(swp_entry_
 	struct mem_cgroup *memcg;
 	unsigned short id;
 
-	if (!do_swap_account)
+	if (cgroup_memory_noswap)
 		return;
 
 	id = swap_cgroup_record(entry, 0, nr_pages);
@@ -7079,7 +7084,7 @@ long mem_cgroup_get_nr_swap_pages(struct
 {
 	long nr_swap_pages = get_nr_swap_pages();
 
-	if (!do_swap_account || !cgroup_subsys_on_dfl(memory_cgrp_subsys))
+	if (cgroup_memory_noswap || !cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return nr_swap_pages;
 	for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg))
 		nr_swap_pages = min_t(long, nr_swap_pages,
@@ -7096,7 +7101,7 @@ bool mem_cgroup_swap_full(struct page *p
 
 	if (vm_swap_full())
 		return true;
-	if (!do_swap_account || !cgroup_subsys_on_dfl(memory_cgrp_subsys))
+	if (cgroup_memory_noswap || !cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return false;
 
 	memcg = page->mem_cgroup;
@@ -7114,22 +7119,15 @@ bool mem_cgroup_swap_full(struct page *p
 	return false;
 }
 
-/* for remember boot option*/
-#ifdef CONFIG_MEMCG_SWAP_ENABLED
-static int really_do_swap_account __initdata = 1;
-#else
-static int really_do_swap_account __initdata;
-#endif
-
-static int __init enable_swap_account(char *s)
+static int __init setup_swap_account(char *s)
 {
 	if (!strcmp(s, "1"))
-		really_do_swap_account = 1;
+		cgroup_memory_noswap = 0;
 	else if (!strcmp(s, "0"))
-		really_do_swap_account = 0;
+		cgroup_memory_noswap = 1;
 	return 1;
 }
-__setup("swapaccount=", enable_swap_account);
+__setup("swapaccount=", setup_swap_account);
 
 static u64 swap_current_read(struct cgroup_subsys_state *css,
 			     struct cftype *cft)
@@ -7226,7 +7224,7 @@ static struct cftype swap_files[] = {
 	{ }	/* terminate */
 };
 
-static struct cftype memsw_cgroup_files[] = {
+static struct cftype memsw_files[] = {
 	{
 		.name = "memsw.usage_in_bytes",
 		.private = MEMFILE_PRIVATE(_MEMSWAP, RES_USAGE),
@@ -7255,13 +7253,12 @@ static struct cftype memsw_cgroup_files[
 
 static int __init mem_cgroup_swap_init(void)
 {
-	if (!mem_cgroup_disabled() && really_do_swap_account) {
-		do_swap_account = 1;
-		WARN_ON(cgroup_add_dfl_cftypes(&memory_cgrp_subsys,
-					       swap_files));
-		WARN_ON(cgroup_add_legacy_cftypes(&memory_cgrp_subsys,
-						  memsw_cgroup_files));
-	}
+	if (mem_cgroup_disabled() || cgroup_memory_noswap)
+		return 0;
+
+	WARN_ON(cgroup_add_dfl_cftypes(&memory_cgrp_subsys, swap_files));
+	WARN_ON(cgroup_add_legacy_cftypes(&memory_cgrp_subsys, memsw_files));
+
 	return 0;
 }
 subsys_initcall(mem_cgroup_swap_init);
--- a/mm/swap_cgroup.c~mm-memcontrol-prepare-swap-controller-setup-for-integration
+++ a/mm/swap_cgroup.c
@@ -171,7 +171,7 @@ int swap_cgroup_swapon(int type, unsigne
 	unsigned long length;
 	struct swap_cgroup_ctrl *ctrl;
 
-	if (!do_swap_account)
+	if (cgroup_memory_noswap)
 		return 0;
 
 	length = DIV_ROUND_UP(max_pages, SC_PER_PAGE);
@@ -209,7 +209,7 @@ void swap_cgroup_swapoff(int type)
 	unsigned long i, length;
 	struct swap_cgroup_ctrl *ctrl;
 
-	if (!do_swap_account)
+	if (cgroup_memory_noswap)
 		return;
 
 	mutex_lock(&swap_cgroup_mutex);
_
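
The net effect of the patch is easiest to see in isolation.  Below is a
minimal standalone sketch (plain userspace C, not kernel code, and not
part of this patch) of the flag handling that results: swap accounting
defaults to on when the kernel is built with CONFIG_MEMCG_SWAP_ENABLED,
and the swapaccount= boot option now flips cgroup_memory_noswap directly
instead of going through really_do_swap_account.  The main() driver is
purely illustrative.

	#include <stdbool.h>
	#include <stdio.h>
	#include <string.h>

	/* false mirrors CONFIG_MEMCG_SWAP_ENABLED=y: swap controller active */
	static bool cgroup_memory_noswap;

	/* Mirrors setup_swap_account() from the patch above. */
	static int setup_swap_account(const char *s)
	{
		if (!strcmp(s, "1"))
			cgroup_memory_noswap = false;
		else if (!strcmp(s, "0"))
			cgroup_memory_noswap = true;
		return 1;
	}

	int main(void)
	{
		setup_swap_account("0");	/* as if booted with swapaccount=0 */
		printf("swap accounting: %s\n",
		       cgroup_memory_noswap ? "disabled" : "enabled");
		return 0;
	}

Note that every check in the diff now tests the opt-out flag
cgroup_memory_noswap rather than the opt-in do_swap_account, matching the
nosocket/nokmem convention cited in the changelog.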