Date: Fri, 7 Feb 2020 21:18:07 +0000
From: Chris Down
To: Dan Schatzberg
Cc: Jens Axboe, Tejun Heo, Li Zefan, Johannes
 Weiner, Michal Hocko, Vladimir Davydov, Hugh Dickins, Andrew Morton,
 Roman Gushchin, Shakeel Butt, Thomas Gleixner, open list:BLOCK LAYER,
 open list, open list:CONTROL GROUP (CGROUP),
 open list:CONTROL GROUP - MEMORY RESOURCE CONTROLLER (MEMCG)
Subject: Re: [PATCH v2 2/3] mm: Charge active memcg when no mm is set
Message-ID: <20200207211807.GA138184@chrisdown.name>
References: <8e41630b9d1c5d00f92a00f998285fa6003af5eb.1581088326.git.dschatzberg@fb.com>
In-Reply-To: <8e41630b9d1c5d00f92a00f998285fa6003af5eb.1581088326.git.dschatzberg@fb.com>

Dan Schatzberg writes:
>This is a dependency for 3/3

This can be omitted, since "3/3" won't mean anything in the change history
(and a patch series is generally considered a unit unless there are
explicit requests to split it out).

>memalloc_use_memcg() worked for kernel allocations but was silently
>ignored for user pages.
>
>This patch establishes a precedence order for who gets charged:
>
>1. If there is a memcg associated with the page already, that memcg is
>   charged. This happens during swapin.
>
>2. If an explicit mm is passed, mm->memcg is charged. This happens
>   during page faults, which can be triggered in remote VMs (e.g. gup).
>
>3. Otherwise consult the current process context. If it has configured
>   a current->active_memcg, use that. Otherwise, current->mm->memcg.
>
>Signed-off-by: Dan Schatzberg
>Acked-by: Johannes Weiner

Thanks, this seems reasonable.
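For my own reading, the precedence above restated as a toy model — this is plain Python with dicts standing in for the real page/mm/task_struct objects, not kernel code, just to pin down the ordering:

```python
# Toy model of the charging precedence; dicts stand in for the real
# page/mm/task_struct objects, and "root" for root_mem_cgroup.
ROOT_MEMCG = "root"

def choose_memcg(page, mm, current):
    # 1. A memcg already associated with the page wins (swapin).
    if page.get("memcg"):
        return page["memcg"]
    # 2. An explicitly passed mm is charged next (page faults, gup).
    if mm is not None:
        return mm["memcg"]
    # 3. Otherwise consult the current task: active_memcg if set,
    #    else current->mm->memcg, else fall back to the root memcg.
    if current.get("active_memcg"):
        return current["active_memcg"]
    if current.get("mm"):
        return current["mm"]["memcg"]
    return ROOT_MEMCG
```

So, e.g., a kthread with an active_memcg set and no mm lands in case 3 and gets charged to that active_memcg rather than to root.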
One (minor and optional) suggestion would be to make the title clearer that
this is a change in try_charge/memalloc_use_memcg behaviour overall rather
than at a single charge site, since that wasn't what I expected to find
when I saw the patch title :-)

I only have one other question, about the behaviour in try_charge when
there is no active_memcg and both mm and memcg are NULL (below), but
assuming that's been checked:

Acked-by: Chris Down

>---
> mm/memcontrol.c | 11 ++++++++---
> mm/shmem.c      |  2 +-
> 2 files changed, 9 insertions(+), 4 deletions(-)
>
>diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>index f7da3ff135ed..69935d166bdb 100644
>--- a/mm/memcontrol.c
>+++ b/mm/memcontrol.c
>@@ -6812,7 +6812,8 @@ enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
>  * @compound: charge the page as compound or small page
>  *
>  * Try to charge @page to the memcg that @mm belongs to, reclaiming
>- * pages according to @gfp_mask if necessary.
>+ * pages according to @gfp_mask if necessary. If @mm is NULL, try to
>+ * charge to the active memcg.
>  *
>  * Returns 0 on success, with *@memcgp pointing to the charged memcg.
>  * Otherwise, an error code is returned.
>@@ -6856,8 +6857,12 @@ int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
> 		}
> 	}
>
>-	if (!memcg)
>-		memcg = get_mem_cgroup_from_mm(mm);
>+	if (!memcg) {
>+		if (!mm)
>+			memcg = get_mem_cgroup_from_current();
>+		else
>+			memcg = get_mem_cgroup_from_mm(mm);
>+	}

Just to do due diligence: did we double-check whether this results in any
unintentional shift in accounting for callers that pass both mm and memcg
as NULL with no current->active_memcg set? Previously we never even tried
to consult current->mm, and always used root_mem_cgroup in
get_mem_cgroup_from_mm. It's entirely possible that this results in exactly
the same outcome as before, just by different means, but with the number of
try_charge callsites I'm not totally certain of that.
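To make the question concrete, here's a before/after sketch of just that fallback path — again plain Python with hypothetical dict stand-ins, assuming (per the old code) that get_mem_cgroup_from_mm returns root_mem_cgroup for a NULL mm:

```python
# Before/after sketch of the memcg fallback when both the page memcg
# and mm are NULL. Dicts stand in for kernel structures; "root" stands
# in for root_mem_cgroup. Not kernel code.
ROOT = "root"

def old_fallback(mm, current):
    # Before the patch: always get_mem_cgroup_from_mm(mm);
    # a NULL mm meant root_mem_cgroup, regardless of current.
    return mm["memcg"] if mm else ROOT

def new_fallback(mm, current):
    # After the patch: a NULL mm consults the current task instead
    # (get_mem_cgroup_from_current): active_memcg, else current->mm.
    if mm:
        return mm["memcg"]
    if current.get("active_memcg"):
        return current["active_memcg"]
    return current["mm"]["memcg"] if current.get("mm") else ROOT

# A task with an mm but no active_memcg set: the versions diverge.
task = {"mm": {"memcg": "task_memcg"}}
old_fallback(None, task)  # "root"
new_fallback(None, task)  # "task_memcg"
```

That divergence for NULL-mm/NULL-memcg callers with a live current->mm is exactly the case I'm asking about above.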
>
> 	ret = try_charge(memcg, gfp_mask, nr_pages, false);
>
>diff --git a/mm/shmem.c b/mm/shmem.c
>index ca74ede9e40b..70aabd9aba1a 100644
>--- a/mm/shmem.c
>+++ b/mm/shmem.c
>@@ -1748,7 +1748,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
> 	}
>
> 	sbinfo = SHMEM_SB(inode->i_sb);
>-	charge_mm = vma ? vma->vm_mm : current->mm;
>+	charge_mm = vma ? vma->vm_mm : NULL;
>
> 	page = find_lock_entry(mapping, index);
> 	if (xa_is_value(page)) {
>--
>2.17.1