From: Waiman Long
Subject: Re: [PATCH v3 5/5] mm/memcg: Optimize user context object stock access
To: Johannes Weiner, Waiman Long
Cc: Michal Hocko, Vladimir Davydov, Andrew Morton, Tejun Heo,
 Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Vlastimil Babka, Roman Gushchin, linux-kernel@vger.kernel.org,
 cgroups@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt,
 Muchun Song, Alex Shi, Chris Down, Yafang Shao, Wei Yang,
 Masayoshi Mizuma, Xing Zhengjun
References: <20210414012027.5352-1-longman@redhat.com>
 <20210414012027.5352-6-longman@redhat.com>
 <8dbd3505-9c51-362f-82d8-5efa5773e020@redhat.com>
Date: Thu, 15 Apr 2021 15:06:28 -0400

On 4/15/21 2:53 PM, Johannes Weiner wrote:
> On Thu, Apr 15, 2021 at 02:16:17PM -0400, Waiman Long wrote:
>> On 4/15/21 1:53 PM, Johannes Weiner wrote:
>>> On Tue, Apr 13, 2021 at 09:20:27PM -0400, Waiman Long wrote:
>>>> Most kmem_cache_alloc() calls are from user context. With
>>>> instrumentation enabled, the measured number of kmem_cache_alloc()
>>>> calls from non-task context was about 0.01% of the total.
>>>>
>>>> The irq disable/enable sequence used in this case to access content
>>>> from the object stock is slow. To optimize for user context access,
>>>> there are now two object stocks, for task context and interrupt
>>>> context access respectively.
>>>>
>>>> The task context object stock can be accessed after disabling
>>>> preemption, which is cheap in a non-preempt kernel. The interrupt
>>>> context object stock can only be accessed after disabling
>>>> interrupts. User context code can access the interrupt object
>>>> stock, but not vice versa.
>>>>
>>>> The mod_objcg_state() function is also modified to make sure that
>>>> memcg and lruvec stat updates are done with interrupts disabled.
>>>>
>>>> The downside of this change is that more data is stored in the
>>>> local object stocks and not reflected in the charge counter and
>>>> the vmstat arrays. However, this is a small price to pay for
>>>> better performance.
>>>>
>>>> Signed-off-by: Waiman Long
>>>> Acked-by: Roman Gushchin
>>>> Reviewed-by: Shakeel Butt
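For readers following along: the split-stock access pattern described
above boils down to roughly the sketch below. The get_obj_stock()/
put_obj_stock() helper names are made up here for illustration; only
task_obj, irq_obj and memcg_stock come from the actual patch.

	static inline struct obj_stock *get_obj_stock(unsigned long *pflags)
	{
		struct memcg_stock_pcp *stock;

		if (in_task()) {
			*pflags = 0UL;		/* unused on this path */
			preempt_disable();	/* cheap on a non-preempt kernel */
			stock = this_cpu_ptr(&memcg_stock);
			return &stock->task_obj;
		}
		/* irq/softirq context: interrupts must be disabled */
		local_irq_save(*pflags);
		stock = this_cpu_ptr(&memcg_stock);
		return &stock->irq_obj;
	}

	static inline void put_obj_stock(unsigned long flags)
	{
		if (in_task())
			preempt_enable();
		else
			local_irq_restore(flags);
	}

Every stock access is then bracketed by this pair, so the ~99.99% of
calls coming from task context pay only a preempt_disable()/
preempt_enable() pair instead of the slower irq save/restore.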
>>> This makes sense, and also explains the previous patch a bit
>>> better. But please merge those two.
>>
>> The reason I broke it into two is so that the patches are individually
>> easier to review. I prefer to update the commit log of patch 4 to
>> explain why the obj_stock structure is introduced instead of merging
>> the two.
>
> Well, I did not find them easier to review separately.
>
>>>> @@ -2327,7 +2365,9 @@ static void drain_local_stock(struct work_struct *dummy)
>>>>  	local_irq_save(flags);
>>>>
>>>>  	stock = this_cpu_ptr(&memcg_stock);
>>>> -	drain_obj_stock(&stock->obj);
>>>> +	drain_obj_stock(&stock->irq_obj);
>>>> +	if (in_task())
>>>> +		drain_obj_stock(&stock->task_obj);
>>>>  	drain_stock(stock);
>>>>  	clear_bit(FLUSHING_CACHED_CHARGE, &stock->flags);
>>>>
>>>> @@ -3183,7 +3223,7 @@ static inline void mod_objcg_state(struct obj_cgroup *objcg,
>>>>  	memcg = obj_cgroup_memcg(objcg);
>>>>  	if (pgdat)
>>>>  		lruvec = mem_cgroup_lruvec(memcg, pgdat);
>>>> -	__mod_memcg_lruvec_state(memcg, lruvec, idx, nr);
>>>> +	mod_memcg_lruvec_state(memcg, lruvec, idx, nr);
>>>>  	rcu_read_unlock();
>>>
>>> This is actually a bug introduced in the earlier patch, isn't it?
>>> Calling __mod_memcg_lruvec_state() without irqs disabled...
>>
>> Not really. In patch 3, mod_objcg_state() is called only in the stock
>> update context, where interrupts had already been disabled. But now
>> that is no longer the case, which is why I need to update
>> mod_objcg_state() to make sure irqs are disabled before updating the
>> vmstat data array.
>
> Oh, I see it now. Man, that's subtle. We've had several very hard to
> track down preemption bugs in those stats, because they manifest as
> counter imbalances and you have no idea if there is a leak somewhere.
>
> The convention for these functions is that the __ prefix indicates
> that preemption has been suitably disabled. Please always follow this
> convention, even if the semantic change is temporary.

I see. I will fix that in the next version.

> Btw, is there a reason why the stock caching isn't just part of
> mod_objcg_state()? Why does the user need to choose if they want the
> caching or not? It's not like we ask for this when charging, either.

Yes, I can revert it to make mod_objcg_state() call
mod_obj_stock_state() internally instead of the other way around.

Cheers,
Longman
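P.S. For anyone unfamiliar with the __ prefix convention Johannes
mentions: the plain-named variant is expected to disable interrupts
itself, while the __ variant assumes the caller already did. A minimal
sketch (the argument order follows the quoted diff above, not any
particular kernel version):

	static void mod_memcg_lruvec_state(struct mem_cgroup *memcg,
					   struct lruvec *lruvec,
					   enum node_stat_item idx, int nr)
	{
		unsigned long flags;

		local_irq_save(flags);	/* callers need no irq awareness */
		__mod_memcg_lruvec_state(memcg, lruvec, idx, nr);
		local_irq_restore(flags);
	}

Calling the __ variant with interrupts still enabled, as the quoted
hunk originally did, is exactly the kind of subtle counter-imbalance
bug described above.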