From: Waiman Long <llong@redhat.com>
Subject: Re: [PATCH v3 2/5] mm/memcg: Introduce obj_cgroup_uncharge_mod_state()
To: Johannes Weiner, Waiman Long
Cc: Michal Hocko, Vladimir Davydov, Andrew Morton, Tejun Heo,
 Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Vlastimil Babka, Roman Gushchin, linux-kernel@vger.kernel.org,
 cgroups@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt, Muchun Song,
 Alex Shi, Chris Down, Yafang Shao, Wei Yang, Masayoshi Mizuma,
 Xing Zhengjun
References: <20210414012027.5352-1-longman@redhat.com>
 <20210414012027.5352-3-longman@redhat.com>
 <1c85e8f6-e8b9-33e1-e29b-81fbadff959f@redhat.com>
 <8a104fd5-64c7-3f41-981c-9cfa977c78a6@redhat.com>
Date: Thu, 15 Apr 2021 15:44:56 -0400

On 4/15/21 3:40 PM, Johannes Weiner wrote:
> On Thu, Apr 15, 2021 at 02:47:31PM -0400, Waiman Long wrote:
>> On 4/15/21 2:10 PM, Johannes Weiner wrote:
>>> On Thu, Apr 15, 2021 at 12:35:45PM -0400, Waiman Long wrote:
>>>> On 4/15/21 12:30 PM, Johannes Weiner wrote:
>>>>> On Tue, Apr 13, 2021 at 09:20:24PM -0400, Waiman Long wrote:
>>>>>> In memcg_slab_free_hook()/pcpu_memcg_free_hook(), obj_cgroup_uncharge()
>>>>>> is followed by mod_objcg_state()/mod_memcg_state(). Each of these
>>>>>> function calls goes through a separate irq_save/irq_restore cycle. That
>>>>>> is inefficient. Introduce a new function obj_cgroup_uncharge_mod_state()
>>>>>> that combines them with a single irq_save/irq_restore cycle.
>>>>>>
>>>>>> @@ -3292,6 +3296,25 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
>>>>>>  	refill_obj_stock(objcg, size);
>>>>>>  }
>>>>>> +void obj_cgroup_uncharge_mod_state(struct obj_cgroup *objcg, size_t size,
>>>>>> +				   struct pglist_data *pgdat, int idx)
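[The quoted hunk is truncated above. The body of the combined helper,
reconstructed as a sketch from the commit description rather than copied
from the actual patch: __refill_obj_stock() is a hypothetical irqs-off
variant of refill_obj_stock(), and the lruvec/memcg branching is inferred
from the mod_objcg_state()/mod_memcg_state() pairing named in the
changelog.]

	void obj_cgroup_uncharge_mod_state(struct obj_cgroup *objcg, size_t size,
					   struct pglist_data *pgdat, int idx)
	{
		unsigned long flags;
		struct mem_cgroup *memcg;
		struct lruvec *lruvec;

		/* One irq_save/irq_restore section covers both operations. */
		local_irq_save(flags);

		/*
		 * Uncharge: return the object bytes to the per-cpu stock.
		 * __refill_obj_stock() is assumed to expect irqs already
		 * disabled (hypothetical helper, not in mainline).
		 */
		__refill_obj_stock(objcg, size);

		/*
		 * Stat update: a per-node (lruvec) stat when pgdat is given,
		 * otherwise a memcg-wide stat.
		 */
		rcu_read_lock();
		memcg = obj_cgroup_memcg(objcg);
		if (pgdat) {
			lruvec = mem_cgroup_lruvec(memcg, pgdat);
			__mod_memcg_lruvec_state(lruvec, idx, -(int)size);
		} else {
			__mod_memcg_state(memcg, idx, -(int)size);
		}
		rcu_read_unlock();

		local_irq_restore(flags);
	}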
>>>>> The optimization makes sense.
>>>>>
>>>>> But please don't combine independent operations like this into a
>>>>> single function. It makes for an unclear parameter list, it's a pain
>>>>> in the behind to change the constituent operations later on, and it
>>>>> has a habit of attracting more random bools over time. E.g. what if
>>>>> the caller already has irqs disabled? What if it KNOWS that irqs are
>>>>> enabled and it could use local_irq_disable() instead of save?
>>>>>
>>>>> Just provide an __obj_cgroup_uncharge() that assumes irqs are
>>>>> disabled, combine with the existing __mod_memcg_lruvec_state(), and
>>>>> bubble the irq handling up to those callsites which know better.
>>>>>
>>>> That will also work. However, the reason I did it this way was
>>>> because of patch 5 in the series. I could put the get_obj_stock() and
>>>> put_obj_stock() code in slab.h and allow them to be used directly in
>>>> various places, but hiding them in one function is easier.
>>>
>>> Yeah it's more obvious after getting to patch 5.
>>>
>>> But with the irq disabling gone entirely, is there still an incentive
>>> to combine the atomic section at all? Disabling preemption is pretty
>>> cheap, so it wouldn't matter to just do it twice.
>>>
>>> I.e. couldn't the final sequence in slab code simply be
>>>
>>>   objcg_uncharge()
>>>   mod_objcg_state()
>>>
>>> again, with each function disabling preemption (and in the rare case
>>> irqs) as it sees fit?
>>>
>>> You lose the irqsoff batching in the cold path, but as you say, hit
>>> rates are pretty good, and it doesn't seem worth complicating the code
>>> for the cold path.
>>>
>> That does make sense, though a little bit of performance may be lost.
>> I will try that out and see how it works out performance-wise.
>
> Thanks.
>
> Even if we still end up doing it, it's great to have that cost
> isolated, so we know how much extra code complexity corresponds to how
> much performance gain. It seems the task/irq split could otherwise be
> a pretty localized change with no API implications.
>
I still want to move the mod_objcg_state() function to memcontrol.c,
though, as I don't want to put any obj_stock stuff in mm/slab.h.

Cheers,
Longman
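[For readers of the archive: the split shape Johannes is proposing would
look roughly like the fragment below in the slab free path. This is a
sketch only, not code from the series; it assumes each helper manages its
own atomic section internally (preemption off for the per-cpu fast path,
irqs off only on the rare slow path), and reuses the obj_full_size(),
page_pgdat(), and cache_vmstat_idx() helpers from mm/slab.h.]

	/* objcg already looked up by the caller; each helper opens and
	 * closes its own short atomic section, so no combined
	 * irq_save/irq_restore wrapper is needed here. */
	obj_cgroup_uncharge(objcg, obj_full_size(s));
	mod_objcg_state(objcg, page_pgdat(page),
			cache_vmstat_idx(s), -obj_full_size(s));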