From: Waiman Long
Subject: Re: [External] [PATCH v4 5/5] mm/memcg: Improve refill_obj_stock() performance
To: Shakeel Butt, Muchun Song
Cc: Johannes Weiner, Michal Hocko, Vladimir Davydov, Andrew Morton, Tejun Heo, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Vlastimil Babka, Roman Gushchin, LKML, Cgroups, Linux Memory Management List, Alex Shi, Chris Down, Yafang Shao, Wei Yang, Masayoshi Mizuma, Xing Zhengjun, Matthew Wilcox
Date: Mon, 19 Apr 2021 11:19:34 -0400
References: <20210419000032.5432-1-longman@redhat.com> <20210419000032.5432-6-longman@redhat.com>
On 4/19/21 11:00 AM, Shakeel Butt wrote:
> On Sun, Apr 18, 2021 at 11:07 PM Muchun Song wrote:
>> On Mon, Apr 19, 2021 at 8:01 AM Waiman Long wrote:
>>> There are two issues with the current refill_obj_stock() code. First,
>>> when nr_bytes exceeds PAGE_SIZE, it calls drain_obj_stock() to
>>> atomically flush the remaining bytes out to the obj_cgroup, clear
>>> cached_objcg and do an obj_cgroup_put(). It is likely that the same
>>> obj_cgroup will be used again, which leads to another call to
>>> drain_obj_stock() and obj_cgroup_get(), as well as atomically
>>> retrieving the available bytes from the obj_cgroup. That is costly.
>>> Instead, we should just uncharge the excess pages, reduce the stock
>>> bytes and be done with it. The drain_obj_stock() function should only
>>> be called when the obj_cgroup changes.
>>>
>>> Secondly, when charging an object of size not less than a page in
>>> obj_cgroup_charge(), it is possible that the remaining bytes to be
>>> refilled to the stock will overflow a page and cause
>>> refill_obj_stock() to uncharge 1 page. To avoid the additional
>>> uncharge in this case, a new overfill flag is added to
>>> refill_obj_stock(), which is set when called from obj_cgroup_charge().
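[Editor's note: the excess-page arithmetic described in the first issue above can be modeled in a few lines of userspace C. This is a sketch, not kernel code; `refill_split` is a hypothetical name, and PAGE_SHIFT of 12 (4 KiB pages) is an assumption for illustration.]

```c
#include <assert.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/*
 * Model of the refill logic: once the cached byte count exceeds one
 * page, whole pages are peeled off for uncharging and only the
 * sub-page remainder is kept in the stock -- unless the caller asked
 * to overfill, in which case everything stays cached.
 */
static unsigned long refill_split(unsigned long *stock_bytes,
                                  unsigned long nr_bytes, int overfill)
{
    unsigned long nr_pages = 0;

    *stock_bytes += nr_bytes;
    if (!overfill && *stock_bytes > PAGE_SIZE) {
        nr_pages = *stock_bytes >> PAGE_SHIFT;   /* pages to uncharge */
        *stock_bytes &= PAGE_SIZE - 1;           /* keep the remainder */
    }
    return nr_pages;
}
```

With 4 KiB pages, refilling 5000 bytes into an empty stock uncharges one page and leaves a 904-byte remainder; with the overfill flag set, all 5000 bytes stay cached and no page is uncharged.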
>>>
>>> Signed-off-by: Waiman Long
>>> ---
>>>  mm/memcontrol.c | 23 +++++++++++++++++------
>>>  1 file changed, 17 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>>> index a6dd18f6d8a8..d13961352eef 100644
>>> --- a/mm/memcontrol.c
>>> +++ b/mm/memcontrol.c
>>> @@ -3357,23 +3357,34 @@ static bool obj_stock_flush_required(struct memcg_stock_pcp *stock,
>>>         return false;
>>>  }
>>>
>>> -static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
>>> +static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes,
>>> +                            bool overfill)
>>>  {
>>>         unsigned long flags;
>>>         struct obj_stock *stock = get_obj_stock(&flags);
>>> +       unsigned int nr_pages = 0;
>>>
>>>         if (stock->cached_objcg != objcg) { /* reset if necessary */
>>> -               drain_obj_stock(stock);
>>> +               if (stock->cached_objcg)
>>> +                       drain_obj_stock(stock);
>>>                 obj_cgroup_get(objcg);
>>>                 stock->cached_objcg = objcg;
>>>                 stock->nr_bytes = atomic_xchg(&objcg->nr_charged_bytes, 0);
>>>         }
>>>         stock->nr_bytes += nr_bytes;
>>>
>>> -       if (stock->nr_bytes > PAGE_SIZE)
>>> -               drain_obj_stock(stock);
>>> +       if (!overfill && (stock->nr_bytes > PAGE_SIZE)) {
>>> +               nr_pages = stock->nr_bytes >> PAGE_SHIFT;
>>> +               stock->nr_bytes &= (PAGE_SIZE - 1);
>>> +       }
>>>
>>>         put_obj_stock(flags);
>>> +
>>> +       if (nr_pages) {
>>> +               rcu_read_lock();
>>> +               __memcg_kmem_uncharge(obj_cgroup_memcg(objcg), nr_pages);
>>> +               rcu_read_unlock();
>>> +       }
>>
>> It is not safe to call __memcg_kmem_uncharge() under the RCU lock
>> without holding a reference to the memcg. More details can be found at
>> the following link:
>>
>> https://lore.kernel.org/linux-mm/20210319163821.20704-2-songmuchun@bytedance.com/
>>
>> In the above patchset, we introduce obj_cgroup_uncharge_pages() to
>> uncharge pages from an object cgroup. You can use this safe API.
>>
> I would recommend just rebasing the patch series over the latest mm tree.
>
I see, I will rebase it to the latest mm tree.

Thanks,
Longman
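[Editor's note: the other half of the patch -- draining the per-CPU stock only when a *different* obj_cgroup is refilled, rather than whenever the byte count crosses a page -- can be modeled with a small userspace sketch. This is illustrative only; the struct and function names below are hypothetical and the objcg is reduced to a plain integer id.]

```c
#include <assert.h>

/*
 * Userspace model of the per-CPU object stock. A drain (flushing the
 * cached bytes back and dropping the objcg reference) is counted only
 * when the cached objcg actually changes, mirroring the patch's
 * "reset if necessary" path; the first refill drains nothing because
 * no objcg was cached yet.
 */
struct obj_stock_model {
    int cached_objcg;          /* 0 means "nothing cached" */
    unsigned long nr_bytes;    /* bytes held in the stock */
    int drains;                /* number of drain operations */
};

static void refill_model(struct obj_stock_model *st, int objcg,
                         unsigned long nr_bytes)
{
    if (st->cached_objcg != objcg) {      /* reset if necessary */
        if (st->cached_objcg)             /* nothing to drain on first use */
            st->drains++;
        st->cached_objcg = objcg;
        st->nr_bytes = 0;                 /* stand-in for the flushed bytes */
    }
    st->nr_bytes += nr_bytes;
}
```

Repeated refills from the same objcg never trigger a drain, which is the cost the patch removes from the hot path; a drain happens exactly once per objcg switch.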