From: Muchun Song <songmuchun@bytedance.com>
Date: Fri, 28 May 2021 11:43:29 +0800
Subject: Re: [External] Re: [PATCH v2 17/21] mm: list_lru: replace linear array with xarray
To: Matthew Wilcox
Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Vladimir Davydov,
 Shakeel Butt, Roman Gushchin, Yang Shi, Alex Shi, Wei Yang, Dave Chinner,
 trond.myklebust@hammerspace.com, anna.schumaker@netapp.com, linux-fsdevel,
 LKML, Linux Memory Management List, linux-nfs@vger.kernel.org,
 zhengqi.arch@bytedance.com, Xiongchun duan, fam.zheng@bytedance.com
References: <20210527062148.9361-1-songmuchun@bytedance.com>
 <20210527062148.9361-18-songmuchun@bytedance.com>

On Thu, May 27, 2021 at 8:08 PM Matthew Wilcox wrote:
>
> On Thu, May 27, 2021 at 02:21:44PM +0800, Muchun Song wrote:
> > If we run 10k containers in the system, the size of the
> > list_lru_memcg->lrus can be ~96KB per list_lru. When we decrease the
> > number of containers, the size of the array is not shrunk. It is
> > not scalable. The xarray is a good choice for this case. We can save
> > a lot of memory when there are tens of thousands of containers in the
> > system. If we use an xarray, we can also remove the array-resizing
> > logic, which simplifies the code.
>
> I am all for this, in concept.  Some thoughts below ...
>
> > @@ -56,10 +51,8 @@ struct list_lru {
> >  #ifdef CONFIG_MEMCG_KMEM
> >  	struct list_head	list;
> >  	int			shrinker_id;
> > -	/* protects ->memcg_lrus->lrus[i] */
> > -	spinlock_t		lock;
> >  	/* for cgroup aware lrus points to per cgroup lists, otherwise NULL */
> > -	struct list_lru_memcg	__rcu *memcg_lrus;
> > +	struct xarray		*xa;
> >  #endif
>
> Normally, we embed an xarray in its containing structure instead of
> allocating it.  It's only a pointer, an int and a spinlock, so generally
> 16 bytes, as opposed to the 8 bytes for the pointer plus a 16-byte
> allocation.  There is a minor wrinkle in that currently 'NULL' is
> used to indicate "is not cgroup aware".  Maybe there's another way
> to indicate that?

Sure. I can drop patch 8 in this series. In that case, we can use
->memcg_aware to indicate that.
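Something like the following embedded layout is what I have in mind
(an untested sketch, not the actual patch):

	struct list_lru {
		struct list_lru_node *node;
	#ifdef CONFIG_MEMCG_KMEM
		struct list_head list;
		int shrinker_id;
		/* true for cgroup aware lrus, replaces the NULL pointer test */
		bool memcg_aware;
		/* embedded: ~16 bytes vs. an 8-byte pointer + 16-byte object */
		struct xarray xa;
	#endif
	};

	static inline bool list_lru_memcg_aware(struct list_lru *lru)
	{
		return lru->memcg_aware;
	}

The lookup side then becomes xa_load(&lru->xa, idx), with no separate
allocation or NULL check for the array itself.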
> > @@ -51,22 +51,12 @@ static int lru_shrinker_id(struct list_lru *lru)
> >  static inline struct list_lru_one *
> >  list_lru_from_memcg_idx(struct list_lru *lru, int nid, int idx)
> >  {
> > -	struct list_lru_memcg *memcg_lrus;
> > -	struct list_lru_node *nlru = &lru->node[nid];
> > +	if (list_lru_memcg_aware(lru) && idx >= 0) {
> > +		struct list_lru_per_memcg *mlru = xa_load(lru->xa, idx);
> >
> > -	/*
> > -	 * Either lock or RCU protects the array of per cgroup lists
> > -	 * from relocation (see memcg_update_list_lru).
> > -	 */
> > -	memcg_lrus = rcu_dereference_check(lru->memcg_lrus,
> > -					   lockdep_is_held(&nlru->lock));
> > -	if (memcg_lrus && idx >= 0) {
> > -		struct list_lru_per_memcg *mlru;
> > -
> > -		mlru = rcu_dereference_check(memcg_lrus->lrus[idx], true);
> >  		return mlru ? &mlru->nodes[nid] : NULL;
> >  	}
> > -	return &nlru->lru;
> > +	return &lru->node[nid].lru;
> >  }
>
> ... perhaps we move the xarray out from under the #ifdef and use index 0
> for non-memcg-aware lrus?  The XArray is specially optimised for arrays
> which only have one entry at 0.

Sounds like a good idea. I'll give it a try.

> >  int list_lru_memcg_alloc(struct list_lru *lru, struct mem_cgroup *memcg, gfp_t gfp)
> >  {
> > +	XA_STATE(xas, lru->xa, 0);
> >  	unsigned long flags;
> > -	struct list_lru_memcg *memcg_lrus;
> > -	int i;
> > +	int i, ret = 0;
> >
> >  	struct list_lru_memcg_table {
> >  		struct list_lru_per_memcg *mlru;
> > @@ -601,22 +522,45 @@ int list_lru_memcg_alloc(struct list_lru *lru, struct mem_cgroup *memcg, gfp_t g
> >  	}
> >  }
> >
> > -	spin_lock_irqsave(&lru->lock, flags);
> > -	memcg_lrus = rcu_dereference_protected(lru->memcg_lrus, true);
> > +	xas_lock_irqsave(&xas, flags);
> >  	while (i--) {
> >  		int index = memcg_cache_id(table[i].memcg);
> >  		struct list_lru_per_memcg *mlru = table[i].mlru;
> >
> > -		if (index < 0 || rcu_dereference_protected(memcg_lrus->lrus[index], true))
> > +		xas_set(&xas, index);
> > +retry:
> > +		if (unlikely(index < 0 || ret || xas_load(&xas))) {
> >  			kfree(mlru);
> > -		else
> > -			rcu_assign_pointer(memcg_lrus->lrus[index], mlru);
> > +		} else {
> > +			ret = xa_err(xas_store(&xas, mlru));
>
> This is mixing advanced and normal XArray concepts ... sorry to have
> confused you.  I think what you meant to do here was:
>
>	xas_store(&xas, mlru);
>	ret = xas_error(&xas);

Sure, thanks for pointing that out. That was my misuse of the API.

> Or you can avoid introducing 'ret' at all, and keep your errors in the
> xa_state.  You're kind of mirroring the xa_state errors into 'ret'
> anyway, so that seems easier to understand?

Makes sense. I will do this in the next version. Thanks for all your
suggestions.

> > -	memcg_id = memcg_alloc_cache_id();
> > +	memcg_id = ida_simple_get(&memcg_cache_ida, 0, MEMCG_CACHES_MAX_SIZE,
> > +				  GFP_KERNEL);
>
>	memcg_id = ida_alloc_max(&memcg_cache_ida,
>			MEMCG_CACHES_MAX_SIZE - 1, GFP_KERNEL);
>
> ... although I think there's actually a fencepost error, and this really
> should be MEMCG_CACHES_MAX_SIZE.

Totally agree. I have fixed this issue in patch 19.

> >  	objcg = obj_cgroup_alloc();
> >  	if (!objcg) {
> > -		memcg_free_cache_id(memcg_id);
> > +		ida_simple_remove(&memcg_cache_ida, memcg_id);
>
>	ida_free(&memcg_cache_ida, memcg_id);

I will update to use this new API.
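Coming back to keeping the errors in the xa_state: the store loop I
have in mind for the next version looks roughly like this (an untested
sketch based on the hunk above, so the ENOMEM retry details may change):

	xas_lock_irqsave(&xas, flags);
	while (i--) {
		int index = memcg_cache_id(table[i].memcg);
		struct list_lru_per_memcg *mlru = table[i].mlru;

		xas_set(&xas, index);
retry:
		if (unlikely(index < 0 || xas_error(&xas) || xas_load(&xas))) {
			/* no valid memcg id, a prior error, or the slot is
			 * already populated: drop our preallocated copy */
			kfree(mlru);
		} else {
			xas_store(&xas, mlru);
			if (xas_error(&xas) == -ENOMEM) {
				/* drop the lock, let xas_nomem() allocate
				 * node memory, then retry this slot */
				xas_unlock_irqrestore(&xas, flags);
				if (xas_nomem(&xas, gfp))
					xas_set_err(&xas, 0);
				xas_lock_irqsave(&xas, flags);
				goto retry;
			}
		}
	}
	xas_unlock_irqrestore(&xas, flags);
	kfree(table);

	return xas_error(&xas);

That way 'ret' goes away entirely and callers only ever see the
xa_state error.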