Date: Mon, 7 Oct 2019 16:30:30 +0200
From: Michal Hocko
To: Vlastimil Babka
Cc: Yang Shi, kirill.shutemov@linux.intel.com, ktkhai@virtuozzo.com,
 hannes@cmpxchg.org, hughd@google.com, shakeelb@google.com,
 rientjes@google.com, akpm@linux-foundation.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: thp: move deferred split queue to memcg's nodeinfo
Message-ID: <20191007143030.GN2381@dhcp22.suse.cz>
References: <1569968203-64647-1-git-send-email-yang.shi@linux.alibaba.com>
 <20191002084304.GI15624@dhcp22.suse.cz>
 <30421920-4fdb-767a-6ef2-60187932c414@suse.cz>
In-Reply-To: <30421920-4fdb-767a-6ef2-60187932c414@suse.cz>

On Mon 07-10-19 16:19:59, Vlastimil Babka wrote:
> On 10/2/19 10:43 AM, Michal Hocko wrote:
> > On Wed 02-10-19 06:16:43, Yang Shi wrote:
> >> Commit 87eaceb3faa59b9b4d940ec9554ce251325d83fe ("mm: thp: make
> >> deferred split shrinker memcg aware") makes the deferred split queue
> >> per-memcg to resolve a memcg premature OOM problem. But all nodes end
> >> up sharing the same queue, instead of the one queue per node they had
> >> before the commit.
> >> It is not a big deal for memcg limit reclaim, but it may cause global
> >> kswapd to shrink THPs from a different node.
> >>
> >> And 0-day testing reported a -19.6% regression in stress-ng's madvise
> >> test [1]. I didn't see that much regression on my test box (24 threads,
> >> 48GB memory, 2 nodes) with the same test (stress-ng --timeout 1
> >> --metrics-brief --sequential 72 --class vm --exclude spawn,exec); I saw
> >> an average -3% regression (the same test run 10 times and averaged,
> >> since the test itself may have up to 15% variation according to my
> >> testing), and only sometimes (not on every run; sometimes I saw no
> >> regression at all).
> >>
> >> This might be caused by deferred split queue lock contention. With some
> >> configurations (i.e. just one root memcg) the lock contention may be
> >> worse than before (given 2 nodes, two locks are reduced to one lock).
> >>
> >> So, move the deferred split queue to memcg's nodeinfo to make it NUMA
> >> aware again.
> >>
> >> With this change stress-ng's madvise test sometimes shows an average 4%
> >> improvement, and I don't see the degradation anymore.
> >
> > My concern about this getting more and more complex
> > (http://lkml.kernel.org/r/20191002084014.GH15624@dhcp22.suse.cz) holds
> > here even more. Can we step back and reconsider the whole thing please?
>
> What about freeing immediately after split via a workqueue, and also
> having a synchronous version called before going OOM? Maybe there would
> also be other things that would benefit from this scheme instead of
> traditional reclaim and shrinkers?

That is exactly what we have discussed some time ago.
-- 
Michal Hocko
SUSE Labs
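[The structural change debated above boils down to where one small struct
lives. A minimal sketch of that placement, with surrounding fields elided
and details possibly differing from the actual patch under review:]

/*
 * struct deferred_split as introduced by commit 87eaceb3faa5: a list of
 * partially unmapped THPs waiting to be split under memory pressure,
 * protected by its own lock.
 */
struct deferred_split {
        spinlock_t split_queue_lock;
        struct list_head split_queue;
        unsigned long split_queue_len;
};

/*
 * Sketch only: the proposal moves this queue from struct mem_cgroup
 * (one instance per memcg) into the memcg's per-node data, so that each
 * memcg/node pair again gets its own queue and lock, restoring the
 * pre-87eaceb3faa5 NUMA awareness for global (kswapd) reclaim.
 */
struct mem_cgroup_per_node {
        struct lruvec lruvec;
        /* ... other per-node memcg state elided ... */
        struct deferred_split deferred_split_queue;
};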
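[Vlastimil's alternative can be pictured roughly as below. The names and
the bookkeeping are hypothetical; this only illustrates the asynchronous
free plus synchronous pre-OOM flush shape, not a workable design:]

#include <linux/workqueue.h>

/* Hypothetical sketch of "free via workqueue, flush before going OOM". */
static void thp_deferred_free_work(struct work_struct *work)
{
        /* walk whatever list tracks split-but-not-yet-freed pages and free them */
}

static DECLARE_WORK(thp_deferred_free, thp_deferred_free_work);

/* queue the freeing right after a THP has been split */
static void thp_queue_deferred_free(void)
{
        schedule_work(&thp_deferred_free);
}

/* synchronous variant, called before declaring OOM */
static void thp_flush_deferred_free(void)
{
        flush_work(&thp_deferred_free);
}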