From: Gavin Shan
Subject: Re: [RFC PATCH] mm/vmscan: Don't round up scan size for online memory cgroup
To: Roman Gushchin
Cc: linux-mm@kvack.org, drjones@redhat.com, david@redhat.com, bhe@redhat.com, hannes@cmpxchg.org
Date: Tue, 11 Feb 2020 10:55:53 +1100
Message-ID: <9919b674-244d-0a55-c842-b0661585f9e2@redhat.com>
In-Reply-To: <20200210161721.GA167254@tower.DHCP.thefacebook.com>
References: <20200210121445.711819-1-gshan@redhat.com> <20200210161721.GA167254@tower.DHCP.thefacebook.com>

Hi Roman,

On 2/11/20 3:17 AM, Roman Gushchin wrote:
> Hello, Gavin!
>
> On Mon, Feb 10, 2020 at 11:14:45PM +1100, Gavin Shan wrote:
>> commit 68600f623d69 ("mm: don't miss the last page because of round-off
>> error") makes the scan size round up to @denominator regardless of the
>> memory cgroup's state, online or offline. This affects the overall
>> reclaiming behavior: the corresponding LRU list is eligible for
>> reclaiming only when its size, logically right shifted by @sc->priority,
>> is bigger than zero in the former formula (non-roundup one).
>
> Not sure I fully understand, but wasn't it so before 68600f623d69 too?
>

You're right, "(non-roundup one)" is a typo and should have been dropped.
It will be corrected in v2 if needed.

>> For example, the inactive
>> anonymous LRU list should have at least 0x4000 pages to be eligible for
>> reclaiming when we have 60/12 for swappiness/priority, without taking
>> the scan/rotation ratio into account. After the roundup is applied, the
>> inactive anonymous LRU list becomes eligible for reclaiming when its
>> size is bigger than or equal to 0x1000 in the same condition:
>>
>>    (0x4000 >> 12) * 60 / (60 + 140 + 1) = 1
>>    ((0x1000 >> 12) * 60 + 200) / (60 + 140 + 1) = 1
>>
>> aarch64 has a 512MB huge page size when the base page size is 64KB. A
>> memory cgroup that has a huge page is always eligible for reclaiming in
>> that case. The reclaiming is likely to stop after the huge page is
>> reclaimed, meaning the subsequent @sc->priority values and memory
>> cgroups will be skipped. It changes the overall reclaiming behavior.
>> This patch fixes the issue by applying the roundup to offlined memory
>> cgroups only, to give more preference to reclaiming memory from offlined
>> memory cgroups. That sounds reasonable, as their memory is likely to be
>> useless.
>
> So is the problem that relatively small memory cgroups are getting reclaimed
> on default prio, however before they were skipped?
>

Yes, you're correct. There are two dimensions for global reclaim: the
priority (sc->priority) and the memory cgroup. The scan/reclaim is carried
out by iterating over these two dimensions until enough pages have been
reclaimed. If the roundup is applied to the current memory cgroup and
occasionally helps to reclaim enough memory, the subsequent priorities
and memory cgroups will be skipped.
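To make the round-off arithmetic concrete, here is a rough userspace
sketch (my own illustration, not the kernel code; div_down()/div_up()
stand in for the kernel's div64_u64()/DIV64_U64_ROUND_UP()) that
reproduces the eligibility numbers from the commit message:

  #include <stdio.h>
  #include <stdint.h>

  /* Plain round-down division, as before commit 68600f623d69. */
  static uint64_t div_down(uint64_t n, uint64_t d) { return n / d; }

  /* Round-up division, as introduced by commit 68600f623d69. */
  static uint64_t div_up(uint64_t n, uint64_t d) { return (n + d - 1) / d; }

  int main(void)
  {
          uint64_t denominator = 60 + 140 + 1; /* anon_prio + file_prio + 1 */
          uint64_t fraction = 60;              /* anon share for swappiness=60 */
          int priority = 12;                   /* sc->priority */

          for (uint64_t lru_size = 0x1000; lru_size <= 0x4000; lru_size <<= 1) {
                  uint64_t scan = lru_size >> priority;

                  printf("lru_size=0x%llx: round-down=%llu round-up=%llu\n",
                         (unsigned long long)lru_size,
                         (unsigned long long)div_down(scan * fraction, denominator),
                         (unsigned long long)div_up(scan * fraction, denominator));
          }

          return 0;
  }

With round-down the list only becomes eligible (scan > 0) at 0x4000
pages, while with round-up it is already eligible at 0x1000, so small
memory cgroups that used to be skipped are now scanned.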
>> The issue was found by starting up 8 VMs on an Ampere Mustang machine,
>> which has 8 CPUs and 16 GB memory. Each VM is given 2 vCPUs and 2GB of
>> memory. 784MB of swap space is consumed after these 8 VMs are completely
>> up. Note that KSM is disabled while THP is enabled in the testing. With
>> this applied, the consumed swap space decreased to 60MB.
>>
>>            total    used    free  shared  buff/cache  available
>>    Mem:    16196   10065    2049      16        4081        3749
>>    Swap:    8175     784    7391
>>
>>            total    used    free  shared  buff/cache  available
>>    Mem:    16196   11324    3656      24        1215        2936
>>    Swap:    8175      60    8115
>
> Does it lead to any performance regressions? Or is it only about increased
> swap usage?
>

Apart from the swap usage, there is also a performance downgrade in my
case. With your patch (68600f623d69) included, it took 264 seconds to
bring up the 8 VMs, but only 236 seconds with my patch applied on top of
yours. So commit 68600f623d69 costs about 10% in boot time here, which is
why I added a stable tag.

>>
>> Fixes: 68600f623d69 ("mm: don't miss the last page because of round-off error")
>> Cc: # v4.20+
>> Signed-off-by: Gavin Shan
>> ---
>>   mm/vmscan.c | 9 ++++++---
>>   1 file changed, 6 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index c05eb9efec07..876370565455 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -2415,10 +2415,13 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
>>                          /*
>>                           * Scan types proportional to swappiness and
>>                           * their relative recent reclaim efficiency.
>> -                         * Make sure we don't miss the last page
>> -                         * because of a round-off error.
>> +                         * Make sure we don't miss the last page on
>> +                         * the offlined memory cgroups because of a
>> +                         * round-off error.
>>                           */
>> -                        scan = DIV64_U64_ROUND_UP(scan * fraction[file],
>> +                        scan = mem_cgroup_online(memcg) ?
>> +                               div64_u64(scan * fraction[file], denominator) :
>> +                               DIV64_U64_ROUND_UP(scan * fraction[file],
>>                                                    denominator);
>
> It looks a bit strange to round up for offline and basically down for
> everything else. So maybe it's better to return to something like
> the very first version of the patch:
> https://www.spinics.net/lists/kernel/msg2883146.html ?
> For memcg reclaim reasons we do care only about an edge case with few pages.
>
> But overall it's not obvious to me why rounding up is worse than rounding
> down. Maybe we should average down but accumulate the remainder?
> Creating an implicit bias for small memory cgroups sounds groundless.
>

I don't think the v1 approach works for me either. The logic in v1 isn't
much different from commit 68600f623d69: v1 rounds up selectively, while
the current code always rounds up. With 68600f623d69 reverted and your v1
patch applied, it took 273 seconds to bring up the 8 VMs and 1752MB of
swap was used. That is even worse than 68600f623d69.

Yeah, it's not reasonable to bias all memory cgroups regardless of their
state, but I do think it's still right to give a bias to offlined memory
cgroups. So the point is that we need to take the memory cgroup's state
into account and apply the bias to offlined ones only. An offlined memory
cgroup is dying or already dead; it's unlikely, though still possible,
that its memory will be used by anyone again. So it's reasonable to
squeeze the used memory out of an offlined memory cgroup aggressively
where possible.

Thanks,
Gavin