Subject: Re: [PATCH] mm: sort freelist by rank number
From: Vlastimil Babka
To: Cho KyongHo
Cc: David Hildenbrand, akpm@linux-foundation.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, hyesoo.yu@samsung.com, janghyuck.kim@samsung.com
Date: Tue, 4 Aug 2020 11:12:55 +0200
Message-ID: <947a09ba-968b-4c4d-68bb-d13de9c885a1@suse.cz>
In-Reply-To: <20200804023548.GA186735@KEI>
References: <1596435031-41837-1-git-send-email-pullip.cho@samsung.com>
 <5f41af0f-4593-3441-12f4-5b0f7e6999ac@redhat.com>
 <20200804023548.GA186735@KEI>

On 8/4/20 4:35 AM, Cho KyongHo wrote:
> On Mon, Aug 03, 2020 at 05:45:55PM +0200, Vlastimil Babka wrote:
>> On 8/3/20 9:57 AM, David Hildenbrand wrote:
>> > On 03.08.20 08:10, pullip.cho@samsung.com wrote:
>> >> From: Cho KyongHo
>> >>
>> >> LPDDR5 introduces a rank switch delay. If three successive DRAM accesses
>> >> happen and the first and the second access one rank while the last
>> >> access happens on the other rank, the latency of the last access will
>> >> be longer than that of the second one.
>> >> To address this penalty, we can sort the freelist so that a specific
>> >> rank is allocated prior to another rank.
>> >> We expect the page allocator
>> >> can allocate pages from the same rank successively with this
>> >> change. It will hopefully improve the proportion of consecutive
>> >> memory accesses to the same rank.
>> >
>> > This certainly needs performance numbers to justify ... and I am sorry,
>> > "hopefully improves" is not a valid justification :)
>> >
>> > I can imagine that this works well initially, when there hasn't been a
>> > lot of memory fragmentation going on. But quickly after your system is
>> > under stress, I doubt this will be very useful. Prove me wrong. ;)
>>
>> Agreed. The implementation of __preferred_rank() seems to be very simple
>> and optimistic.
>
> DRAM rank is selected by CS bits from the DRAM controller. In most systems
> the CS bits are allocated to specific bit fields in the bus address. For
> example, if the CS bit is allocated to bit[16] of the bus (physical)
> address in a two-rank system, all 16KiB blocks with bit[16] = 1 are in
> rank 1 and the others are in rank 0.
> This patch is not beneficial to systems other than mobile devices with
> LPDDR5. That is why the default behavior of this patch is a noop.

Hmm, the patch requires at least pageblock_nr_pages, which is 2MB on x86
(dunno about ARM), so 16KiB would be way too small. What are the actual
granularities then?

>> I think these systems could perhaps better behave as NUMA with
>> (interleaved) nodes for each rank, then you immediately have all the
>> mempolicies support etc. to achieve what you need? Of course there's some
>> cost as well, but not the costs of adding hacks to the page allocator core?
>
> Thank you for the proposal. NUMA will be helpful to allocate pages from
> a specific rank programmatically. I should consider NUMA if rank
> affinity is also required.
> However, page allocation overhead from this policy (page migration and
> reclamation etc.) will give users worse responsiveness.
> The intent
> of this patch is to reduce the rank switch delay optimistically without
> hurting page allocation speed.

The problem is, without some control of page migration and reclaim, the
simple preference approach will not work after some uptime, as David
suggested. It will just mean that the preferred rank will be allocated
first, then the non-preferred rank (Linux will fill all unused memory with
page cache if possible), then reclaim will free memory from both ranks
without any special care, and new allocations will thus come from both
ranks.