Subject: Re: Questions about folio allocation.
From: Guo Xuenan <guoxuenan@huawei.com>
To: Matthew Wilcox
Date: Sun, 24 Apr 2022 21:30:26 +0800
Message-ID: <77b76283-cec5-94a8-9bfe-34ea24c55b82@huawei.com>
References: <20220424113543.456342-1-guoxuenan@huawei.com>

Hi Matthew,

In 2022/4/24 19:37, Matthew Wilcox wrote:
> On Sun, Apr 24, 2022 at 07:35:43PM +0800, Guo Xuenan wrote:
>> Hi Matthew,
>>
>> You have done a lot of work on folios; many folio-related patches have
>> been merged into the mainline.
>> I'm very interested in your excellent work, and I did some sequential
>> read tests (using a fixed read length, testing on a 10G file) and found
>> the following.
>>
>> 1. Different read lengths may affect the folio order.
>> Using a 100KB read length during sequential reads, the folio carrying
>> the readahead flag is almost always order 0, so only folios of order 0
>> or 2 are allocated.
> Hmm.  Looks like we're foiling readahead somehow.  I'll look into this.
>
> root@pepe-kvm:~# mkfs.xfs /dev/sdb
> root@pepe-kvm:~# mount /dev/sdb /mnt/
> root@pepe-kvm:~# truncate -s 10G /mnt/bigfile
> root@pepe-kvm:~# echo 1 >/sys/kernel/tracing/events/filemap/mm_filemap_add_to_page_cache/enable
> root@pepe-kvm:~# dd if=/mnt/bigfile of=/dev/null bs=100K count=4
> root@pepe-kvm:~# cat /sys/kernel/tracing/trace
> [...]
> dd-286 [000] ..... 175.495258: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x106b0c ofs=0 order=2
> dd-286 [000] ..... 175.495266: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x106b10 ofs=16384 order=2
> dd-286 [000] ..... 175.495267: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x106b14 ofs=32768 order=2
> dd-286 [000] ..... 175.495268: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x106b18 ofs=49152 order=2
> dd-286 [000] ..... 175.495269: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x106b1c ofs=65536 order=2
> dd-286 [000] ..... 175.495270: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x106b20 ofs=81920 order=2
> dd-286 [000] ..... 175.495271: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x106b24 ofs=98304 order=2
> dd-286 [000] ..... 175.495272: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x106b28 ofs=114688 order=2
> dd-286 [000] ..... 175.495485: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x1048a3 ofs=135168 order=0
> dd-286 [000] ..... 175.495486: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x1036eb ofs=139264 order=0
> dd-286 [000] ..... 175.495486: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x103f4a ofs=143360 order=0
> dd-286 [000] ..... 175.495487: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x106b2c ofs=147456 order=2
> dd-286 [000] ..... 175.495490: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x106b48 ofs=163840 order=3
> dd-286 [000] ..... 175.495491: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x106b30 ofs=196608 order=2
> dd-286 [000] ..... 175.495492: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x103f76 ofs=212992 order=0
> dd-286 [000] ..... 175.495666: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x103f79 ofs=131072 order=0
> dd-286 [000] ..... 175.495669: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x103f5b ofs=217088 order=0
> dd-286 [000] ..... 175.495669: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x103c99 ofs=221184 order=0
> dd-286 [000] ..... 175.495670: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x1037a0 ofs=225280 order=0
> dd-286 [000] ..... 175.495673: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x103f45 ofs=229376 order=0
> dd-286 [000] ..... 175.495674: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x103f44 ofs=233472 order=0
> dd-286 [000] ..... 175.495675: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x10378c ofs=237568 order=0
> dd-286 [000] ..... 175.495675: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x103fde ofs=241664 order=0
> dd-286 [000] ..... 175.495676: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x103fdd ofs=245760 order=0
> dd-286 [000] ..... 175.495677: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x103fe1 ofs=249856 order=0
> dd-286 [000] ..... 175.495677: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x103fe2 ofs=253952 order=0
> dd-286 [000] ..... 175.495678: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x103fa7 ofs=258048 order=0
> dd-286 [000] ..... 175.495687: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x106b34 ofs=262144 order=2
> dd-286 [000] ..... 175.495690: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x106b38 ofs=278528 order=2
> dd-286 [000] ..... 175.495691: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x106b3c ofs=294912 order=2
> dd-286 [000] ..... 175.495692: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x106b40 ofs=311296 order=2
> dd-286 [000] ..... 175.495693: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x106b44 ofs=327680 order=2
> dd-286 [000] ..... 175.495701: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x106b80 ofs=344064 order=2
> dd-286 [000] ..... 175.495703: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x106b84 ofs=360448 order=2
> dd-286 [000] ..... 175.495704: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x106b88 ofs=376832 order=2
> dd-286 [000] ..... 175.495894: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x106bc0 ofs=393216 order=4
> dd-286 [000] ..... 175.495896: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x106bd0 ofs=458752 order=4
> dd-286 [000] ..... 175.496096: mm_filemap_add_to_page_cache: dev 8:16 ino 83 pfn=0x106be0 ofs=524288 order=5
>
> We do eventually get up to an order=5 allocation (128kB), but we should
> get there far sooner.

Hmm, sorry, my description was not rigorous enough, but I think you got
part of it.  Read the whole file, not just 100K * 4: in most cases the
page order is 2, which means that with this read pattern the order of the
folio carrying the readahead flag is 0 in most cases.

[root@localhost ]# echo 4096 > /sys/block/vdb/queue/read_ahead_kb
[root@localhost ]# echo 4096 > /sys/block/vdb/queue/max_sectors_kb
[root@localhost ]# bpftrace bpf.bt > 100K
[root@localhost ]# cat 100K | awk '{print $11}' | sort | uniq -c
    884 0
  55945 2
      1 3
     14 4
      2 5
      5 6

According to the readahead code, the initial order is taken from the
current folio carrying the readahead flag; might it be better to base
the initial order on the size of the readahead window instead?  (e.g.
when ra->size is big enough, and considering index alignment, set the
order from it?)
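To illustrate what I mean, here is a rough, untested sketch; the helper
name ra_initial_order() is mine, and it assumes the current file_ra_state
and MAX_PAGECACHE_ORDER definitions:

	/*
	 * Untested sketch: derive the starting order from the readahead
	 * window (ra->size, in pages) rather than from the folio that
	 * carried the readahead flag, capped by MAX_PAGECACHE_ORDER and
	 * by the index-alignment rule discussed below.
	 */
	static unsigned int ra_initial_order(struct file_ra_state *ra,
					     pgoff_t index)
	{
		unsigned int order = min_t(unsigned int, ilog2(ra->size),
					   MAX_PAGECACHE_ORDER);

		/* an unaligned index can only hold a smaller folio */
		if (index)
			order = min(order, (unsigned int)__ffs(index));

		/* the page cache does not use order-1 folios */
		if (order == 1)
			order = 0;
		return order;
	}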
>> 2. The folio order cannot reach MAX_PAGECACHE_ORDER when the read
>> length is small (e.g. less than 32KB).
> I'm less concerned about that.  It's not necessarily a good idea to go
> all the way to an order-9 page; 2MB pages are pretty big.
>
>> As you have mentioned here[1],
>> "The heuristic for choosing which folio sizes will surely need some tuning"
>> I wonder (1) why the folio order needs to be aligned with the page
>> index.  Is this necessary, or are there certain restrictions?
> That's partly because of the limitations of the radix tree used by
> the page cache.  With an aligned folio, an order-6 folio will take
> up a single entry; if it were unaligned, we'd need two nodes.  Worse,
> we'd have to constantly be walking around the tree in order to find
> all the entries associated with an unaligned folio.
>
> Partly, it's to try to use CPU resources more effectively.  For the
> pages which are mapped to userspace, we set up the alignments so we can
> use things like PMD-sized TLB entries and ARM's 64KiB TLB entries.

Thank you for your patient answer; I understand your considerations. :)
Also, aligning for PMD and ARM's 64KiB TLB entries is very reasonable.
But it may be difficult to fully achieve the alignment we expect, since
the loop below keeps reducing the order.  Can I make some improvements
based on this?

	while (index <= limit) {
		unsigned int order = new_order;

		/* Align with smaller pages if needed */
		if (index & ((1UL << order) - 1)) {
			/* __ffs() of any odd index is 0, so the
			 * allocation falls back to a single page */
			order = __ffs(index);
			if (order == 1)
				order = 0;
		}
		/* Don't allocate pages past EOF */
		while (index + (1UL << order) - 1 > limit) {
			if (--order == 1)
				order = 0;
		}
		err = ra_alloc_folio(ractl, index, mark, order, gfp);
		if (err)
			break;
		index += 1UL << order;
	}

>> (2) For the page cache, using large folios saves loops when allocating
>> pages.  I also did some tests on dropping caches, which show that it
>> costs much less time: there is a twenty-fold performance improvement
>> when dropping the 10G file's cache.  So, can I conclude that the page
>> cache should tend to use large-order folios?
> Well, dropping the pagecache isn't a performance path ;-)  But as a
> proxy for doing page reclaim under memory pressure, yes, that kind of
> performance gain is what I'd expect and was one of the major motivators
> for this work (shortening the LRU list and keeping memory unfragmented).

Thank you so much :)
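(Rough arithmetic of my own, not a measurement: a 10 GiB file cached as
order-0 pages puts

	10 GiB / 4 KiB   = 2,621,440 entries

on the LRU list, while the same file cached as order-5 (128 KiB) folios
puts only

	10 GiB / 128 KiB = 81,920 entries

there, a 32x shorter list to walk, which suggests why large-order folios
help reclaim and dropcache so much.)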