From: Yu Zhao <yuzhao@google.com>
Date: Tue, 19 Dec 2023 23:38:31 -0700
Subject: Re: [PATCH mm-unstable v1 1/4] mm/mglru: fix underprotected page cache
To: Kairui Song
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Charan Teja Kalla, Kalesh Singh, stable@vger.kernel.org

On Tue, Dec 19, 2023 at 11:58 AM Kairui Song wrote:
>
> Yu Zhao wrote on Tue, Dec 19, 2023 at 11:45:
> >
> > On Mon, Dec 18, 2023 at 8:21 PM Yu Zhao wrote:
> > >
> > > On Mon, Dec 18, 2023 at 11:05 AM Kairui Song wrote:
> > > >
> > > > Yu Zhao wrote on Fri, Dec 15, 2023 at 12:56:
> > > > >
> > > > > On Thu, Dec 14, 2023 at 04:51:00PM -0700, Yu Zhao wrote:
> > > > > > On Thu, Dec 14, 2023 at 11:38 AM Kairui Song wrote:
> > > > > > >
> > > > > > > Yu Zhao wrote on Thu, Dec 14, 2023 at 11:09:
> > > > > > > > On Wed, Dec 13, 2023 at 12:59:14AM -0700, Yu Zhao wrote:
> > > > > > > > > On Tue, Dec 12, 2023 at 8:03 PM Kairui Song wrote:
> > > > > > > > > >
> > > > > > > > > > Kairui Song wrote on Tue, Dec 12, 2023 at 14:52:
> > > > > > > > > > >
> > > > > > > > > > > Yu Zhao wrote on Tue, Dec 12, 2023 at 06:07:
> > > > > > > > > > > >
> > > > > > > > > > > > On Fri, Dec 8, 2023 at 1:24 AM Kairui Song wrote:
> > > > > > > > > > > > >
> > > > > > > > > > > > > Yu Zhao wrote on Fri, Dec 8, 2023 at 14:14:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Unmapped folios accessed through file descriptors can be
> > > > > > > > > > > > > > underprotected. Those folios are added to the oldest generation based
> > > > > > > > > > > > > > on:
> > > > > > > > > > > > > > 1. The fact that they are less costly to reclaim (no need to walk the
> > > > > > > > > > > > > >    rmap and flush the TLB) and have less impact on performance (don't
> > > > > > > > > > > > > >    cause major PFs and can be non-blocking if needed again).
> > > > > > > > > > > > > > 2. The observation that they are likely to be single-use. E.g., for
> > > > > > > > > > > > > >    client use cases like Android, its apps parse configuration files
> > > > > > > > > > > > > >    and store the data in heap (anon); for server use cases like MySQL,
> > > > > > > > > > > > > >    it reads from InnoDB files and holds the cached data for tables in
> > > > > > > > > > > > > >    buffer pools (anon).
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > However, the oldest generation can be very short-lived, and if so, it
> > > > > > > > > > > > > > doesn't provide the PID controller with enough time to respond to a
> > > > > > > > > > > > > > surge of refaults. (Note that the PID controller uses weighted
> > > > > > > > > > > > > > refaults, and those from evicted generations only take half of the
> > > > > > > > > > > > > > whole weight.) In other words, for a short-lived generation, the
> > > > > > > > > > > > > > moving average smooths out the spike quickly.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > To fix the problem:
> > > > > > > > > > > > > > 1. For folios that are already on LRU, if they can be beyond the
> > > > > > > > > > > > > >    tracking range of tiers, i.e., five accesses through file
> > > > > > > > > > > > > >    descriptors, move them to the second oldest generation to give them
> > > > > > > > > > > > > >    more time to age. (Note that tiers are used by the PID controller
> > > > > > > > > > > > > >    to statistically determine whether folios accessed multiple times
> > > > > > > > > > > > > >    through file descriptors are worth protecting.)
> > > > > > > > > > > > > > 2. When adding unmapped folios to LRU, adjust the placement of them so
> > > > > > > > > > > > > >    that they are not too close to the tail. The effect of this is
> > > > > > > > > > > > > >    similar to the above.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > On Android, launching 55 apps sequentially:
> > > > > > > > > > > > > >                            Before      After      Change
> > > > > > > > > > > > > >   workingset_refault_anon  25641024    25598972   0%
> > > > > > > > > > > > > >   workingset_refault_file  115016834   106178438  -8%
> > > > > > > > > > > > >
> > > > > > > > > > > > > Hi Yu,
> > > > > > > > > > > > >
> > > > > > > > > > > > > Thank you for your amazing work on MGLRU.
> > > > > > > > > > > > >
> > > > > > > > > > > > > I believe this is similar to the issue I was trying to resolve previously:
> > > > > > > > > > > > > https://lwn.net/Articles/945266/
> > > > > > > > > > > > > The idea is to use the refault distance to decide whether a page should
> > > > > > > > > > > > > be placed in the oldest generation or some other gen, which per my tests
> > > > > > > > > > > > > worked very well, and we have been using refault distance for MGLRU in
> > > > > > > > > > > > > multiple workloads. (A minimal sketch of the idea follows after the
> > > > > > > > > > > > > results below.)
> > > > > > > > > > > > >
> > > > > > > > > > > > > There are a few issues left in my previous RFC series, like anon pages
> > > > > > > > > > > > > in MGLRU shouldn't be considered. I wanted to collect feedback or test
> > > > > > > > > > > > > cases, but unfortunately it seems it didn't get much attention
> > > > > > > > > > > > > upstream.
> > > > > > > > > > > > >
> > > > > > > > > > > > > I think both this patch and my previous series are solving the same
> > > > > > > > > > > > > file-page underprotection issue, and I did a quick test using this
> > > > > > > > > > > > > series; for the MongoDB test, refault distance still seems to be the
> > > > > > > > > > > > > better solution (I'm not saying these two optimizations are mutually
> > > > > > > > > > > > > exclusive, though -- they just have some conflicts in implementation
> > > > > > > > > > > > > while solving a similar problem):
> > > > > > > > > > > > >
> > > > > > > > > > > > > Previous result:
> > > > > > > > > > > > > ==================================================================
> > > > > > > > > > > > > Execution Results after 905 seconds
> > > > > > > > > > > > > ------------------------------------------------------------------
> > > > > > > > > > > > >              Executed        Time (µs)       Rate
> > > > > > > > > > > > > STOCK_LEVEL  2542            27121571486.2   0.09 txn/s
> > > > > > > > > > > > > ------------------------------------------------------------------
> > > > > > > > > > > > > TOTAL        2542            27121571486.2   0.09 txn/s
> > > > > > > > > > > > >
> > > > > > > > > > > > > This patch:
> > > > > > > > > > > > > ==================================================================
> > > > > > > > > > > > > Execution Results after 900 seconds
> > > > > > > > > > > > > ------------------------------------------------------------------
> > > > > > > > > > > > >              Executed        Time (µs)       Rate
> > > > > > > > > > > > > STOCK_LEVEL  1594            27061522574.4   0.06 txn/s
> > > > > > > > > > > > > ------------------------------------------------------------------
> > > > > > > > > > > > > TOTAL        1594            27061522574.4   0.06 txn/s
> > > > > > > > > > > > >
> > > > > > > > > > > > > The unpatched version is always around ~500.
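For reference, the refault-distance placement that Kairui describes above can be pictured with a small userspace C sketch. This is only an illustration under assumed semantics (a per-node eviction counter snapshotted into a shadow entry at eviction and compared on refault), not the kernel's actual mm/workingset.c logic; all names here are hypothetical.

#include <stdbool.h>
#include <stdio.h>

struct node_state {
	unsigned long evictions;      /* monotonic count of evicted pages */
	unsigned long protected_size; /* pages the node can keep protected */
};

/* At eviction time, snapshot the counter into the page's shadow entry. */
static unsigned long make_shadow(const struct node_state *node)
{
	return node->evictions;
}

/*
 * At refault time, the distance is how many evictions happened while the
 * page was out of memory. A small distance means the page would have
 * survived had it been protected, so place it in a younger generation
 * rather than the oldest one.
 */
static bool place_in_younger_gen(const struct node_state *node,
				 unsigned long shadow)
{
	unsigned long distance = node->evictions - shadow;

	return distance < node->protected_size;
}

int main(void)
{
	struct node_state node = { .evictions = 1000, .protected_size = 256 };
	unsigned long shadow = make_shadow(&node);

	node.evictions += 100; /* 100 pages evicted before the refault */
	printf("protect on refault: %d\n", place_in_younger_gen(&node, shadow));
	return 0;
}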
> > > > > > > > > > > >
> > > > > > > > > > > > Thanks for the test results!
> > > > > > > > > > > >
> > > > > > > > > > > > > I think there are a few points here:
> > > > > > > > > > > > > - Refault distance makes use of page shadows, so it can better
> > > > > > > > > > > > >   distinguish evicted pages of different access patterns (re-access
> > > > > > > > > > > > >   distance).
> > > > > > > > > > > > > - A throttled refault distance can help hold part of the workingset
> > > > > > > > > > > > >   when memory is too small to hold the whole workingset.
> > > > > > > > > > > > >
> > > > > > > > > > > > > So maybe parts of this patch and bits of the previous series can be
> > > > > > > > > > > > > combined to work better on this issue -- what do you think?
> > > > > > > > > > > >
> > > > > > > > > > > > I'll try to find some time this week to look at your RFC. It'd be a
> > > > > > > > > >
> > > > > > > > > > Hi Yu,
> > > > > > > > > >
> > > > > > > > > > I'm working on V4 of the RFC now, which just updates some comments and
> > > > > > > > > > skips anon page re-activation in the refault path for MGLRU, which was
> > > > > > > > > > not very helpful -- only some tiny adjustments.
> > > > > > > > > > And I found it easier to test with fio, using the following test script:
> > > > > > > > > >
> > > > > > > > > > #!/bin/bash
> > > > > > > > > > swapoff -a
> > > > > > > > > >
> > > > > > > > > > modprobe brd rd_nr=1 rd_size=16777216
> > > > > > > > > > mkfs.ext4 /dev/ram0
> > > > > > > > > > mount /dev/ram0 /mnt
> > > > > > > > > >
> > > > > > > > > > mkdir -p /sys/fs/cgroup/benchmark
> > > > > > > > > > cd /sys/fs/cgroup/benchmark
> > > > > > > > > >
> > > > > > > > > > echo 4G > memory.max
> > > > > > > > > > echo $$ > cgroup.procs
> > > > > > > > > > echo 3 > /proc/sys/vm/drop_caches
> > > > > > > > > >
> > > > > > > > > > fio -name=mglru --numjobs=12 --directory=/mnt --size=1024m \
> > > > > > > > > >     --buffered=1 --ioengine=io_uring --iodepth=128 \
> > > > > > > > > >     --iodepth_batch_submit=32 --iodepth_batch_complete=32 \
> > > > > > > > > >     --rw=randread --random_distribution=zipf:0.5 --norandommap \
> > > > > > > > > >     --time_based --ramp_time=5m --runtime=5m --group_reporting
> > > > > > > > > >
> > > > > > > > > > zipf:0.5 is used here to simulate a cached read with a slight bias
> > > > > > > > > > towards certain pages.
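As an aside, --random_distribution=zipf:0.5 in the script above controls how skewed the read offsets are. A minimal C sketch of sampling such a distribution -- assuming the textbook Zipf form P(i) proportional to 1/i^0.5, which fio approximates -- shows the "slight bias": low-numbered pages are hit more often, but the tail still sees regular traffic.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define NPAGES 1024

int main(void)
{
	static double cdf[NPAGES];
	double total = 0.0;

	/* Build the cumulative distribution for P(i) ~ 1/(i+1)^0.5. */
	for (int i = 0; i < NPAGES; i++) {
		total += 1.0 / pow(i + 1, 0.5);
		cdf[i] = total;
	}

	/* Draw a few page indices (unseeded rand(), so deterministic). */
	for (int n = 0; n < 8; n++) {
		double r = (double)rand() / RAND_MAX * total;
		int lo = 0, hi = NPAGES - 1;

		while (lo < hi) { /* first index with cdf[index] >= r */
			int mid = (lo + hi) / 2;

			if (cdf[mid] < r)
				lo = mid + 1;
			else
				hi = mid;
		}
		printf("page %d\n", lo);
	}
	return 0;
}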
> > > > > > > > > >
> > > > > > > > > > Unpatched 6.7-rc4:
> > > > > > > > > > Run status group 0 (all jobs):
> > > > > > > > > >   READ: bw=6548MiB/s (6866MB/s), 6548MiB/s-6548MiB/s
> > > > > > > > > > (6866MB/s-6866MB/s), io=1918GiB (2060GB), run=300001-300001msec
> > > > > > > > > >
> > > > > > > > > > Patched with RFC v4:
> > > > > > > > > > Run status group 0 (all jobs):
> > > > > > > > > >   READ: bw=7270MiB/s (7623MB/s), 7270MiB/s-7270MiB/s
> > > > > > > > > > (7623MB/s-7623MB/s), io=2130GiB (2287GB), run=300001-300001msec
> > > > > > > > > >
> > > > > > > > > > Patched with this series:
> > > > > > > > > > Run status group 0 (all jobs):
> > > > > > > > > >   READ: bw=7098MiB/s (7442MB/s), 7098MiB/s-7098MiB/s
> > > > > > > > > > (7442MB/s-7442MB/s), io=2079GiB (2233GB), run=300002-300002msec
> > > > > > > > > >
> > > > > > > > > > MGLRU off:
> > > > > > > > > > Run status group 0 (all jobs):
> > > > > > > > > >   READ: bw=6525MiB/s (6842MB/s), 6525MiB/s-6525MiB/s
> > > > > > > > > > (6842MB/s-6842MB/s), io=1912GiB (2052GB), run=300002-300002msec
> > > > > > > > > >
> > > > > > > > > > - If I change zipf:0.5 to random:
> > > > > > > > > > Unpatched 6.7-rc4:
> > > > > > > > > > Run status group 0 (all jobs):
> > > > > > > > > >   READ: bw=5975MiB/s (6265MB/s), 5975MiB/s-5975MiB/s
> > > > > > > > > > (6265MB/s-6265MB/s), io=1750GiB (1879GB), run=300002-300002msec
> > > > > > > > > >
> > > > > > > > > > Patched with RFC v4:
> > > > > > > > > > Run status group 0 (all jobs):
> > > > > > > > > >   READ: bw=5987MiB/s (6278MB/s), 5987MiB/s-5987MiB/s
> > > > > > > > > > (6278MB/s-6278MB/s), io=1754GiB (1883GB), run=300001-300001msec
> > > > > > > > > >
> > > > > > > > > > Patched with this series:
> > > > > > > > > > Run status group 0 (all jobs):
> > > > > > > > > >   READ: bw=5839MiB/s (6123MB/s), 5839MiB/s-5839MiB/s
> > > > > > > > > > (6123MB/s-6123MB/s), io=1711GiB (1837GB), run=300001-300001msec
> > > > > > > > > >
> > > > > > > > > > MGLRU off:
> > > > > > > > > > Run status group 0 (all jobs):
> > > > > > > > > >   READ: bw=5689MiB/s (5965MB/s), 5689MiB/s-5689MiB/s
> > > > > > > > > > (5965MB/s-5965MB/s), io=1667GiB (1790GB), run=300003-300003msec
> > > > > > > > > >
> > > > > > > > > > fio uses a ramdisk, so LRU accuracy has a smaller impact. The MongoDB
> > > > > > > > > > test I provided before uses a SATA SSD, so it has a much higher
> > > > > > > > > > impact. I'll provide a script to set up the test case and run it; it's
> > > > > > > > > > more complex to set up than fio since it involves setting up multiple
> > > > > > > > > > replicas and auth and hundreds of GB of test fixtures. I'm currently
> > > > > > > > > > occupied by some other tasks but will try my best to send them out as
> > > > > > > > > > soon as possible.
> > > > > > > > >
> > > > > > > > > Thanks! Apparently your RFC did show better IOPS with both access
> > > > > > > > > patterns, which was a surprise to me because it had higher refaults,
> > > > > > > > > and usually higher refaults result in worse performance.
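The refault counts being traded back and forth here are what MGLRU's PID controller consumes. Going by the patch description earlier in the thread -- refaults from evicted generations take only half of the whole weight -- a toy C sketch (illustrative only, not the mm/vmscan.c implementation) shows why a short-lived oldest generation smooths out a refault spike: most of the spike lands in the half-weight bucket.

#include <stdio.h>

/*
 * Assumed split: refaults attributed to generations still being tracked
 * count in full; refaults from already-evicted generations count at half
 * weight, per the patch description quoted above.
 */
static double weighted_refaults(unsigned long tracked, unsigned long evicted)
{
	return (double)tracked + (double)evicted / 2.0;
}

int main(void)
{
	/* The same burst of 10000 refaults, attributed differently
	 * depending on how long the oldest generation lived. */
	printf("long-lived oldest gen:  %.0f\n", weighted_refaults(8000, 2000));
	printf("short-lived oldest gen: %.0f\n", weighted_refaults(1000, 9000));
	return 0;
}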
> > > > >
> > > > > And thanks for providing the refault counts I requested -- your data
> > > > > below confirms what I mentioned above:
> > > > >
> > > > > For fio:
> > > > >                          Your RFC    This series  Change
> > > > > workingset_refault_file  628192729   596790506    -5%
> > > > > IOPS                     1862k       1830k        -2%
> > > > >
> > > > > For MongoDB:
> > > > >                          Your RFC    This series  Change
> > > > > workingset_refault_anon  10512       35277        +30%
> > > > > workingset_refault_file  22751782    20335355     -11%
> > > > > total                    22762294    20370632     -11%
> > > > > TPS                      0.09        0.06         -33%
> > > > >
> > > > > For MongoDB, this series should be a big win (but apparently it's not),
> > > > > especially when using zram, since an anon refault should be a lot
> > > > > cheaper than a file refault.
> > > > >
> > > > > So, I'm baffled...
> > > > >
> > > > > One important detail I forgot to mention: based on your data from
> > > > > lru_gen_full, I think there is another difference between our Kconfigs:
> > > > >
> > > > >                 Your Kconfig  My Kconfig  Max possible
> > > > > LRU_REFS_WIDTH  1             2           2
> > > >
> > > > Hi Yu,
> > > >
> > > > Thanks for the info. My fault -- I forgot to update my config as I was
> > > > testing some other features.
> > > > But after I changed LRU_REFS_WIDTH to 2 by disabling IDLE_PAGE, things
> > > > got much worse for the MongoDB test:
> > > >
> > > > With LRU_REFS_WIDTH == 2:
> > > >
> > > > This patch:
> > > > ==================================================================
> > > > Execution Results after 919 seconds
> > > > ------------------------------------------------------------------
> > > >              Executed        Time (µs)       Rate
> > > > STOCK_LEVEL  488             27598136201.9   0.02 txn/s
> > > > ------------------------------------------------------------------
> > > > TOTAL        488             27598136201.9   0.02 txn/s
> > > >
> > > > memcg 86 /system.slice/docker-1c3a90be9f0a072f5719332419550cd0e1455f2cd5863bc2780ca4d3f913ece5.scope
> > > >  node 0
> > > >    1  948187  0x  0x
> > > >       0  0  0  0  0  0  0
> > > >       1  0  0  0  0  0  0
> > > >       2  0  0  0  0  0  0
> > > >       3  0  0  0  0  0  0
> > > >          0  0  0  0  0  0
> > > >    2  948187  0  6051788
> > > >       0  0r  0e  0p  11916r  66442e  0p
> > > >       1  0r  0e  0p  903r    16888e  0p
> > > >       2  0r  0e  0p  459r    9764e   0p
> > > >       3  0r  0e  0p  0r      0e      2874p
> > > >          0  0  0  0  0  0
> > > >    3  948187  1353160  6351
> > > >       0  0  0  0  0  0  0
> > > >       1  0  0  0  0  0  0
> > > >       2  0  0  0  0  0  0
> > > >       3  0  0  0  0  0  0
> > > >          0  0  0  0  0  0
> > > >    4  73045  23573  12
> > > >       0  0R  0T  0  3498607R  4868605T  0
> > > >       1  0R  0T  0  3012246R  3270261T  0
> > > >       2  0R  0T  0  2498608R  2839104T  0
> > > >       3  0R  1T  0  0R        1983947T  0
> > > >    1486579L  0O  1380614Y  2945N  2945F  2734A
> > > >
> > > > workingset_refault_anon 0
> > > > workingset_refault_file 18130598
> > > >
> > > >        total   used   free   shared   buff/cache   available
> > > > Mem:   31978   6705   312    20       24960        24786
> > > > Swap:  31977   4      31973
> > > >
> > > > RFC:
> > > > ==================================================================
> > > > Execution Results after 908 seconds
> > > > ------------------------------------------------------------------
> > > >              Executed        Time (µs)       Rate
> > > > STOCK_LEVEL  2252            27159962888.2   0.08 txn/s
> > > > ------------------------------------------------------------------
> > > > TOTAL        2252            27159962888.2   0.08 txn/s
> > > >
> > > > workingset_refault_anon 22585
> > > > workingset_refault_file 22715256
> > > >
> > > > memcg 66 /system.slice/docker-0989446ff78106e32d3f400a0cf371c9a703281bded86d6d6bb1af706ebb25da.scope
> > > >  node 0
> > > >    22  563007  2274  1198225
> > > >       0  0r  1e  0p  0r  697076e  0p
> > > >       1  0r  0e  0p  0r  0e       325661p
> > > >       2  0r  0e  0p  0r  0e       888728p
> > > >       3  0r  0e  0p  0r  0e       3602238p
> > > >          0  0  0  0  0  0
> > > >    23  532222  7525  4948747
> > > >       0  0  0  0  0  0  0
> > > >       1  0  0  0  0  0  0
> > > >       2  0  0  0  0  0  0
> > > >       3  0  0  0  0  0  0
> > > >          0  0  0  0  0  0
> > > >    24  500367  1214667  3292
> > > >       0  0  0  0  0  0  0
> > > >       1  0  0  0  0  0  0
> > > >       2  0  0  0  0  0  0
> > > >       3  0  0  0  0  0  0
> > > >          0  0  0  0  0  0
> > > >    25  469692  40797  466
> > > >       0  0R  271T  0  0R       1162165T  0
> > > >       1  0R  0T    0  774028R  1205332T  0
> > > >       2  0R  0T    0  0R       932484T   0
> > > >       3  0R  1T    0  0R       4252158T  0
> > > >    25178380L  156515O  23953602Y  59234N  49391F  48664A
> > > >
> > > >        total   used   free   shared   buff/cache   available
> > > > Mem:   31978   6968   338    5        24671        24555
> > > > Swap:  31977   1533   30444
> > > >
> > > > Using the same MongoDB config (a 3-replica cluster using the same
> > > > config):
> > > > {
> > > >     "net": {
> > > >         "bindIpAll": true,
> > > >         "ipv6": false,
> > > >         "maxIncomingConnections": 10000
> > > >     },
> > > >     "setParameter": {
> > > >         "disabledSecureAllocatorDomains": "*"
> > > >     },
> > > >     "replication": {
> > > >         "oplogSizeMB": 10480,
> > > >         "replSetName": "issa-tpcc_0"
> > > >     },
> > > >     "security": {
> > > >         "keyFile": "/data/db/keyfile"
> > > >     },
> > > >     "storage": {
> > > >         "dbPath": "/data/db/",
> > > >         "syncPeriodSecs": 60,
> > > >         "directoryPerDB": true,
> > > >         "wiredTiger": {
> > > >             "engineConfig": {
> > > >                 "cacheSizeGB": 5
> > > >             }
> > > >         }
> > > >     },
> > > >     "systemLog": {
> > > >         "destination": "file",
> > > >         "logAppend": true,
> > > >         "logRotate": "rename",
> > > >         "path": "/data/db/mongod.log",
> > > >         "verbosity": 0
> > > >     }
> > > > }
> > > >
> > > > The test environment has 32G of memory and 16 cores.
> > > >
> > > > By my analysis, the access pattern of the MongoDB test is that pages
> > > > are re-accessed long after they are evicted, so the PID controller
> > > > won't protect the higher tiers. The RFC makes use of the long-existing
> > > > shadow entries to feed back into the PID controller and the
> > > > generations, so its result is much better. It still needs more tuning,
> > > > though; I will try to rebase it on top of mm-unstable, which includes
> > > > your patch.
> > > >
> > > > I have no idea why workingset_refault_* is higher in the better case;
> > > > this is clearly an IO-bound workload -- memory and IO are busy while
> > > > the CPU is not fully utilized...
> > > >
> > > > I've uploaded my local reproducer here:
> > > > https://github.com/ryncsn/emm-test-project/tree/master/mongo-cluster
> > > > https://github.com/ryncsn/py-tpcc
> > >
> > > Thanks for the repos -- I'm trying them right now. Which MongoDB
> > > version did you use? setup.sh didn't seem to install it.
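Kairui's point above about feeding the long-existing shadow entries back into the PID controller and the generations can be pictured with a short C sketch of packing and unpacking a shadow word. The field layout is purely illustrative -- the kernel's real encoding in mm/workingset.c differs and also carries node and memcg information.

#include <assert.h>
#include <stdio.h>

#define TIER_BITS 2UL /* hypothetical: 4 tiers fit in 2 bits */

/* Pack the eviction sequence number and tier into one shadow word. */
static unsigned long pack_shadow(unsigned long evict_seq, unsigned int tier)
{
	return (evict_seq << TIER_BITS) | tier;
}

/* Recover both fields on refault so they can feed the controller. */
static void unpack_shadow(unsigned long shadow,
			  unsigned long *evict_seq, unsigned int *tier)
{
	*tier = shadow & ((1UL << TIER_BITS) - 1);
	*evict_seq = shadow >> TIER_BITS;
}

int main(void)
{
	unsigned long seq;
	unsigned int tier;

	unpack_shadow(pack_shadow(948187, 3), &seq, &tier);
	assert(seq == 948187 && tier == 3);
	printf("refault: evicted at seq %lu from tier %u\n", seq, tier);
	return 0;
}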
> > >
> > > Also, do you have a QEMU image? It'd be a lot easier for me to
> > > duplicate the exact environment by looking into it.
> >
> > I ended up using docker.io/mongodb/mongodb-community-server:latest,
> > and it's not working:
> >
> > # docker exec -it mongo-r1 mongosh --eval \
> >     '"rs.initiate({
> >     _id: "issa-tpcc_0",
> >     members: [
> >         {_id: 0, host: "mongo-r1"},
> >         {_id: 1, host: "mongo-r2"},
> >         {_id: 2, host: "mongo-r3"}
> >     ]
> >     })"'
> > Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
> > Error: can only create exec sessions on running containers: container
> > state improper
>
> Hi Yu,
>
> I've updated the test repo:
> https://github.com/ryncsn/emm-test-project/tree/master/mongo-cluster
>
> I've tested it on top of the latest Fedora Cloud Image 39, and it worked
> well for me; the README now contains detailed, easy-to-follow steps to
> reproduce this test.

Thanks. I was following the instructions down to the letter, and it fell
apart again at line 46 (./tpcc.py). Were you able to successfully run the
benchmark on a fresh VM by following the instructions? If not, I'd
appreciate it if you could do so and document all the missing steps.