From: Kairui Song <ryncsn@gmail.com>
Date: Wed, 20 Dec 2023 16:24:28 +0800
Subject: Re: [PATCH mm-unstable v1 1/4] mm/mglru: fix underprotected page cache
To: Yu Zhao <yuzhao@google.com>
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Charan Teja Kalla, Kalesh Singh, stable@vger.kernel.org
References: <20231208061407.2125867-1-yuzhao@google.com>

Yu Zhao wrote on Wed, 20 Dec 2023 at 16:17:
>
> On Tue, Dec 19, 2023 at 11:38 PM Yu Zhao wrote:
> >
> > On Tue, Dec 19, 2023 at 11:58 AM Kairui Song wrote:
> > >
> > > Yu Zhao wrote on Tue, 19 Dec 2023 at 11:45:
> > > >
> > > > On Mon, Dec 18, 2023 at 8:21 PM Yu Zhao wrote:
> > > > >
> > > > > On Mon, Dec 18, 2023 at 11:05 AM Kairui Song wrote:
> > > > > >
> > > > > > Yu Zhao wrote on Fri, 15 Dec 2023 at 12:56:
> > > > > > >
> > > > > > > On Thu, Dec 14, 2023 at 04:51:00PM -0700, Yu Zhao wrote:
> > > > > > > >
> > > > > > > > On Thu, Dec 14, 2023 at 11:38 AM Kairui Song wrote:
> > > > > > > > >
> > > > > > > > > Yu Zhao wrote on Thu, 14 Dec 2023 at 11:09:
> > > > > > > > > >
> > > > > > > > > > On Wed, Dec 13, 2023 at 12:59:14AM -0700, Yu Zhao wrote:
> > > > > > > > > > >
> > > > > > > > > > > On Tue, Dec 12, 2023 at 8:03 PM Kairui Song <ryncsn@gmail.com> wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > Kairui Song wrote on Tue, 12 Dec 2023 at 14:52:
> > > > > > > > > > > > >
> > > > > > > > > > > > > Yu Zhao wrote on Tue, 12 Dec 2023 at 06:07:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > On Fri, Dec 8, 2023 at 1:24 AM Kairui Song wrote:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Yu Zhao wrote on Fri, 8 Dec 2023 at 14:14:
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Unmapped folios accessed through file descriptors can be
> > > > > > > > > > > > > > > > underprotected. Those folios are added to the oldest generation based
> > > > > > > > > > > > > > > > on:
> > > > > > > > > > > > > > > > 1. The fact that they are less costly to reclaim (no need to walk the
> > > > > > > > > > > > > > > >    rmap and flush the TLB) and have less impact on performance (don't
> > > > > > > > > > > > > > > >    cause major PFs and can be non-blocking if needed again).
> > > > > > > > > > > > > > > > 2. The observation that they are likely to be single-use. E.g., for
> > > > > > > > > > > > > > > >    client use cases like Android, its apps parse configuration files
> > > > > > > > > > > > > > > >    and store the data in heap (anon); for server use cases like MySQL,
> > > > > > > > > > > > > > > >    it reads from InnoDB files and holds the cached data for tables in
> > > > > > > > > > > > > > > >    buffer pools (anon).
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > However, the oldest generation can be very short lived, and if so, it
> > > > > > > > > > > > > > > > doesn't provide the PID controller with enough time to respond to a
> > > > > > > > > > > > > > > > surge of refaults. (Note that the PID controller uses weighted
> > > > > > > > > > > > > > > > refaults, and those from evicted generations only take half of the
> > > > > > > > > > > > > > > > whole weight.) In other words, for a short lived generation, the
> > > > > > > > > > > > > > > > moving average smooths out the spike quickly.
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > To fix the problem:
> > > > > > > > > > > > > > > > 1. For folios that are already on LRU, if they can be beyond the
> > > > > > > > > > > > > > > >    tracking range of tiers, i.e., five accesses through file
> > > > > > > > > > > > > > > >    descriptors, move them to the second oldest generation to give them
> > > > > > > > > > > > > > > >    more time to age. (Note that tiers are used by the PID controller
> > > > > > > > > > > > > > > >    to statistically determine whether folios accessed multiple times
> > > > > > > > > > > > > > > >    through file descriptors are worth protecting.)
> > > > > > > > > > > > > > > > 2. When adding unmapped folios to LRU, adjust the placement of them so
> > > > > > > > > > > > > > > >    that they are not too close to the tail. The effect of this is
> > > > > > > > > > > > > > > >    similar to the above.
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > On Android, launching 55 apps sequentially:
> > > > > > > > > > > > > > > >                            Before      After      Change
> > > > > > > > > > > > > > > >   workingset_refault_anon  25641024    25598972   0%
> > > > > > > > > > > > > > > >   workingset_refault_file  115016834   106178438  -8%
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Hi Yu,
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Thank you for your amazing work on MGLRU.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > I believe this is similar to the issue I was trying to resolve
> > > > > > > > > > > > > > > previously:
> > > > > > > > > > > > > > > https://lwn.net/Articles/945266/
> > > > > > > > > > > > > > > The idea is to use the refault distance to decide whether a page
> > > > > > > > > > > > > > > should be placed in the oldest generation or some other generation,
> > > > > > > > > > > > > > > which per my tests worked very well, and we have been using refault
> > > > > > > > > > > > > > > distance for MGLRU in multiple workloads.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > There are a few issues left in my previous RFC series, like anon
> > > > > > > > > > > > > > > pages in MGLRU shouldn't be considered, and I wanted to collect
> > > > > > > > > > > > > > > feedback or test cases, but unfortunately it didn't seem to get much
> > > > > > > > > > > > > > > attention upstream.
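For context, the refault-distance idea referenced above boils down to
roughly the following sketch (illustrative C only, not the actual RFC
code; the names, the shadow encoding, and the thresholds are all made
up here):

    /* On eviction: pack an eviction timestamp into the page's shadow entry. */
    static inline void *make_shadow(unsigned long evict_age)
    {
            return (void *)((evict_age << 1) | 1);  /* low bit tags a shadow */
    }

    /*
     * On refault: the distance is how far the eviction clock advanced while
     * the page was gone. A short distance means the page would have stayed
     * resident with a little more protection, so place it in a younger
     * generation instead of the oldest one.
     */
    static int refault_target_gen(void *shadow, unsigned long now_age,
                                  unsigned long workingset_size, int nr_gens)
    {
            unsigned long distance = now_age - ((unsigned long)shadow >> 1);

            if (distance < workingset_size)
                    return nr_gens - 1;     /* youngest: strong protection */
            if (distance < 2 * workingset_size)
                    return nr_gens - 2;     /* middle generation: some protection */
            return 0;                       /* oldest: likely single use */
    }

The point is that the shadow entry survives eviction, so pages that come
back long after they were evicted can still be told apart from genuinely
single-use pages.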
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > I think both this patch and my previous series are for solving the
> > > > > > > > > > > > > > > file pages underprotected issue, and I did a quick test using this
> > > > > > > > > > > > > > > series; for the MongoDB test, refault distance still seems to be the
> > > > > > > > > > > > > > > better solution (I'm not saying the two optimizations are mutually
> > > > > > > > > > > > > > > exclusive, though, just that they conflict somewhat in implementation
> > > > > > > > > > > > > > > and solve a similar problem):
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Previous result:
> > > > > > > > > > > > > > > ==================================================================
> > > > > > > > > > > > > > > Execution Results after 905 seconds
> > > > > > > > > > > > > > > ------------------------------------------------------------------
> > > > > > > > > > > > > > >                Executed        Time (µs)       Rate
> > > > > > > > > > > > > > >   STOCK_LEVEL  2542            27121571486.2   0.09 txn/s
> > > > > > > > > > > > > > > ------------------------------------------------------------------
> > > > > > > > > > > > > > >   TOTAL        2542            27121571486.2   0.09 txn/s
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > This patch:
> > > > > > > > > > > > > > > ==================================================================
> > > > > > > > > > > > > > > Execution Results after 900 seconds
> > > > > > > > > > > > > > > ------------------------------------------------------------------
> > > > > > > > > > > > > > >                Executed        Time (µs)       Rate
> > > > > > > > > > > > > > >   STOCK_LEVEL  1594            27061522574.4   0.06 txn/s
> > > > > > > > > > > > > > > ------------------------------------------------------------------
> > > > > > > > > > > > > > >   TOTAL        1594            27061522574.4   0.06 txn/s
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > The unpatched version is always around ~500.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Thanks for the test results!
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > > I think there are a few points here:
> > > > > > > > > > > > > > > - Refault distance makes use of page shadows, so it can better
> > > > > > > > > > > > > > >   distinguish evicted pages with different access patterns
> > > > > > > > > > > > > > >   (re-access distance).
> > > > > > > > > > > > > > > - Throttled refault distance can help hold part of the working set
> > > > > > > > > > > > > > >   when memory is too small to hold the whole working set.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > So maybe parts of this patch and bits of the previous series can be
> > > > > > > > > > > > > > > combined to work better on this issue, what do you think?
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > I'll try to find some time this week to look at your RFC. It'd be a
> > > > > > > > > > > >
> > > > > > > > > > > > Hi Yu,
> > > > > > > > > > > >
> > > > > > > > > > > > I'm working on V4 of the RFC now, which just updates some comments and
> > > > > > > > > > > > skips anon page re-activation in the refault path for MGLRU, which was
> > > > > > > > > > > > not very helpful; only some tiny adjustments.
> > > > > > > > > > > >
> > > > > > > > > > > > And I found it easier to test with fio, using the following test script:
> > > > > > > > > > > >
> > > > > > > > > > > > #!/bin/bash
> > > > > > > > > > > > swapoff -a
> > > > > > > > > > > >
> > > > > > > > > > > > modprobe brd rd_nr=1 rd_size=16777216
> > > > > > > > > > > > mkfs.ext4 /dev/ram0
> > > > > > > > > > > > mount /dev/ram0 /mnt
> > > > > > > > > > > >
> > > > > > > > > > > > mkdir -p /sys/fs/cgroup/benchmark
> > > > > > > > > > > > cd /sys/fs/cgroup/benchmark
> > > > > > > > > > > >
> > > > > > > > > > > > echo 4G > memory.max
> > > > > > > > > > > > echo $$ > cgroup.procs
> > > > > > > > > > > > echo 3 > /proc/sys/vm/drop_caches
> > > > > > > > > > > >
> > > > > > > > > > > > fio -name=mglru --numjobs=12 --directory=/mnt --size=1024m \
> > > > > > > > > > > >     --buffered=1 --ioengine=io_uring --iodepth=128 \
> > > > > > > > > > > >     --iodepth_batch_submit=32 --iodepth_batch_complete=32 \
> > > > > > > > > > > >     --rw=randread --random_distribution=zipf:0.5 --norandommap \
> > > > > > > > > > > >     --time_based --ramp_time=5m --runtime=5m --group_reporting
> > > > > > > > > > > >
> > > > > > > > > > > > zipf:0.5 is used here to simulate a cached read with a slight bias
> > > > > > > > > > > > towards certain pages.
> > > > > > > > > > > >
> > > > > > > > > > > > Unpatched 6.7-rc4:
> > > > > > > > > > > > Run status group 0 (all jobs):
> > > > > > > > > > > >    READ: bw=6548MiB/s (6866MB/s), 6548MiB/s-6548MiB/s
> > > > > > > > > > > > (6866MB/s-6866MB/s), io=1918GiB (2060GB), run=300001-300001msec
> > > > > > > > > > > >
> > > > > > > > > > > > Patched with RFC v4:
> > > > > > > > > > > > Run status group 0 (all jobs):
> > > > > > > > > > > >    READ: bw=7270MiB/s (7623MB/s), 7270MiB/s-7270MiB/s
> > > > > > > > > > > > (7623MB/s-7623MB/s), io=2130GiB (2287GB), run=300001-300001msec
> > > > > > > > > > > >
> > > > > > > > > > > > Patched with this series:
> > > > > > > > > > > > Run status group 0 (all jobs):
> > > > > > > > > > > >    READ: bw=7098MiB/s (7442MB/s), 7098MiB/s-7098MiB/s
> > > > > > > > > > > > (7442MB/s-7442MB/s), io=2079GiB (2233GB), run=300002-300002msec
> > > > > > > > > > > >
> > > > > > > > > > > > MGLRU off:
> > > > > > > > > > > > Run status group 0 (all jobs):
> > > > > > > > > > > >    READ: bw=6525MiB/s (6842MB/s), 6525MiB/s-6525MiB/s
> > > > > > > > > > > > (6842MB/s-6842MB/s), io=1912GiB (2052GB), run=300002-300002msec
> > > > > > > > > > > >
> > > > > > > > > > > > If I change zipf:0.5 to random:
> > > > > > > > > > > >
> > > > > > > > > > > > Unpatched 6.7-rc4:
> > > > > > > > > > > > Run status group 0 (all jobs):
> > > > > > > > > > > >    READ: bw=5975MiB/s (6265MB/s), 5975MiB/s-5975MiB/s
> > > > > > > > > > > > (6265MB/s-6265MB/s), io=1750GiB (1879GB), run=300002-300002msec
> > > > > > > > > > > >
> > > > > > > > > > > > Patched with RFC v4:
> > > > > > > > > > > > Run status group 0 (all jobs):
> > > > > > > > > > > >    READ: bw=5987MiB/s (6278MB/s), 5987MiB/s-5987MiB/s
> > > > > > > > > > > > (6278MB/s-6278MB/s), io=1754GiB (1883GB), run=300001-300001msec
> > > > > > > > > > > >
> > > > > > > > > > > > Patched with this series:
> > > > > > > > > > > > Run status group 0 (all jobs):
> > > > > > > > > > > >    READ: bw=5839MiB/s (6123MB/s), 5839MiB/s-5839MiB/s
> > > > > > > > > > > > (6123MB/s-6123MB/s), io=1711GiB (1837GB), run=300001-300001msec
> > > > > > > > > > > >
> > > > > > > > > > > > MGLRU off:
> > > > > > > > > > > > Run status group 0 (all jobs):
> > > > > > > > > > > >    READ: bw=5689MiB/s (5965MB/s), 5689MiB/s-5689MiB/s
> > > > > > > > > > > > (5965MB/s-5965MB/s), io=1667GiB (1790GB), run=300003-300003msec
> > > > > > > > > > > >
> > > > > > > > > > > > fio uses a ramdisk, so LRU accuracy has a smaller impact. The MongoDB
> > > > > > > > > > > > test I provided before uses a SATA SSD, so it has a much higher
> > > > > > > > > > > > impact. I'll provide a script to set up the test case and run it; it's
> > > > > > > > > > > > more complex to set up than fio since it involves multiple replicas,
> > > > > > > > > > > > auth, and hundreds of GB of test fixtures. I'm currently occupied by
> > > > > > > > > > > > some other tasks but will try my best to send them out as soon as
> > > > > > > > > > > > possible.
> > > > > > > > > > >
> > > > > > > > > > > Thanks! Apparently your RFC did show better IOPS with both access
> > > > > > > > > > > patterns, which was a surprise to me because it had higher refaults,
> > > > > > > > > > > and usually higher refaults result in worse performance.
> > > > > > >
> > > > > > > And thanks for providing the refaults I requested -- your data
> > > > > > > below confirms what I mentioned above:
> > > > > > >
> > > > > > > For fio:
> > > > > > >                            Your RFC     This series   Change
> > > > > > >   workingset_refault_file  628192729    596790506     -5%
> > > > > > >   IOPS                     1862k        1830k         -2%
> > > > > > >
> > > > > > > For MongoDB:
> > > > > > >                            Your RFC     This series   Change
> > > > > > >   workingset_refault_anon  10512        35277         +30%
> > > > > > >   workingset_refault_file  22751782     20335355      -11%
> > > > > > >   total                    22762294     20370632      -11%
> > > > > > >   TPS                      0.09         0.06          -33%
> > > > > > >
> > > > > > > For MongoDB, this series should be a big win (but apparently it's not),
> > > > > > > especially when using zram, since an anon refault should be a lot
> > > > > > > cheaper than a file refault.
> > > > > > >
> > > > > > > So, I'm baffled...
> > > > > > >
> > > > > > > One important detail I forgot to mention: based on your data from
> > > > > > > lru_gen_full, I think there is another difference between our Kconfigs:
> > > > > > >
> > > > > > >                  Your Kconfig    My Kconfig    Max possible
> > > > > > >   LRU_REFS_WIDTH 1               2             2
> > > > > >
> > > > > > Hi Yu,
> > > > > >
> > > > > > Thanks for the info. My fault, I forgot to update my config as I was
> > > > > > testing some other features.
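For readers wondering why LRU_REFS_WIDTH matters so much here: my
understanding is that MGLRU counts accesses through file descriptors in
a small bitfield of that width in folio->flags, and a folio's tier is
roughly the log2 bucket of that count, so a narrower field saturates
earlier and collapses distinct access frequencies into the same tier.
A simplified, self-contained illustration (tier_from_refs is a
stand-in, not the kernel's exact macro, and the kernel's exact
constants differ slightly; this only shows the shape of the mapping):

    #include <stdio.h>

    /* Stand-in for the kernel's tier computation: tier is the log2
     * bucket of the access count (0 -> tier 0, 1 -> tier 1,
     * 2..3 -> tier 2, 4..7 -> tier 3). */
    static int tier_from_refs(unsigned int refs)
    {
            int tier = 0;

            while ((1u << tier) < refs + 1)
                    tier++;
            return tier;
    }

    int main(void)
    {
            for (unsigned int width = 1; width <= 2; width++) {
                    unsigned int max_refs = (1u << width) - 1;

                    printf("LRU_REFS_WIDTH=%u: refs saturate at %u -> max tier %d\n",
                           width, max_refs, tier_from_refs(max_refs));
            }
            return 0;
    }

With width 1 the counter saturates after a single extra access, so the
PID controller has fewer usable tiers to compare and protects
repeatedly accessed page cache less accurately.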
> > > > > >
> > > > > > But after I changed LRU_REFS_WIDTH to 2 by disabling IDLE_PAGE, things
> > > > > > got much worse for the MongoDB test:
> > > > > >
> > > > > > With LRU_REFS_WIDTH == 2:
> > > > > >
> > > > > > This patch:
> > > > > > ==================================================================
> > > > > > Execution Results after 919 seconds
> > > > > > ------------------------------------------------------------------
> > > > > >                Executed        Time (µs)       Rate
> > > > > >   STOCK_LEVEL  488             27598136201.9   0.02 txn/s
> > > > > > ------------------------------------------------------------------
> > > > > >   TOTAL        488             27598136201.9   0.02 txn/s
> > > > > >
> > > > > > memcg    86 /system.slice/docker-1c3a90be9f0a072f5719332419550cd0e1455f2cd5863bc2780ca4d3f913ece5.scope
> > > > > >  node     0
> > > > > >         1     948187          0x          0x
> > > > > >                 0          0          0          0          0          0          0
> > > > > >                 1          0          0          0          0          0          0
> > > > > >                 2          0          0          0          0          0          0
> > > > > >                 3          0          0          0          0          0          0
> > > > > >                            0          0          0          0          0          0
> > > > > >         2     948187          0     6051788
> > > > > >                 0         0r         0e         0p     11916r     66442e         0p
> > > > > >                 1         0r         0e         0p       903r     16888e         0p
> > > > > >                 2         0r         0e         0p       459r      9764e         0p
> > > > > >                 3         0r         0e         0p         0r         0e      2874p
> > > > > >                            0          0          0          0          0          0
> > > > > >         3     948187    1353160       6351
> > > > > >                 0          0          0          0          0          0          0
> > > > > >                 1          0          0          0          0          0          0
> > > > > >                 2          0          0          0          0          0          0
> > > > > >                 3          0          0          0          0          0          0
> > > > > >                            0          0          0          0          0          0
> > > > > >         4      73045      23573         12
> > > > > >                 0         0R         0T          0   3498607R   4868605T          0
> > > > > >                 1         0R         0T          0   3012246R   3270261T          0
> > > > > >                 2         0R         0T          0   2498608R   2839104T          0
> > > > > >                 3         0R         0T          0         0R   1983947T          0
> > > > > >                     1486579L         0O   1380614Y      2945N      2945F      2734A
> > > > > >
> > > > > > workingset_refault_anon 0
> > > > > > workingset_refault_file 18130598
> > > > > >
> > > > > >               total        used        free      shared  buff/cache   available
> > > > > > Mem:          31978        6705         312          20       24960       24786
> > > > > > Swap:         31977           4       31973
> > > > > >
> > > > > > RFC:
> > > > > > ==================================================================
> > > > > > Execution Results after 908 seconds
> > > > > > ------------------------------------------------------------------
> > > > > >                Executed        Time (µs)       Rate
> > > > > >   STOCK_LEVEL  2252            27159962888.2   0.08 txn/s
> > > > > > ------------------------------------------------------------------
> > > > > >   TOTAL        2252            27159962888.2   0.08 txn/s
> > > > > >
> > > > > > workingset_refault_anon 22585
> > > > > > workingset_refault_file 22715256
> > > > > >
> > > > > > memcg    66 /system.slice/docker-0989446ff78106e32d3f400a0cf371c9a703281bded86d6d6bb1af706ebb25da.scope
> > > > > >  node     0
> > > > > >        22     563007       2274    1198225
> > > > > >                 0         0r         1e         0p         0r    697076e         0p
> > > > > >                 1         0r         0e         0p         0r         0e    325661p
> > > > > >                 2         0r         0e         0p         0r         0e    888728p
> > > > > >                 3         0r         0e         0p         0r         0e   3602238p
> > > > > >                            0          0          0          0          0          0
> > > > > >        23     532222       7525    4948747
> > > > > >                 0          0          0          0          0          0          0
> > > > > >                 1          0          0          0          0          0          0
> > > > > >                 2          0          0          0          0          0          0
> > > > > >                 3          0          0          0          0          0          0
> > > > > >                            0          0          0          0          0          0
> > > > > >        24     500367    1214667       3292
> > > > > >                 0          0          0          0          0          0          0
> > > > > >                 1          0          0          0          0          0          0
> > > > > >                 2          0          0          0          0          0          0
> > > > > >                 3          0          0          0          0          0          0
> > > > > >                            0          0          0          0          0          0
> > > > > >        25     469692      40797        466
> > > > > >                 0         0R       271T          0         0R   1162165T          0
> > > > > >                 1         0R         0T          0    774028R   1205332T          0
> > > > > >                 2         0R         0T          0         0R    932484T          0
> > > > > >                 3         0R         1T          0         0R   4252158T          0
> > > > > >                    25178380L    156515O  23953602Y     59234N     49391F     48664A
> > > > > >
> > > > > >               total        used        free      shared  buff/cache   available
> > > > > > Mem:          31978        6968         338           5       24671       24555
> > > > > > Swap:         31977        1533       30444
> > > > > >
> > > > > > Using the same MongoDB config (a 3-replica cluster using the same config):
> > > > > > {
> > > > > >     "net": {
> > > > > >         "bindIpAll": true,
> > > > > >         "ipv6": false,
> > > > > >         "maxIncomingConnections": 10000
> > > > > >     },
> > > > > >     "setParameter": {
> > > > > >         "disabledSecureAllocatorDomains": "*"
> > > > > >     },
> > > > > >     "replication": {
> > > > > >         "oplogSizeMB": 10480,
> > > > > >         "replSetName": "issa-tpcc_0"
> > > > > >     },
> > > > > >     "security": {
> > > > > >         "keyFile": "/data/db/keyfile"
> > > > > >     },
> > > > > >     "storage": {
> > > > > >         "dbPath": "/data/db/",
> > > > > >         "syncPeriodSecs": 60,
> > > > > >         "directoryPerDB": true,
> > > > > >         "wiredTiger": {
> > > > > >             "engineConfig": {
> > > > > >                 "cacheSizeGB": 5
> > > > > >             }
> > > > > >         }
> > > > > >     },
> > > > > >     "systemLog": {
> > > > > >         "destination": "file",
> > > > > >         "logAppend": true,
> > > > > >         "logRotate": "rename",
> > > > > >         "path": "/data/db/mongod.log",
> > > > > >         "verbosity": 0
> > > > > >     }
> > > > > > }
> > > > > >
> > > > > > The test environment has 32G memory and 16 cores.
> > > > > >
> > > > > > Per my analysis, the access pattern of the MongoDB test is that pages
> > > > > > are re-accessed long after they are evicted, so the PID controller
> > > > > > won't protect the higher tiers. The RFC makes use of the long-existing
> > > > > > shadow entries to feed back into the PID controller and generation
> > > > > > selection, so its result is much better. It still needs more
> > > > > > adjusting, though; I will try to rebase it on top of mm-unstable,
> > > > > > which includes your patch.
> > > > > >
> > > > > > I've no idea why workingset_refault_* is higher in the better case;
> > > > > > this is clearly an IO-bound workload, and memory and IO are busy while
> > > > > > the CPU is not fully used...
> > > > > >
> > > > > > I've uploaded my local reproducer here:
> > > > > > https://github.com/ryncsn/emm-test-project/tree/master/mongo-cluster
> > > > > > https://github.com/ryncsn/py-tpcc
> > > > >
> > > > > Thanks for the repos -- I'm trying them right now. Which MongoDB
> > > > > version did you use? setup.sh didn't seem to install it.
> > > > >
> > > > > Also do you have a QEMU image? It'd be a lot easier for me to
> > > > > duplicate the exact environment by looking into it.
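To make the "PID controller won't protect the higher tiers" point above
concrete: the controller effectively protects a tier only if the tier's
weighted refault rate is at least that of the baseline tier, and pages
that refault long after eviction contribute nothing to that rate. A
schematic paraphrase in C (my reading, not the kernel's exact code;
names are illustrative):

    /* Per-tier refault statistics, as the controller sees them. */
    struct ctrl_pos {
            unsigned long refaulted;   /* weighted refaults for this tier */
            unsigned long total;       /* refaulted + evicted */
            unsigned int gain;         /* weight applied to this tier */
    };

    /* Protect the tier if its weighted refault rate is at least the
     * baseline's: refaulted/total, compared cross-multiplied to avoid
     * division. If re-accesses come back only long after eviction,
     * tier->refaulted stays near zero and this keeps returning false --
     * exactly the gap the shadow-based feedback in the RFC targets. */
    static int tier_is_protected(const struct ctrl_pos *tier,
                                 const struct ctrl_pos *base)
    {
            return tier->refaulted * base->total * tier->gain >=
                   base->refaulted * tier->total * base->gain;
    }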
> > > >
> > > > I ended up using docker.io/mongodb/mongodb-community-server:latest,
> > > > and it's not working:
> > > >
> > > > # docker exec -it mongo-r1 mongosh --eval \
> > > >     '"rs.initiate({
> > > >       _id: "issa-tpcc_0",
> > > >       members: [
> > > >         {_id: 0, host: "mongo-r1"},
> > > >         {_id: 1, host: "mongo-r2"},
> > > >         {_id: 2, host: "mongo-r3"}
> > > >       ]
> > > >     })"'
> > > > Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
> > > > Error: can only create exec sessions on running containers: container
> > > > state improper
> > >
> > > Hi Yu,
> > >
> > > I've updated the test repo:
> > > https://github.com/ryncsn/emm-test-project/tree/master/mongo-cluster
> > >
> > > I've tested it on top of the latest Fedora Cloud Image 39 and it worked
> > > well for me; the README now contains detailed and easy-to-follow steps
> > > to reproduce this test.
> >
> > Thanks. I was following the instructions down to the letter and it
> > fell apart again at line 46 (./tpcc.py).
>
> I think you just broke it by
> https://github.com/ryncsn/py-tpcc/commit/7b9b380d636cb84faa5b11b5562e531f924eeb7e
>
> (But it's also possible you actually wanted me to use this latest
> commit but forgot to account for it in your instructions.)
>
> > Were you able to successfully run the benchmark on a fresh VM by
> > following the instructions? If not, I'd appreciate it if you could do
> > so and document all the missing steps.

Ah, you are right. I attempted to convert it to Python 3 but found it
only brought more trouble, so I gave up, and the instructions still use
Python 2. However, I accidentally pushed the WIP Python 3 conversion
commit... I've reset the repo to
https://github.com/ryncsn/py-tpcc/commit/86e862c5cf3b2d1f51e0297742fa837c7a99ebf8,
which is working well. Sorry for the inconvenience.