From mboxrd@z Thu Jan 1 00:00:00 1970
MIME-Version: 1.0
References: <20241207082931.1707465-1-mjguzik@gmail.com> <94d0dcbe-2001-4a9c-a767-b337b688b616@redhat.com> <606fbf9a-c9ba-4f08-a708-db38fe6065ce@redhat.com>
From: Yu Zhao
Date: Mon, 6 Jan 2025 21:41:51 -0700
Subject: Re: [PATCH] mm: remove an avoidable load of page refcount in page_ref_add_unless
To: David Hildenbrand , Mateusz Guzik
Cc: akpm@linux-foundation.org, willy@infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Content-Type: text/plain; charset="UTF-8"
On Mon, Dec 9, 2024 at 7:46 AM David Hildenbrand wrote:
>
> On 09.12.24 15:30, Mateusz Guzik wrote:
> > On Mon, Dec 9, 2024 at 3:22 PM David Hildenbrand wrote:
> >>
> >> On 09.12.24 13:33, Mateusz Guzik wrote:
> >>> That is to say I think this thread just about exhausted the time
> >>> warranted by this patch. No hard feelz if it gets dropped, but then I
> >>> do strongly suggest adding a justification to the extra load.
> >>
> >> Maybe it's sufficient for now to simply do your change with a comment:
> >>
> >> diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
> >> index 8c236c651d1d6..1efc992ad5687 100644
> >> --- a/include/linux/page_ref.h
> >> +++ b/include/linux/page_ref.h
> >> @@ -234,7 +234,13 @@ static inline bool page_ref_add_unless(struct page *page, int nr, int u)
> >>
> >>         rcu_read_lock();
> >>         /* avoid writing to the vmemmap area being remapped */
> >> -       if (!page_is_fake_head(page) && page_ref_count(page) != u)
> >> +       if (!page_is_fake_head(page))
> >> +               /*
> >> +                * atomic_add_unless() will currently never modify the value
> >> +                * if it already is u. If that ever changes, we'd have to have
> >> +                * a separate check here, such that we won't be writing to
> >> +                * write-protected vmemmap areas.
> >> +                */
> >>                 ret = atomic_add_unless(&page->_refcount, nr, u);
> >>         rcu_read_unlock();
> >>
> >>
> >> It would bail out during testing ... hopefully, such that we can detect any such change.
> >>
> >
> > Not my call to make, but looks good. ;)
> >
> > fwiw I don't need any credit and I would be more than happy if you
> > just submitted the thing as your own without me being mentioned. *No*
> > cc would also be appreciated.
>
> Likely Andrew can add the comment as a fixup.

Sorry for not getting back to you sooner: the page_ref_count() test
actually is a must; however, I didn't do it in the right order. I only
realized my carelessness after Will asked me about it here [1] -- I
just posted a fix here [2], CC'ing both of you. If any of you have
concerns about it, please let me know in that thread. Otherwise I'll
ask Andrew to drop this patch.

Thank you.

[1] https://lore.kernel.org/linux-mm/20241128142028.GA3506@willie-the-truck/
[2] https://lore.kernel.org/linux-mm/20250107043505.351925-1-yuzhao@google.com/