From mboxrd@z Thu Jan 1 00:00:00 1970
MIME-Version: 1.0
References: <20250505193034.91682-1-00107082@163.com>
In-Reply-To: <20250505193034.91682-1-00107082@163.com>
From: Suren Baghdasaryan
Date: Mon, 5 May 2025 13:32:40 -0700
Subject: Re: [PATCH v3] mm/codetag: move tag retrieval back upfront in __free_pages()
To: David Wang <00107082@163.com>
Cc: akpm@linux-foundation.org, vbabka@suse.cz, mhocko@suse.com, jackmanb@google.com, hannes@cmpxchg.org, ziy@nvidia.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

On Mon, May 5, 2025 at 12:31 PM David Wang <00107082@163.com> wrote:
>
> Commit 51ff4d7486f0 ("mm: avoid extra mem_alloc_profiling_enabled()
> checks") introduces a possible use-after-free scenario: when the page
> is non-compound, page[0] could be released by another thread right
> after put_page_testzero() fails in the current thread, and
> pgalloc_tag_sub_pages() afterwards would manipulate an invalid page
> when accounting the remaining
> pages:
>
> [timeline]  [thread1]                    [thread2]
>     |       alloc_page non-compound
>     V
>     |                                    get_page, ref counter inc
>     V
>     |       in ___free_pages
>     |       put_page_testzero fails
>     V
>     |                                    put_page, page released
>     V
>     |       in ___free_pages,
>     |       pgalloc_tag_sub_pages
>     |       manipulates an invalid page
>     V
>
> Restore __free_pages() to its previous behavior: retrieve the alloc
> tag upfront.
>
> Fixes: 51ff4d7486f0 ("mm: avoid extra mem_alloc_profiling_enabled() checks")
> Signed-off-by: David Wang <00107082@163.com>

Acked-by: Suren Baghdasaryan

Thanks!

> ---
>  include/linux/pgalloc_tag.h |  8 ++++++++
>  mm/page_alloc.c             | 15 ++++++---------
>  2 files changed, 14 insertions(+), 9 deletions(-)
>
> diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h
> index c74077977830..8a7f4f802c57 100644
> --- a/include/linux/pgalloc_tag.h
> +++ b/include/linux/pgalloc_tag.h
> @@ -188,6 +188,13 @@ static inline struct alloc_tag *__pgalloc_tag_get(struct page *page)
>         return tag;
>  }
>
> +static inline struct alloc_tag *pgalloc_tag_get(struct page *page)
> +{
> +       if (mem_alloc_profiling_enabled())
> +               return __pgalloc_tag_get(page);
> +       return NULL;
> +}
> +
>  void pgalloc_tag_split(struct folio *folio, int old_order, int new_order);
>  void pgalloc_tag_swap(struct folio *new, struct folio *old);
>
> @@ -199,6 +206,7 @@ static inline void clear_page_tag_ref(struct page *page) {}
>  static inline void alloc_tag_sec_init(void) {}
>  static inline void pgalloc_tag_split(struct folio *folio, int old_order, int new_order) {}
>  static inline void pgalloc_tag_swap(struct folio *new, struct folio *old) {}
> +static inline struct alloc_tag *pgalloc_tag_get(struct page *page) { return NULL; }
>
>  #endif /* CONFIG_MEM_ALLOC_PROFILING */
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 5669baf2a6fe..1b00e14a9780 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1151,14 +1151,9 @@ static inline void pgalloc_tag_sub(struct page *page, unsigned int nr)
>         __pgalloc_tag_sub(page, nr);
>  }
>
> -static inline void pgalloc_tag_sub_pages(struct page *page, unsigned int nr)
> +/* When tag is not NULL, assuming mem_alloc_profiling_enabled */
> +static inline void pgalloc_tag_sub_pages(struct alloc_tag *tag, unsigned int nr)
>  {
> -       struct alloc_tag *tag;
> -
> -       if (!mem_alloc_profiling_enabled())
> -               return;
> -
> -       tag = __pgalloc_tag_get(page);
>         if (tag)
>                 this_cpu_sub(tag->counters->bytes, PAGE_SIZE * nr);
>  }
> @@ -1168,7 +1163,7 @@ static inline void pgalloc_tag_sub_pages(struct page *page, unsigned int nr)
>
>  static inline void pgalloc_tag_add(struct page *page, struct task_struct *task,
>                                    unsigned int nr) {}
>  static inline void pgalloc_tag_sub(struct page *page, unsigned int nr) {}
> -static inline void pgalloc_tag_sub_pages(struct page *page, unsigned int nr) {}
> +static inline void pgalloc_tag_sub_pages(struct alloc_tag *tag, unsigned int nr) {}
>
>  #endif /* CONFIG_MEM_ALLOC_PROFILING */
>
> @@ -5065,11 +5060,13 @@ static void ___free_pages(struct page *page, unsigned int order,
>  {
>         /* get PageHead before we drop reference */
>         int head = PageHead(page);
> +       /* get alloc tag in case the page is released by others */
> +       struct alloc_tag *tag = pgalloc_tag_get(page);
>
>         if (put_page_testzero(page))
>                 __free_frozen_pages(page, order, fpi_flags);
>         else if (!head) {
> -               pgalloc_tag_sub_pages(page, (1 << order) - 1);
> +               pgalloc_tag_sub_pages(tag, (1 << order) - 1);
>                 while (order-- > 0)
>                         __free_frozen_pages(page + (1 << order), order,
>                                             fpi_flags);
> --
> 2.39.2
>