Subject: Re: [PATCH 5.10.y 01/11] mm: memcontrol: Use helpers to read page's memcg data
From: Chen Huang <chenhuang5@huawei.com>
Date: Thu, 19 Aug 2021 19:43:37 +0800
To: Greg Kroah-Hartman, Roman Gushchin
CC: Muchun Song, Wang Hai, Andrew Morton, Alexei Starovoitov
Message-ID: <9e946879-8a6e-6b86-9d8b-54a17976c6be@huawei.com>
References: <20210816072147.3481782-1-chenhuang5@huawei.com> <20210816072147.3481782-2-chenhuang5@huawei.com> <0d3c6aa4-be05-3c93-bdcd-ac30788d82bd@huawei.com>
On 2021/8/17 14:14, Greg Kroah-Hartman wrote:
> On Tue, Aug 17, 2021 at 09:45:00AM +0800, Chen Huang wrote:
>>
>>
>> On 2021/8/16 21:35, Greg Kroah-Hartman wrote:
>>> On Mon, Aug 16, 2021 at 09:21:11PM +0800, Chen Huang wrote:
>>>>
>>>>
>>>> On 2021/8/16 16:34, Greg Kroah-Hartman wrote:
>>>>> On Mon, Aug 16, 2021 at 07:21:37AM +0000, Chen Huang wrote:
>>>>>> From: Roman Gushchin
>>>>>
>>>>> What is the git commit id of this patch in Linus's tree?
>>>>>
>>>>>>
>>>>>> Patch series "mm: allow mapping accounted kernel pages to userspace", v6.
>>>>>>
>>>>>> Currently a non-slab kernel page which has been charged to a memory cgroup
>>>>>> can't be mapped to userspace.  The underlying reason is simple: PageKmemcg
>>>>>> flag is defined as a page type (like buddy, offline, etc), so it takes a
>>>>>> bit from a page->mapped counter.  Pages with a type set can't be mapped to
>>>>>> userspace.
>>>>>>
>>>>>> But in general the kmemcg flag has nothing to do with mapping to
>>>>>> userspace.  It only means that the page has been accounted by the page
>>>>>> allocator, so it has to be properly uncharged on release.
>>>>>>
>>>>>> Some bpf maps are mapping the vmalloc-based memory to userspace, and their
>>>>>> memory can't be accounted because of this implementation detail.
>>>>>>
>>>>>> This patchset removes this limitation by moving the PageKmemcg flag into
>>>>>> one of the free bits of the page->mem_cgroup pointer.  Also it formalizes
>>>>>> accesses to the page->mem_cgroup and page->obj_cgroups using new helpers,
>>>>>> adds several checks and removes a couple of obsolete functions.  As the
>>>>>> result the code became more robust with fewer open-coded bit tricks.
>>>>>>
>>>>>> This patch (of 4):
>>>>>>
>>>>>> Currently there are many open-coded reads of the page->mem_cgroup pointer,
>>>>>> as well as a couple of read helpers, which are barely used.
>>>>>>
>>>>>> It creates an obstacle on a way to reuse some bits of the pointer for
>>>>>> storing additional bits of information.  In fact, we already do this for
>>>>>> slab pages, where the last bit indicates that a pointer has an attached
>>>>>> vector of objcg pointers instead of a regular memcg pointer.
>>>>>>
>>>>>> This commit uses 2 existing helpers and introduces a new helper to
>>>>>> convert all read sides to calls of these helpers:
>>>>>>   struct mem_cgroup *page_memcg(struct page *page);
>>>>>>   struct mem_cgroup *page_memcg_rcu(struct page *page);
>>>>>>   struct mem_cgroup *page_memcg_check(struct page *page);
>>>>>>
>>>>>> page_memcg_check() is intended to be used in cases when the page can be a
>>>>>> slab page and have a memcg pointer pointing at objcg vector.  It does
>>>>>> check the lowest bit, and if set, returns NULL.  page_memcg() contains a
>>>>>> VM_BUG_ON_PAGE() check for the page not being a slab page.
>>>>>>
>>>>>> To make sure nobody uses a direct access, struct page's
>>>>>> mem_cgroup/obj_cgroups is converted to unsigned long memcg_data.
>>>>>>
>>>>>> Signed-off-by: Roman Gushchin
>>>>>> Signed-off-by: Andrew Morton
>>>>>> Signed-off-by: Alexei Starovoitov
>>>>>> Reviewed-by: Shakeel Butt
>>>>>> Acked-by: Johannes Weiner
>>>>>> Acked-by: Michal Hocko
>>>>>> Link: https://lkml.kernel.org/r/20201027001657.3398190-1-guro@fb.com
>>>>>> Link: https://lkml.kernel.org/r/20201027001657.3398190-2-guro@fb.com
>>>>>> Link: https://lore.kernel.org/bpf/20201201215900.3569844-2-guro@fb.com
>>>>>>
>>>>>> Conflicts:
>>>>>>     mm/memcontrol.c
>>>>>
>>>>> The "Conflicts:" lines should be removed.
>>>>>
>>>>> Please fix up the patch series and resubmit.  But note, this seems
>>>>> really intrusive, are you sure these are all needed?
>>>>>
>>>>
>>>> OK, I will resend the patchset.
>>>> Roman Gushchin's patchset formalizes accesses to the page->mem_cgroup and
>>>> page->obj_cgroups.
>>>> But for LRU pages and most other raw memcg users, the field may
>>>> pin a memcg pointer while it actually holds an object cgroup
>>>> pointer.  That's the problem I met, and Muchun Song's patchset fixes this.
>>>> So I think these are all needed.
>>>
>>> What in-tree driver causes this to happen and under what workload?
>>>
>>>>> What UIO driver are you using that is showing problems like this?
>>>>>
>>>>
>>>> The UIO driver is my own driver, and its creation looks like this:
>>>> First, we register a device:
>>>>     pdev = platform_device_register_simple("uio_driver", 0, NULL, 0);
>>>> and use uio_info to describe the UIO driver; the page is allocated and used
>>>> for uio_vma_fault:
>>>>     info->mem[0].addr = (phys_addr_t) kzalloc(PAGE_SIZE, GFP_ATOMIC);
>>>
>>> That is not a physical address, and is not what the uio api is for at
>>> all.  Please do not abuse it that way.
>>>
>>>> then we register the UIO driver:
>>>>     uio_register_device(&pdev->dev, info);
>>>
>>> So no in-tree drivers are having problems with the existing code, only
>>> fake ones?
>>
>> Yes, but the nullptr problem may not just be about the uio driver.  For now, struct page
>> has a union:
>>     union {
>>         struct mem_cgroup *mem_cgroup;
>>         struct obj_cgroup **obj_cgroups;
>>     };
>> For slab pages, the union field should be interpreted as obj_cgroups, and for user
>> pages as mem_cgroup.  When a slab page changes its obj_cgroups, another
>> user page in the same compound page as that slab page will get the
>> wrong mem_cgroup in __mod_lruvec_page_state(), and will trigger a nullptr
>> dereference in mem_cgroup_lruvec().  Correct me if I'm wrong.  Thanks!
>
> And how can that be triggered by a user in the 5.10.y kernel tree at the
> moment?
>
> I'm all for fixing problems, but this one does not seem like it is an
> actual issue for the 5.10 tree right now.  Am I missing something?
>
> thanks,
>

Sorry, it may just be a problem with my own driver.  Please ignore the patchset.
Thanks!

> greg k-h
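The memcg_data scheme discussed in this thread can be sketched in plain C.  The names below (`memcg_check`, `MEMCG_DATA_OBJCGS`, the two struct types) mirror the kernel's but are illustrative stand-ins, not the kernel implementation: the old anonymous union gives a reader no way to tell which member is live, while tagging the lowest bit of an `unsigned long` records that information in the field itself.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for the kernel types; not kernel code. */
struct mem_cgroup { int id; };
struct obj_cgroup { int id; };

/* Old layout: nothing in the union itself says whether the bits are a
 * memcg pointer or an objcg vector; the reader must already know. */
union old_page_cgroup_data {
	struct mem_cgroup *mem_cgroup;   /* live for LRU/user pages */
	struct obj_cgroup **obj_cgroups; /* live for slab pages     */
};

/* New layout: one unsigned long whose lowest bit marks an objcg
 * vector.  Pointers are at least word-aligned, so that bit is
 * otherwise always zero and is free to carry the tag. */
#define MEMCG_DATA_OBJCGS 0x1UL

/* Sketch of the page_memcg_check() idea: return the memcg pointer,
 * or NULL when the low bit says the field holds an objcg vector. */
static struct mem_cgroup *memcg_check(unsigned long memcg_data)
{
	if (memcg_data & MEMCG_DATA_OBJCGS)
		return NULL;
	return (struct mem_cgroup *)memcg_data;
}
```

With the old union, storing an objcg vector and then reading the `mem_cgroup` member silently reinterprets the same bits, which is the aliasing hazard described above; with the tagged field, the mistaken read is caught and turned into NULL.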