Subject: Re: [PATCH -next 1/7] mm: huge_memory: make __do_huge_pmd_anonymous_page() to take a folio
To: Matthew Wilcox
Date: Mon, 16 Jan 2023 19:09:19 +0800
Message-ID: <6f1c8530-621c-c018-780f-60beb9054a7b@huawei.com>
References: <20230112083006.163393-1-wangkefeng.wang@huawei.com>
 <20230112083006.163393-2-wangkefeng.wang@huawei.com>
From: Kefeng Wang <wangkefeng.wang@huawei.com>

On 2023/1/13 22:25, Matthew Wilcox wrote:
> On Thu, Jan 12, 2023 at 04:30:00PM +0800, Kefeng Wang wrote:
>> Let __do_huge_pmd_anonymous_page() take a folio and convert related
>> functions to use folios.
>
> No, this is actively wrong!  Andrew, please drop this patch.
>
> If we want to support folio sizes larger than PMD size (and I think we
> do), we need to be able to specify precisely which page in the folio
> is to be stored at this PTE.  The *interface* must remain struct page.
> We can convert from page to folio within the function, but we *MUST NOT*
> go the other way.

Got it.

>>  static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
>> -			struct page *page, gfp_t gfp)
>> +			struct folio *folio, gfp_t gfp)
>>  {
>>  	struct vm_area_struct *vma = vmf->vma;
>> +	struct page *page = &folio->page;
>
> ... ie this is bad and wrong.
>
>> @@ -834,7 +835,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
>>  		count_vm_event(THP_FAULT_FALLBACK);
>>  		return VM_FAULT_FALLBACK;
>>  	}
>> -	return __do_huge_pmd_anonymous_page(vmf, &folio->page, gfp);
>> +	return __do_huge_pmd_anonymous_page(vmf, folio, gfp);
>>  }
>>
>>  static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
>
> A reasonable person might ask "But Matthew, you allocated a folio here,
> then you're converting back to a struct page to call
> __do_huge_pmd_anonymous_page() so isn't this a sensible patch?"

Yes, that was exactly my reasoning: I changed the parameter from page to
folio to avoid going back and forth between page and folio.

> And I would say "still no".  This is a question of interfaces, and
> even though __do_huge_pmd_anonymous_page() is static and has precisely
> one caller today that always allocates a folio of precisely PMD size,
> I suspect it will need to be more visible in the future and the
> conversion of the interface from page to folio misleads people.

OK, I will keep struct page as the parameter of
__do_huge_pmd_anonymous_page().
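
To make sure I understand the rule, a rough sketch of the page-based
interface (illustration only, not the real mm/huge_memory.c code: the
sketch_* names are made up, while page_folio(), folio_page() and
vma_alloc_folio() are the real helpers; the internal fault logic is
elided):

#include <linux/mm.h>
#include <linux/huge_mm.h>

/*
 * Sketch only: the callee takes a specific struct page and derives the
 * folio internally via page_folio(). With folios larger than PMD size,
 * "page" also identifies which PMD-sized chunk of the folio to map,
 * which a bare folio argument could not express.
 */
static vm_fault_t sketch_huge_pmd_anon_page(struct vm_fault *vmf,
		struct page *page, gfp_t gfp)
{
	struct folio *folio = page_folio(page);	/* page -> folio: fine */

	/* ... charge the folio and map the PMD range starting at page ... */
	return VM_FAULT_FALLBACK;	/* placeholder for the sketch */
}

static vm_fault_t sketch_caller(struct vm_fault *vmf, gfp_t gfp)
{
	/* The caller still allocates a whole folio... */
	struct folio *folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER,
					      vmf->vma, vmf->address, true);

	if (!folio)
		return VM_FAULT_FALLBACK;
	/* ...but hands over the precise page to store at this entry. */
	return sketch_huge_pmd_anon_page(vmf, folio_page(folio, 0), gfp);
}

i.e. the caller picks the page, and the page-to-folio conversion only
ever happens inside the callee, never the other way across the interface.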