Subject: Re: [PATCH 1/2] mm: move mm counter updating out of set_pte_range()
Date: Fri, 12 Apr 2024 10:33:02 +0800
Message-ID: <53922b32-1bba-45f1-8f4a-5891a74233fa@huawei.com>
To: Andrew Morton
(Oracle)" , , References: <20240412025704.53245-1-wangkefeng.wang@huawei.com> <20240412025704.53245-2-wangkefeng.wang@huawei.com> From: Kefeng Wang In-Reply-To: <20240412025704.53245-2-wangkefeng.wang@huawei.com> Content-Type: text/plain; charset="UTF-8"; format=flowed Content-Transfer-Encoding: 7bit X-Originating-IP: [10.174.177.243] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpemm100001.china.huawei.com (7.185.36.93) X-Rspam-User: X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: BFAC516000B X-Stat-Signature: 5uka7uz5rdrhnxocera18tsm8s95de1q X-HE-Tag: 1712889186-286112 X-HE-Meta: U2FsdGVkX1978e5Txlc4PuhIVrs0IgsnLh2oI3nSrM4EB03e14EPlrxE2uwO/gUQNNbSNaKxKoYlIPQla3xxdsHtVPvXEGBV8omFtCv6i6qbpjqMuX12rPMei3EiNmeXJdDtzkfPBfxWyMwkx/4vV0iJLmvRvDAqpX2SrJ1jrJmJWIrcMXcyVWM9crJCseJxSLQRwP5HyUrTfNIICOHYkZ+cC3D++FZ5NnrxS9hwRZMRqfa+B/BdNdVj8wGDdAy+FYjJJnJcbN2zCgeWyeX0IGsmWLlYCLoGGu9vz+5r3jzSj0Eb2H25tcuNHLO7pq51wGkwz9guqDoFq6cgia1ioLCdv3Ibo/ra2bkJ7+09K7Htgm0fKkXQ816WPzqu4szQvBJgAVYZdc2TZrSjnu5/VDFbtXCPdhDL0poA79L1pD+61MBWR3yz00gAMGwHU9jLuSNoDQmcXq5mY4+dNeY4R3PVDLJYWyo6cyAua+uNCDsRTNt9XOObcnb7h6OT0GAZDiiDKkMxgRZOv4sil8NklrbEvdwnRcUhna8CRWJomOJJUrCdecGWL6mc4pkvSAf+7K0Tg+QmqwGCshynZLEbvPmyV90hKQ5YDoUsbQd/GJ3xYK6OnExIo1RHe4XGJFOC2VEL+E8s5rqyG0yd9FD/8GAHYozsqNoHMX1+g/hOqehneRF3oNTWW0JKs2IAKmxVYQg5zfWmh5jI9joWmM2nnNhOdhIjyGRQ2eA20iiyh+91283U7E+dLBIGfJX4BRlnc51DCLqXGGcI2tV/x94zq5V3DqlmkZ8K5Dfzz2xDnsppWmWbbtAHkjGPndKamhodCRNZUoukmrwajSvPShtRE+UwE5t2mCzn3s4O+6PU4b1pEoNcty/okOlR2FLvK9sTKS//sf+rKMw= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: On 2024/4/12 10:57, Kefeng Wang wrote: > In order to support batch mm counter updating in filemap_map_pages(), > move mm counter updating out of set_pte_range(), the folios are file > from filemap, and distinguish folios type by vmf->flags and vma->vm_flags > from another caller finish_fault(). 
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>  mm/filemap.c | 4 ++++
>  mm/memory.c  | 8 +++++---
>  2 files changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 92e2d43e4c9d..04b813f0146c 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -3540,6 +3540,8 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
>  skip:
>  	if (count) {
>  		set_pte_range(vmf, folio, page, count, addr);
> +		add_mm_counter(vmf->vma->vm_mm, mm_counter_file(folio),
> +			       count);
>  		folio_ref_add(folio, count);
>  		if (in_range(vmf->address, addr, count * PAGE_SIZE))
>  			ret = VM_FAULT_NOPAGE;
> @@ -3554,6 +3556,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
>
>  	if (count) {
>  		set_pte_range(vmf, folio, page, count, addr);
> +		add_mm_counter(vmf->vma->vm_mm, mm_counter_file(folio), count);
>  		folio_ref_add(folio, count);
>  		if (in_range(vmf->address, addr, count * PAGE_SIZE))
>  			ret = VM_FAULT_NOPAGE;
> @@ -3590,6 +3593,7 @@ static vm_fault_t filemap_map_order0_folio(struct vm_fault *vmf,
>  		ret = VM_FAULT_NOPAGE;
>
>  	set_pte_range(vmf, folio, page, 1, addr);
> +	add_mm_counter(vmf->vma->vm_mm, mm_counter_file(folio), 1);
>  	folio_ref_inc(folio);
>
>  	return ret;
> diff --git a/mm/memory.c b/mm/memory.c
> index 78422d1c7381..69bc63a5d6c8 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4685,12 +4685,10 @@ void set_pte_range(struct vm_fault *vmf, struct folio *folio,
>  		entry = pte_mkuffd_wp(entry);
>  	/* copy-on-write page */
>  	if (write && !(vma->vm_flags & VM_SHARED)) {
> -		add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr);
>  		VM_BUG_ON_FOLIO(nr != 1, folio);
>  		folio_add_new_anon_rmap(folio, vma, addr);
>  		folio_add_lru_vma(folio, vma);
>  	} else {
> -		add_mm_counter(vma->vm_mm, mm_counter_file(folio), nr);
>  		folio_add_file_rmap_ptes(folio, page, nr, vma);
>  	}
>  	set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr);
> @@ -4727,9 +4725,11 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
>  	struct vm_area_struct *vma = vmf->vma;
>  	struct page *page;
>  	vm_fault_t ret;
> +	int is_cow = (vmf->flags & FAULT_FLAG_WRITE) &&
> +		     !(vma->vm_flags & VM_SHARED);

oops, bool is enough.

>
>  	/* Did we COW the page? */
> -	if ((vmf->flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED))
> +	if (is_cow)
>  		page = vmf->cow_page;
>  	else
>  		page = vmf->page;
> @@ -4765,8 +4765,10 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
>  	/* Re-check under ptl */
>  	if (likely(!vmf_pte_changed(vmf))) {
>  		struct folio *folio = page_folio(page);
> +		int type = is_cow ? MM_ANONPAGES : mm_counter_file(folio);
>
>  		set_pte_range(vmf, folio, page, 1, vmf->address);
> +		add_mm_counter(vma->vm_mm, type, 1);
>  		ret = 0;
>  	} else {
>  		update_mmu_tlb(vma, vmf->address, vmf->pte);
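
To be explicit about the "oops, bool is enough" note above: the only change
needed would presumably be the declaration, something like (sketch only):

	bool is_cow = (vmf->flags & FAULT_FLAG_WRITE) &&
		      !(vma->vm_flags & VM_SHARED);

The "int type = is_cow ? MM_ANONPAGES : mm_counter_file(folio);" line can
stay an int, since MM_ANONPAGES and the value returned by mm_counter_file()
are both just indices into the per-mm rss counters.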