Date: Tue, 27 Dec 2016 11:17:56 +0100
From: Michal Hocko
Subject: Re: [PATCH v4] mm: pmd dirty emulation in page fault handler
Message-ID: <20161227101755.GD1308@dhcp22.suse.cz>
In-Reply-To: <1482506098-6149-1-git-send-email-minchan@kernel.org>
References: <1482506098-6149-1-git-send-email-minchan@kernel.org>
To: Minchan Kim
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Andreas Schwab, Jason Evans, Will Deacon, Catalin Marinas,
    linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    "[4.5+]", "Kirill A. Shutemov"

On Sat 24-12-16 00:14:58, Minchan Kim wrote:
> Andreas reported [1] that a test in jemalloc hangs in THP mode on arm64:
> http://lkml.kernel.org/r/mvmmvfy37g1.fsf@hawking.suse.de
> 
> The problem is that the page fault handler currently does not support
> dirty-bit emulation of the pmd on architectures without a hardware
> dirty bit, so the application gets stuck until the VM marks the pmd
> dirty.
> 
> How the emulation works depends on the architecture. In the case of
> arm64, when a pte is first set up it is marked PTE_RDONLY, so that a
> later store access triggers a page fault and gives the kernel a chance
> to mark the pte dirty. Once the page fault occurs, the VM marks the pmd
> dirty and the arch code for setting the pmd clears PTE_RDONLY so the
> application can proceed.
> 
> IOW, if the VM does not mark the pmd dirty, the application hangs
> forever in repeated faults (i.e. the store keeps faulting because the
> pmd stays PTE_RDONLY).
> 
> This patch enables pmd dirty-bit emulation for those architectures.

Thanks for extending the patch description again!

> [1] b8d3c4c3009d, mm/huge_memory.c: don't split THP page when MADV_FREE syscall is called
> 
> Cc: Jason Evans
> Cc: Michal Hocko
> Cc: Will Deacon
> Cc: Catalin Marinas
> Cc: linux-arch@vger.kernel.org
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: [4.5+]
> Fixes: b8d3c4c3009d ("mm/huge_memory.c: don't split THP page when MADV_FREE syscall is called")
> Reported-and-Tested-by: Andreas Schwab
> Acked-by: Kirill A. Shutemov
> Signed-off-by: Minchan Kim

Acked-by: Michal Hocko

> ---
> Merry Xmas!
> 
> * from v3
>  * Elaborate description
> * from v2
>  * Add acked-by/tested-by
> * from v1
>  * Remove __handle_mm_fault part - Kirill
> 
>  mm/huge_memory.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 10eedbf..29ec8a4 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -883,15 +883,17 @@ void huge_pmd_set_accessed(struct vm_fault *vmf, pmd_t orig_pmd)
>  {
>  	pmd_t entry;
>  	unsigned long haddr;
> +	bool write = vmf->flags & FAULT_FLAG_WRITE;
>  
>  	vmf->ptl = pmd_lock(vmf->vma->vm_mm, vmf->pmd);
>  	if (unlikely(!pmd_same(*vmf->pmd, orig_pmd)))
>  		goto unlock;
>  
>  	entry = pmd_mkyoung(orig_pmd);
> +	if (write)
> +		entry = pmd_mkdirty(entry);
>  	haddr = vmf->address & HPAGE_PMD_MASK;
> -	if (pmdp_set_access_flags(vmf->vma, haddr, vmf->pmd, entry,
> -				  vmf->flags & FAULT_FLAG_WRITE))
> +	if (pmdp_set_access_flags(vmf->vma, haddr, vmf->pmd, entry, write))
>  		update_mmu_cache_pmd(vmf->vma, vmf->address, vmf->pmd);
>  
>  unlock:
> -- 
> 2.7.4
> 

-- 
Michal Hocko
SUSE Labs
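
For readers unfamiliar with software dirty-bit tracking, the failure mode
described in the changelog can be modeled outside the kernel. The sketch
below is purely illustrative and is not kernel code: the fake_pmd struct,
the SW_RDONLY/SW_DIRTY flags and the helper names are invented for the
example. It only demonstrates the idea quoted above, namely that the arch
code keeps the entry read-only until the generic fault path marks it
dirty, so a write fault that never sets the dirty bit simply faults again.

/*
 * Illustrative user-space model of pmd dirty-bit emulation.
 * NOT kernel code: all names here are made up for the example; they only
 * mirror the behaviour described in the patch changelog.
 */
#include <stdbool.h>
#include <stdio.h>

#define SW_RDONLY 0x1	/* models arm64's PTE_RDONLY used for emulation */
#define SW_DIRTY  0x2	/* software dirty bit maintained by the "VM"    */

struct fake_pmd {
	unsigned int flags;
};

/* Models the arch helper: read-only is only cleared once the entry is dirty. */
static void arch_set_pmd(struct fake_pmd *pmd)
{
	if (pmd->flags & SW_DIRTY)
		pmd->flags &= ~SW_RDONLY;
	else
		pmd->flags |= SW_RDONLY;
}

/*
 * Loosely models the role of huge_pmd_set_accessed() in this scenario:
 * with "fixed" the generic code marks the entry dirty on a write fault
 * before the arch helper runs, so the next store no longer traps; without
 * it, the entry stays read-only (the reported hang).
 */
static void handle_fault(struct fake_pmd *pmd, bool write, bool fixed)
{
	if (write && fixed)
		pmd->flags |= SW_DIRTY;
	arch_set_pmd(pmd);
}

/* Returns true if the store succeeds, false if it traps. */
static bool store(struct fake_pmd *pmd)
{
	return !(pmd->flags & SW_RDONLY);
}

int main(void)
{
	struct fake_pmd pmd = { .flags = SW_RDONLY };
	int tries;

	/* With the fix: the first write fault marks the entry dirty. */
	for (tries = 0; tries < 3 && !store(&pmd); tries++)
		handle_fault(&pmd, /*write=*/true, /*fixed=*/true);
	printf("with the fix:    store %s after %d fault(s)\n",
	       store(&pmd) ? "succeeds" : "still traps", tries);

	/* Without the fix: capped at 3 retries so the demo does not hang. */
	pmd.flags = SW_RDONLY;
	for (tries = 0; tries < 3 && !store(&pmd); tries++)
		handle_fault(&pmd, /*write=*/true, /*fixed=*/false);
	printf("without the fix: store %s after %d fault(s)\n",
	       store(&pmd) ? "succeeds" : "still traps", tries);

	return 0;
}

Building and running this (e.g. gcc -Wall model.c && ./a.out) prints that
the store succeeds after a single fault with the fix and keeps trapping
without it, which is the loop the application was stuck in.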