From: Nadav Amit <nadav.amit@gmail.com>
To: "Huang, Ying"
Cc: Alistair Popple, Peter Xu, huang ying, Linux MM, Andrew Morton, LKML,
    "Sierra Guiza, Alejandro (Alex)", Felix Kuehling, Jason Gunthorpe,
    John Hubbard, David Hildenbrand, Ralph Campbell, Matthew Wilcox,
    Karol Herbst, Lyude Paul, Ben Skeggs, Logan Gunthorpe,
    paulus@ozlabs.org, linuxppc-dev@lists.ozlabs.org, stable@vger.kernel.org
Subject: Re: [PATCH v2 1/2] mm/migrate_device.c: Copy pte dirty bit to page
Date: Wed, 17 Aug 2022 02:41:19 -0700
Message-Id: <1D2FB37E-831B-445E-ADDC-C1D3FF0425C1@gmail.com>
In-Reply-To: <87tu6bbaq7.fsf@yhuang6-desk2.ccr.corp.intel.com>
References: <6e77914685ede036c419fa65b6adc27f25a6c3e9.1660635033.git-series.apopple@nvidia.com>
 <871qtfvdlw.fsf@nvdebian.thelocal>
 <87o7wjtn2g.fsf@nvdebian.thelocal>
 <87tu6bbaq7.fsf@yhuang6-desk2.ccr.corp.intel.com>

On Aug 17, 2022, at 12:17 AM, Huang, Ying wrote:

> Alistair Popple writes:
>
>> Peter Xu writes:
>>
>>> On Wed, Aug 17, 2022 at 11:49:03AM +1000, Alistair Popple wrote:
>>>> Peter Xu writes:
>>>>
>>>>> On Tue, Aug 16, 2022 at 04:10:29PM +0800, huang ying wrote:
>>>>>>> @@ -193,11 +194,10 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>>>>>>> 			bool anon_exclusive;
>>>>>>> 			pte_t swp_pte;
>>>>>>>
>>>>>>> +			flush_cache_page(vma, addr, pte_pfn(*ptep));
>>>>>>> +			pte = ptep_clear_flush(vma, addr, ptep);
>>>>>>
>>>>>> Although I think it's possible to batch the TLB flushing just before
>>>>>> unlocking PTL.  The current code looks correct.
>>>>>
>>>>> If we're with unconditionally ptep_clear_flush(), does it mean we should
>>>>> probably drop the "unmapped" and the last flush_tlb_range() already since
>>>>> they'll be redundant?
>>>>
>>>> This patch does that, unless I missed something?
>>>
>>> Yes it does.  Somehow I didn't read into the real v2 patch, sorry!
>>>
>>>>> If that'll need to be dropped, it looks indeed better to still keep the
>>>>> batch to me but just move it earlier (before unlock iiuc then it'll be
>>>>> safe), then we can keep using ptep_get_and_clear() afaiu but keep "pte"
>>>>> updated.
>>>>
>>>> I think we would also need to check should_defer_flush(). Looking at
>>>> try_to_unmap_one() there is this comment:
>>>>
>>>> 		if (should_defer_flush(mm, flags) && !anon_exclusive) {
>>>> 			/*
>>>> 			 * We clear the PTE but do not flush so potentially
>>>> 			 * a remote CPU could still be writing to the folio.
>>>> 			 * If the entry was previously clean then the
>>>> 			 * architecture must guarantee that a clear->dirty
>>>> 			 * transition on a cached TLB entry is written through
>>>> 			 * and traps if the PTE is unmapped.
>>>> 			 */
>>>>
>>>> And as I understand it we'd need the same guarantee here. Given
>>>> try_to_migrate_one() doesn't do batched TLB flushes either I'd rather
>>>> keep the code as consistent as possible between
>>>> migrate_vma_collect_pmd() and try_to_migrate_one(). I could look at
>>>> introducing TLB flushing for both in some future patch series.
>>>
>>> should_defer_flush() is TTU-specific code?
>>
>> I'm not sure, but I think we need the same guarantee here as mentioned
>> in the comment otherwise we wouldn't see a subsequent CPU write that
>> could dirty the PTE after we have cleared it but before the TLB flush.
>>
>> My assumption was should_defer_flush() would ensure we have that
>> guarantee from the architecture, but maybe there are alternate/better
>> ways of enforcing that?
>>
>>> IIUC the caller sets TTU_BATCH_FLUSH showing that tlb can be omitted since
>>> the caller will be responsible for doing it.  In migrate_vma_collect_pmd()
>>> iiuc we don't need that hint because it'll be flushed within the same
>>> function but just only after the loop of modifying the ptes.  Also it'll be
>>> with the pgtable lock held.
>>
>> Right, but the pgtable lock doesn't protect against HW PTE changes such
>> as setting the dirty bit so we need to ensure the HW does the right
>> thing here and I don't know if all HW does.
>
> This sounds sensible.  But I take a look at zap_pte_range(), and find
> that it appears that the implementation requires the PTE dirty bit to be
> write-through.  Do I miss something?
>
> Hi, Nadav, Can you help?

Sorry for joining the discussion late. I read most of this thread and I
hope I understand what you are asking me. So at the risk of rehashing or
repeating what you already know - here are my 2 cents. Feel free to ask me
again if I did not understand your questions:

1. ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH is currently x86 specific. There is a
recent patch that wants to use it for arm64 as well [1]. The assumption that
Alistair cited from the code (regarding should_defer_flush()) might not be
applicable to certain architectures (although most likely it is). I tried
to encapsulate the logic on whether an unflushed RO entry can become dirty
in an arch-specific function [2].
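To make that concrete, the hook I have in mind is roughly of the following
shape. The names below are made up for this email and are not what [2]
actually adds; the point is only that the "can a stale, unflushed entry
still be dirtied behind our back?" question is answered per architecture,
with a conservative default:

	/*
	 * Illustrative sketch only - these names are hypothetical.
	 *
	 * Answer the question: if we modified/cleared a PTE but deferred
	 * the TLB flush, can the CPU still set the dirty bit based on a
	 * stale cached translation, without re-reading (and trapping on)
	 * the PTE that is now in memory?
	 */
	#ifndef arch_unflushed_pte_can_become_dirty
	static inline bool arch_unflushed_pte_can_become_dirty(void)
	{
		return true;	/* conservative default: assume it can */
	}
	#endif

x86, as I understand it, could return false here: the dirty bit is set with
an atomic update of the PTE in memory, and the access traps if the PTE was
already cleared - which is exactly the guarantee the comment in
try_to_unmap_one() relies on.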
2. Having said all of that, using the logic of "flush if there are pending
TLB flushes for this mm" as done by UNMAP_TLB_FLUSH makes sense IMHO
(although I would have considered doing it in the finer granularity of
VMA/page-table as I proposed before and got a somewhat lukewarm response [3]).

3. There is no question that flushing after dropping the ptl is wrong. But
reading the thread, I think that you only focus on whether a dirty
indication might get lost. The problem, I think, is bigger, as it might also
cause correctness problems after concurrently removing mappings. What happens
if we get for a clean PTE something like:

	CPU0				CPU1
	----				----

	migrate_vma_collect_pmd()
	[ defer flush, release ptl ]
					madvise(MADV_DONTNEED)
					-> zap_pte_range()
					[ PTE not present;
					  mmu_gather not updated ]

					[ no flush; stale PTE in TLB ]

					[ page is still accessible ]

[ might apply to munmap(); I usually consider MADV_DONTNEED since it does
  not call mmap_write_lock() ]

4. Having multiple TLB flushing infrastructures makes all of these
discussions very complicated and unmaintainable. I need to convince myself
on every occasion (including this one) whether calls to
flush_tlb_batched_pending() and tlb_flush_pending() are needed or not.

What I would like to have [3] is a single infrastructure that gets a
"ticket" (the generation when the batching started), the old PTE and the
new PTE, and checks whether a TLB flush is needed based on the arch
behavior and the current TLB generation. If needed, it would update the
"ticket" to the new generation.
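Just to illustrate the shape of the interface - none of these names exist,
and the generation bookkeeping below is precisely the part that would have
to be worked out carefully:

	/*
	 * Hypothetical sketch, not an existing API.  @ticket is the TLB
	 * generation observed when batching started; mm_tlb_gen() stands
	 * in for however the mm's current generation would be read.
	 */
	static bool pte_change_may_defer_flush(struct mm_struct *mm, u64 *ticket,
					       pte_t oldpte, pte_t newpte)
	{
		u64 gen = mm_tlb_gen(mm);		/* hypothetical */

		/*
		 * Arch-specific: can the oldpte -> newpte transition be
		 * deferred at all (see the hook sketched above)?
		 */
		if (arch_unflushed_pte_can_become_dirty())
			return false;	/* caller must flush right away */

		/* Remember the generation this deferred change relies on. */
		if (*ticket < gen)
			*ticket = gen;
		return true;		/* safe to batch the flush */
	}

The caller would take a ticket when it starts batching, run every PTE
change through a helper like this, and do a single flush before releasing
the ptl if anything was deferred, instead of every call site open-coding
its own reasoning about flush_tlb_batched_pending() and friends.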
I = tried to encapsulate the logic on whether an unflushed RO entry can become = dirty in an arch specific function [2]. 2. Having said all of that, using the logic of =E2=80=9Cflush if there = are pending TLB flushes for this mm=E2=80=9D as done by UNMAP_TLB_FLUSH makes sense = IMHO (although I would have considered doing it in finer granularity of VMA/page-table as I proposed before and got somewhat lukewarm response = [3]). 3. There is no question that flushing after dropping the ptl is wrong. = But reading the thread, I think that you only focus on whether a dirty indication might get lost. The problem, I think, is bigger, as it might = also cause correction problems after concurrently removing mappings. What = happens if we get for a clean PTE something like: CPU0 CPU1 ---- ---- migrate_vma_collect_pmd() [ defer flush, release ptl ] madvise(MADV_DONTNEED) -> zap_pte_range() [ PTE not present; mmu_gather not updated ] =09 [ no flush; stale PTE in TLB ] [ page is still accessible ] [ might apply to munmap(); I usually regard MADV_DONTNEED since it does not call mmap_write_lock() ] 4. Having multiple TLB flushing infrastructures makes all of these discussions very complicated and unmaintainable. I need to convince = myself in every occasion (including this one) whether calls to flush_tlb_batched_pending() and tlb_flush_pending() are needed or not. What I would like to have [3] is a single infrastructure that gets a =E2=80=9Cticket=E2=80=9D (generation when the batching started), the old = PTE and the new PTE and checks whether a TLB flush is needed based on the arch behavior and = the current TLB generation. If needed, it would update the =E2=80=9Cticket=E2=80= =9D to the new generation. Andy wanted a ring for pending TLB flushes, but I think it = is an overkill with more overhead and complexity than needed. But the current situation in which every TLB flush is a basis for long discussions and prone to bugs is impossible. I hope it helps. Let me know if you want me to revive the patch-set or = other feedback. [1] = https://lore.kernel.org/all/20220711034615.482895-5-21cnbao@gmail.com/ [2] https://lore.kernel.org/all/20220718120212.3180-13-namit@vmware.com/ [3] = https://lore.kernel.org/all/20210131001132.3368247-16-namit@vmware.com/