Message-ID: <1947fb8d-a307-ac47-a66b-d2dcdce9e850@collabora.com>
Date: Thu, 16 Mar 2023 10:17:10 +0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.7.2
Cc: Muhammad Usama Anjum, David Hildenbrand, Andrew Morton,
 Michał Mirosław, Andrei Vagin, Danylo Mocherniuk, Paul Gofman,
 Cyrill Gorcunov, Mike Rapoport, Nadav Amit, Alexander Viro, Shuah Khan,
 Christian Brauner, Yang Shi, Vlastimil Babka, "Liam R. Howlett", Yun Zhou,
 Suren Baghdasaryan, Alex Sierra, Matthew Wilcox, Pasha Tatashin,
 Axel Rasmussen, "Gustavo A. R. Silva", Dan Williams,
 linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-kselftest@vger.kernel.org, Greg KH,
 kernel@collabora.com
Subject: Re: [PATCH v11 4/7] fs/proc/task_mmu: Implement IOCTL to get and
 optionally clear info about PTEs
Content-Language: en-US
To: Peter Xu
References: <20230309135718.1490461-1-usama.anjum@collabora.com>
 <20230309135718.1490461-5-usama.anjum@collabora.com>
 <3d2d1ba4-bfab-6b3d-f0d6-ae0920ebdcb0@collabora.com>
From: Muhammad Usama Anjum <usama.anjum@collabora.com>
In-Reply-To:
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 3/16/23 12:53 AM, Peter Xu wrote:
> On Wed, Mar 15, 2023 at 09:54:40PM +0500, Muhammad Usama Anjum wrote:
>> On 3/15/23 8:55 PM, Peter Xu wrote:
>>> On Thu, Mar 09, 2023 at 06:57:15PM +0500, Muhammad Usama Anjum wrote:
>>>> +        for (addr = start; !ret && addr < end; pte++, addr += PAGE_SIZE) {
>>>> +                pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
>>>> +
>>>> +                is_writ = !is_pte_uffd_wp(*pte);
>>>> +                is_file = vma->vm_file;
>>>> +                is_pres = pte_present(*pte);
>>>> +                is_swap = is_swap_pte(*pte);
>>>> +
>>>> +                pte_unmap_unlock(pte, ptl);
>>>> +
>>>> +                ret = pagemap_scan_output(is_writ, is_file, is_pres, is_swap,
>>>> +                                          p, addr, 1);
>>>> +                if (ret)
>>>> +                        break;
>>>> +
>>>> +                if (PM_SCAN_OP_IS_WP(p) && is_writ &&
>>>> +                    uffd_wp_range(walk->mm, vma, addr, PAGE_SIZE, true) < 0)
>>>> +                        ret = -EINVAL;
>>>> +        }
>>>
>>> This is not really atomic..
>>>
>>> Taking the spinlock for each pte is not only overkill but wrong in
>>> atomicity, because the pte can change right after the spinlock is
>>> released.
>> Let me explain. It seems wrong, but it isn't. In my rigorous testing,
>> it didn't show any side-effect. Here we are finding out whether a page
>> has been written to. Only if the page has been written to do we clear
>> (write-protect) it. Let's look at the different possibilities here:
>> - If a page isn't written to, we don't clear it.
>> - If a page is written to and there isn't any race, we clear the
>>   written-to flag by write-protecting it.
>> - If a page is written to, but data is written to it again before we
>>   clear it, the page remains written to and we clear it.
>> - If a page is written to, but it gets write-protected before we clear
>>   it, we still write-protect it. There is double write protection here,
>>   but no side-effect.
>>
>> Let's turn this into a truth table for easier understanding. The first
>> and third columns represent the code above. The second column represents
>> any other thread interacting with the page.
>>
>>   page written/dirty?    other task's action    wp the page?
>>   no                     does nothing           no
>>   no                     writes to the page     no
>>   no                     wp the page            no
>>   yes                    does nothing           yes
>>   yes                    writes to the page     yes
>>   yes                    wp the page            yes
>>
>> As you can see, there isn't any side-effect. We are neither over-doing
>> nor under-doing the write protection.
>>
>> Even if something were wrong here and I took the lock over all of this,
>> the pages could still become written or write-protected just after
>> unlocking. That is expected. The current implementation doesn't break
>> this.
>>
>> Is my understanding wrong somewhere here? Can you point it out?
>
> Yes you're right.  With the is_writ check it looks all fine.
>
>>
>> Previous locking designs were either buggy or slower when multiple
>> threads were working on the same pages. The current implementation
>> removes those limitations:
>> - The memcpy inside pagemap_scan_output happens with the pte unlocked.
>
> Why is this anything to worry about?  Isn't that memcpy only applied to a
> page_region struct?

Yeah, correct. I'm just saying that a memcpy without the pte lock held is
better than a memcpy with the pte locked. :)

>
>> - We only write-protect a page if we have noted that the page is dirty.
>> - No mm write lock is required. The read lock works fine, just as
>>   userfaultfd_writeprotect() takes only the read lock.
>
> I didn't even notice you used to use the write lock.  Yes I think the read
> lock suffices here.
>
>>
>> The only con here is that we lock and unlock the pte lock again and
>> again.
>>
>> Please have a look at my explanation and let me know what you think.
>
> I think this is fine as long as the semantics is correct, which I believe
> is the case.  The spinlock usage can be optimized, but that can be done on
> top if it needs more involved changes.
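Agreed. For that follow-up, what I have in mind is roughly the sketch below
(inside pagemap_scan_pmd_entry(), completely untested): take the PTL once,
then collect the bits and set the uffd-wp bit while the lock is still held.
make_uffd_wp_pte() is only a placeholder name here for marking this pte
uffd-wp in place; uffd_wp_range() cannot be reused for that step since it
acquires the page table lock itself.

        spinlock_t *ptl;
        pte_t *pte, *orig_pte;
        bool is_written;
        int ret = 0;

        orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, start, &ptl);
        for (addr = start; addr < end; pte++, addr += PAGE_SIZE) {
                is_written = !is_pte_uffd_wp(*pte);

                /*
                 * pagemap_scan_output() only fills the kernel-side
                 * page_region buffer, so calling it with the PTL held is
                 * fine.
                 */
                ret = pagemap_scan_output(is_written, vma->vm_file,
                                          pte_present(*pte), is_swap_pte(*pte),
                                          p, addr, 1);
                if (ret)
                        break;

                /*
                 * Write-protect while the PTL is still held so that no
                 * write can slip in between collecting the bit and
                 * clearing it.  make_uffd_wp_pte() is a placeholder for
                 * setting the uffd-wp bit on this pte in place.
                 */
                if (PM_SCAN_OP_IS_WP(p) && is_written)
                        make_uffd_wp_pte(vma, addr, pte);
        }
        pte_unmap_unlock(orig_pte, ptl);

Error handling of the write-protect step and TLB flushing are left out of
the sketch.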
>
>>
>>>
>>> Unfortunately you also cannot reuse uffd_wp_range() because that's not
>>> atomic either, my fault here.  Probably I was thinking mostly from the
>>> soft-dirty pov on batching the collect+reset.
>>>
>>> You need to take the spin lock, collect whatever bits, set/clear whatever
>>> bits, and only then release the spin lock.
>>>
>>> "Not atomic" means you can have some page get dirtied but you could miss
>>> it.  Depending on how strict you want to be, I think it'll break apps
>>> like CRIU if strict atomicity is needed for migrating a process.  If we
>>> want to have a new interface anyway, IMHO we'd better do it in the
>>> strict way.
>> In my rigorous multi-threaded testing, where a lot of threads are working
>> on the same set of pages, we aren't losing even a single update. I can
>> share the test if you want.
>
> Good to have tests covering that.  I'd say you can add the test into
> selftests along with the series when you repost if it's convenient.  It
> can be part of an existing test or it can be a new one under mm/.

Sure, I'll add it to the selftests. Thank you for reviewing and asking the
questions.
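For reference, the structure of the test is roughly the sketch below. It is
only an outline: get_and_clear_written() is a stub standing in for the
get+clear call through the new ioctl, the userfaultfd WP registration is
elided, and error checking is omitted. Writer threads keep dirtying random
pages concurrently with the get+clear loop; once the writers have been
joined and a final pass is done, every page that was ever written must have
been reported at least once, otherwise an update was lost.

#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define NPAGES  4096
#define WRITERS 8
#define ROUNDS  1000

static char *mem;
static long pagesz;
static atomic_bool ever_written[NPAGES];        /* set by a writer after each write */
static bool ever_reported[NPAGES];              /* union of all get+clear results */
static atomic_bool stop;

/*
 * Stub for the get+clear call through the new ioctl: mark which pages of
 * mem[] were reported as written in this pass.  The dummy body only keeps
 * the skeleton compilable.
 */
static void get_and_clear_written(bool *reported)
{
        for (int i = 0; i < NPAGES; i++)
                reported[i] = true;
}

static void *writer(void *arg)
{
        unsigned int seed = (unsigned long)arg;

        while (!atomic_load(&stop)) {
                int i = rand_r(&seed) % NPAGES;

                *(volatile char *)(mem + i * pagesz) = 1;       /* dirty the page... */
                atomic_store(&ever_written[i], true);           /* ...then record it */
        }
        return NULL;
}

int main(void)
{
        pthread_t thr[WRITERS];
        bool reported[NPAGES];

        pagesz = sysconf(_SC_PAGESIZE);
        mem = mmap(NULL, NPAGES * pagesz, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        /* ...register the range with userfaultfd in WP mode here... */

        for (long i = 0; i < WRITERS; i++)
                pthread_create(&thr[i], NULL, writer, (void *)(i + 1));

        /* get+clear races with the writers for many rounds */
        for (int r = 0; r < ROUNDS; r++) {
                get_and_clear_written(reported);
                for (int i = 0; i < NPAGES; i++)
                        ever_reported[i] |= reported[i];
        }

        atomic_store(&stop, true);
        for (int i = 0; i < WRITERS; i++)
                pthread_join(thr[i], NULL);

        /* one final pass after all the writers have finished */
        get_and_clear_written(reported);
        for (int i = 0; i < NPAGES; i++)
                ever_reported[i] |= reported[i];

        /* a page that was written but never reported means a lost update */
        for (int i = 0; i < NPAGES; i++)
                assert(!atomic_load(&ever_written[i]) || ever_reported[i]);
        return 0;
}

Since the per-page flag is only set after the write itself, a page that is
flagged but never reported can only mean that a write was lost.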
>
> Thanks,
>

-- 
BR,
Muhammad Usama Anjum