Date: Tue, 27 Jun 2023 23:00:39 +0500
From: Muhammad Usama Anjum <usama.anjum@collabora.com>
To: Andrei Vagin
Cc: Muhammad Usama Anjum, Peter Xu, David Hildenbrand, Andrew Morton,
 Danylo Mocherniuk, Paul Gofman, Cyrill Gorcunov, Mike Rapoport, Nadav Amit,
 Alexander Viro, Shuah Khan, Christian Brauner, Yang Shi, Vlastimil Babka,
 Liam R. Howlett, Yun Zhou, Suren Baghdasaryan, Alex Sierra, Matthew Wilcox,
 Pasha Tatashin, Axel Rasmussen, Gustavo A. R. Silva, Dan Williams,
 linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-kselftest@vger.kernel.org, Greg KH,
 kernel@collabora.com, Michał Mirosław
Subject: Re: [PATCH v21 2/5] fs/proc/task_mmu: Implement IOCTL to get and
 optionally clear info about PTEs
References: <20230626113156.1274521-1-usama.anjum@collabora.com>
 <20230626113156.1274521-3-usama.anjum@collabora.com>
 <13ea54c0-25a3-285c-f47e-d6da11c91795@collabora.com>

On 6/27/23 7:36 PM, Andrei Vagin wrote:
> On Tue, Jun 27, 2023 at 02:00:31PM +0500, Muhammad Usama Anjum wrote:
>> Hi Andrei and Michal,
>>
>> Let's resolve the last two points. Please reply below.
>>
>> On 6/27/23 6:46 AM, Andrei Vagin wrote:
>> ...
>>>> +#ifdef CONFIG_HUGETLB_PAGE
>>>> +static int pagemap_scan_hugetlb_entry(pte_t *ptep, unsigned long hmask,
>>>> +				      unsigned long start, unsigned long end,
>>>> +				      struct mm_walk *walk)
>>>> +{
>>>> +	unsigned long n_pages = (end - start)/PAGE_SIZE;
>>>> +	struct pagemap_scan_private *p = walk->private;
>>>> +	struct vm_area_struct *vma = walk->vma;
>>>> +	bool is_written, is_interesting = true;
>>>> +	struct hstate *h = hstate_vma(vma);
>>>> +	unsigned long bitmap;
>>>> +	spinlock_t *ptl;
>>>> +	int ret = 0;
>>>> +	pte_t ptent;
>>>> +
>>>> +	if (IS_PM_SCAN_WP(p->flags) && n_pages < HPAGE_SIZE/PAGE_SIZE)
>>>> +		return -EINVAL;
>>>> +
>>>> +	if (n_pages > p->max_pages - p->found_pages)
>>>> +		n_pages = p->max_pages - p->found_pages;
>>>> +
>>>> +	if (IS_PM_SCAN_WP(p->flags)) {
>>>> +		i_mmap_lock_write(vma->vm_file->f_mapping);
>>>> +		ptl = huge_pte_lock(h, vma->vm_mm, ptep);
>>>> +	}
>>>> +
>>>> +	ptent = huge_ptep_get(ptep);
>>>> +	is_written = !is_huge_pte_uffd_wp(ptent);
>>>> +
>>>> +	/*
>>>> +	 * Partial hugetlb page clear isn't supported
>>>> +	 */
>>>> +	if (is_written && IS_PM_SCAN_WP(p->flags) &&
>>>> +	    n_pages < HPAGE_SIZE/PAGE_SIZE) {
>>>
>>> should it be done only if is_interesting is set?
>> This can be a good optimization. We shouldn't return an error before
>> finding out whether the page is interesting. I'll update.
>>
>>>
>>>> +		ret = PM_SCAN_END_WALK;
>>>> +		goto unlock_and_return;
>>>> +	}
>>>> +
>>>> +	bitmap = PM_SCAN_FLAGS(is_written, pagemap_scan_is_huge_file(ptent),
>>>> +			       pte_present(ptent), is_swap_pte(ptent),
>>>> +			       pte_present(ptent) && is_zero_pfn(pte_pfn(ptent)));
>>>> +
>>>> +	if (IS_PM_SCAN_GET(p->flags)) {
>>>> +		is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
>>>> +		if (is_interesting)
>>>> +			ret = pagemap_scan_output(bitmap, p, start, n_pages);
>>>> +	}
>>>> +
>>>> +	if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
>>>> +	    ret >= 0) {
>>>> +		make_uffd_wp_huge_pte(vma, start, ptep, ptent);
>>>> +		flush_hugetlb_tlb_range(vma, start, end);
>>>> +	}
>>>> +
>>>> +unlock_and_return:
>>>> +	if (IS_PM_SCAN_WP(p->flags)) {
>>>> +		spin_unlock(ptl);
>>>> +		i_mmap_unlock_write(vma->vm_file->f_mapping);
>>>> +	}
>>>> +
>>>> +	return ret;
>>>> +}
>> ...
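To make the planned update above concrete: the idea is to compute bitmap and
is_interesting before the partial-clear check, so an uninteresting huge page
no longer ends the walk. A rough, untested sketch against this patch (same
names as in the quoted function):

	ptent = huge_ptep_get(ptep);
	is_written = !is_huge_pte_uffd_wp(ptent);

	bitmap = PM_SCAN_FLAGS(is_written, pagemap_scan_is_huge_file(ptent),
			       pte_present(ptent), is_swap_pte(ptent),
			       pte_present(ptent) && is_zero_pfn(pte_pfn(ptent)));

	if (IS_PM_SCAN_GET(p->flags))
		is_interesting = pagemap_scan_is_interesting_page(bitmap, p);

	/* Reject the unsupported partial clear only for interesting pages */
	if (is_interesting && is_written && IS_PM_SCAN_WP(p->flags) &&
	    n_pages < HPAGE_SIZE/PAGE_SIZE) {
		ret = PM_SCAN_END_WALK;
		goto unlock_and_return;
	}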
>>>> +
>>>> +static long do_pagemap_scan(struct mm_struct *mm, unsigned long __arg)
>>>> +{
>>>> +	struct pm_scan_arg __user *uarg = (struct pm_scan_arg __user *)__arg;
>>>> +	unsigned long long start, end, walk_start, walk_end;
>>>> +	unsigned long empty_slots, vec_index = 0;
>>>> +	struct mmu_notifier_range range;
>>>> +	struct page_region __user *vec;
>>>> +	struct pagemap_scan_private p;
>>>> +	struct pm_scan_arg arg;
>>>> +	int ret = 0;
>>>> +
>>>> +	if (copy_from_user(&arg, uarg, sizeof(arg)))
>>>> +		return -EFAULT;
>>>> +
>>>> +	start = untagged_addr((unsigned long)arg.start);
>>>> +	vec = (struct page_region *)untagged_addr((unsigned long)arg.vec);
>>>> +
>>>> +	ret = pagemap_scan_args_valid(&arg, start, vec);
>>>> +	if (ret)
>>>> +		return ret;
>>>> +
>>>> +	end = start + arg.len;
>>>> +	p.max_pages = (arg.max_pages) ? arg.max_pages : ULONG_MAX;
>>>> +	p.found_pages = 0;
>>>> +	p.required_mask = arg.required_mask;
>>>> +	p.anyof_mask = arg.anyof_mask;
>>>> +	p.excluded_mask = arg.excluded_mask;
>>>> +	p.return_mask = arg.return_mask;
>>>> +	p.flags = arg.flags;
>>>> +	p.flags |= ((p.required_mask | p.anyof_mask | p.excluded_mask) &
>>>> +		    PAGE_IS_WRITTEN) ? PM_SCAN_REQUIRE_UFFD : 0;
>>>> +	p.cur_buf.start = p.cur_buf.len = p.cur_buf.flags = 0;
>>>> +	p.vec_buf = NULL;
>>>> +	p.vec_buf_len = PAGEMAP_WALK_SIZE >> PAGE_SHIFT;
>>>> +
>>>> +	/*
>>>> +	 * Allocate smaller buffer to get output from inside the page walk
>>>> +	 * functions and walk page range in PAGEMAP_WALK_SIZE size chunks. As
>>>> +	 * we want to return output to user in compact form where no two
>>>> +	 * consecutive regions should be continuous and have the same flags.
>>>> +	 * So store the latest element in p.cur_buf between different walks and
>>>> +	 * store the p.cur_buf at the end of the walk to the user buffer.
>>>> +	 */
>>>> +	if (IS_PM_SCAN_GET(p.flags)) {
>>>> +		p.vec_buf = kmalloc_array(p.vec_buf_len, sizeof(*p.vec_buf),
>>>> +					  GFP_KERNEL);
>>>> +		if (!p.vec_buf)
>>>> +			return -ENOMEM;
>>>> +	}
>>>> +
>>>> +	if (IS_PM_SCAN_WP(p.flags)) {
>>>> +		mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_VMA, 0,
>>>> +					mm, start, end);
>>>> +		mmu_notifier_invalidate_range_start(&range);
>>>> +	}
>>>> +
>>>> +	walk_start = walk_end = start;
>>>> +	while (walk_end < end && !ret) {
>>>> +		if (IS_PM_SCAN_GET(p.flags)) {
>>>> +			p.vec_buf_index = 0;
>>>> +
>>>> +			/*
>>>> +			 * All data is copied to cur_buf first. When more data
>>>> +			 * is found, we push cur_buf to vec_buf and copy new
>>>> +			 * data to cur_buf. Subtract 1 from length as the
>>>> +			 * index of cur_buf isn't counted in length.
>>>> +			 */
>>>> +			empty_slots = arg.vec_len - vec_index;
>>>> +			p.vec_buf_len = min(p.vec_buf_len, empty_slots - 1);
>>>> +		}
>>>> +
>>>> +		walk_end = (walk_start + PAGEMAP_WALK_SIZE) & PAGEMAP_WALK_MASK;
>>>> +		if (walk_end > end)
>>>> +			walk_end = end;
>>>> +
>>>
>>> If this loop can run for a long time, we need to interrupt it in case of
>>> pending signals.
>>>
>>> If you think we don't need to do that, pls explain in the commit
>>> message, so that maintainers don't miss this part and double check that
>>> everything is alright here.
>> This can be done. I'll add to the commit message that we are walking over
>> the entire range passed.
>>
>>>
>>>> +		ret = mmap_read_lock_killable(mm);
>>>> +		if (ret)
>>>
>>> If any pages have been handled, we need to report them to user-space. It
>>> isn't acceptable to return an error in such cases.
>> This will return an error only when the task has gotten some serious
>> signal and it is going to be killed. In this scenario, we shouldn't care
>> about returning gracefully. Why do you think we should return gracefully
>> in this case?
>
> You are right, it can be interrupted only by a fatal signal. You can
> ignore this comment.
>
>>
>>>
>>> And we need to report an address where it stopped scanning.
>>> We can do that by adding zero length vector.
>> I don't want to multiplex the ending address into vec. Can we add an
>> end_addr variable in struct pm_scan_arg to always return the ending
>> address?
>>
>> struct pm_scan_arg {
>> ...
>> 	__u64 end_addr;
>> };
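To illustrate what I have in mind: with such a field, a caller could resume a
scan that stopped early (e.g. because vec filled up) without guessing where
the kernel stopped. A hypothetical, untested user-space sketch (field and
flag names as in this series; end_addr is the proposed, not-yet-existing
addition):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/fs.h>	/* this series' PAGEMAP_SCAN uapi, assumed */

#define VEC_LEN 256

static int scan_written(int pagemap_fd, unsigned long start, unsigned long len)
{
	struct page_region vecs[VEC_LEN];
	struct pm_scan_arg arg;

	memset(&arg, 0, sizeof(arg));
	arg.size = sizeof(arg);
	arg.flags = PM_SCAN_OP_GET;
	arg.start = start;
	arg.len = len;
	arg.vec = (unsigned long)vecs;
	arg.vec_len = VEC_LEN;
	arg.return_mask = PAGE_IS_WRITTEN;

	for (;;) {
		/* returns the number of filled vecs, or a negative error */
		int n = ioctl(pagemap_fd, PAGEMAP_SCAN, &arg);

		if (n < 0)
			return n;
		/* ... consume vecs[0 .. n - 1] here ... */

		if (arg.end_addr >= start + len)
			return 0;	/* whole range walked */
		/* resume exactly where the kernel stopped */
		arg.start = arg.end_addr;
		arg.len = (start + len) - arg.end_addr;
	}
}

This way, end_addr == arg.start + arg.len signals a complete walk and
anything less tells the caller where to pick up again.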
>>
>>>
>>>> +			goto free_data;
>>>> +		ret = walk_page_range(mm, walk_start, walk_end,
>>>> +				      &pagemap_scan_ops, &p);
>>>> +		mmap_read_unlock(mm);
>>>> +
>>>> +		if (ret && ret != PM_SCAN_FOUND_MAX_PAGES &&
>>>> +		    ret != PM_SCAN_END_WALK)
>>>> +			goto free_data;
>>>> +
>>>> +		walk_start = walk_end;
>>>> +		if (IS_PM_SCAN_GET(p.flags) && p.vec_buf_index) {
>>>> +			if (copy_to_user(&vec[vec_index], p.vec_buf,
>>>> +					 p.vec_buf_index * sizeof(*p.vec_buf))) {
>>>> +				/*
>>>> +				 * Return error even though the OP succeeded
>>>> +				 */
>>>> +				ret = -EFAULT;
>>>> +				goto free_data;
>>>> +			}
>>>> +			vec_index += p.vec_buf_index;
>>>
>>> Should we set ret to zero here if it is equal to PM_SCAN_END_WALK?
>> No, PM_SCAN_END_WALK is just an internal code to stop the page walk and
>> return immediately. When we get this return value, we stop this loop and
>> return to the user with whatever data we have in the user buffer.
>
> but PM_SCAN_END_WALK is returned when p.vec_buf is full, so we can
> restart the loop after copying vec_buf to the user buffer, can't we?

No, we set the capacity of p.vec_buf based on how many empty slots are
remaining in the user buffer. So when p.vec_buf is marked as full, it means
the user buffer is full.

>
>>
>>>
>>>> +		}
>>>> +	}
>>>> +
>>>> +	if (p.cur_buf.len) {
>>>> +		if (copy_to_user(&vec[vec_index], &p.cur_buf, sizeof(p.cur_buf))) {
>>>> +			ret = -EFAULT;
>>>> +			goto free_data;
>>>> +		}
>>>> +		vec_index++;
>>>> +	}
>>>> +
>>>> +	ret = vec_index;
>>>> +
>>>> +free_data:
>>>> +	if (IS_PM_SCAN_WP(p.flags))
>>>> +		mmu_notifier_invalidate_range_end(&range);
>>>> +
>>>> +	kfree(p.vec_buf);
>>>> +	return ret;
>>>> +}
>>>> +
>> ...
>>
>> --
>> BR,
>> Muhammad Usama Anjum

--
BR,
Muhammad Usama Anjum