From: Jens Axboe <axboe@kernel.dk>
Date: Thu, 24 Apr 2025 08:13:13 -0600
Subject: Re: [PATCH v2 1/2] io_uring: Add new functions to handle user fault scenarios
To: 姜智伟
Cc: viro@zeniv.linux.org.uk, brauner@kernel.org, jack@suse.cz, akpm@linux-foundation.org, peterx@redhat.com, asml.silence@gmail.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, io-uring@vger.kernel.org
Message-ID: <5c20b5ca-ce41-43c4-870a-c50206ab058d@kernel.dk>
References: <20250422162913.1242057-1-qq282012236@gmail.com>
 <20250422162913.1242057-2-qq282012236@gmail.com>
 <14195206-47b1-4483-996d-3315aa7c33aa@kernel.dk>
 <7bea9c74-7551-4312-bece-86c4ad5c982f@kernel.dk>
 <52d55891-36e3-43e7-9726-a2cd113f5327@kernel.dk>
On 4/24/25 8:08 AM, 姜智伟 wrote:
> Jens Axboe wrote on Thu, Apr 24, 2025 at 06:58:
>>
>> On 4/23/25 9:55 AM, Jens Axboe wrote:
>>> Something like this, perhaps - it'll ensure that io-wq workers get a
>>> chance to flush out pending work, which should prevent the looping. I've
>>> attached a basic test case. It'll issue a write that will fault, and
>>> then try and cancel that as a way to trigger the TIF_NOTIFY_SIGNAL based
>>> looping.
>>
>> Something that may actually work - use TASK_UNINTERRUPTIBLE IFF
>> signal_pending() is true AND the fault has already been tried once
>> before. If that's the case, rather than just call schedule() with
>> TASK_INTERRUPTIBLE, use TASK_UNINTERRUPTIBLE and schedule_timeout() with
>> a suitable timeout length that prevents the annoying parts busy looping.
>> I used HZ / 10.
>>
>> I don't see how to fix userfaultfd for this case, either using io_uring
>> or normal write(2). Normal syscalls can pass back -ERESTARTSYS and get
>> it retried, but there's no way to do that from inside fault handling. So
>> I think we just have to be nicer about it.
>>
>> Andrew, as the userfaultfd maintainer, what do you think?
>>
>> diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
>> index d80f94346199..1016268c7b51 100644
>> --- a/fs/userfaultfd.c
>> +++ b/fs/userfaultfd.c
>> @@ -334,15 +334,29 @@ static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
>>  	return ret;
>>  }
>>
>> -static inline unsigned int userfaultfd_get_blocking_state(unsigned int flags)
>> +struct userfault_wait {
>> +	unsigned int task_state;
>> +	bool timeout;
>> +};
>> +
>> +static struct userfault_wait userfaultfd_get_blocking_state(unsigned int flags)
>>  {
>> +	/*
>> +	 * If the fault has already been tried AND there's a signal pending
>> +	 * for this task, use TASK_UNINTERRUPTIBLE with a small timeout.
>> +	 * This prevents busy looping where schedule() otherwise does nothing
>> +	 * for TASK_INTERRUPTIBLE when the task has a signal pending.
>> +	 */
>> +	if ((flags & FAULT_FLAG_TRIED) && signal_pending(current))
>> +		return (struct userfault_wait) { TASK_UNINTERRUPTIBLE, true };
>> +
>>  	if (flags & FAULT_FLAG_INTERRUPTIBLE)
>> -		return TASK_INTERRUPTIBLE;
>> +		return (struct userfault_wait) { TASK_INTERRUPTIBLE, false };
>>
>>  	if (flags & FAULT_FLAG_KILLABLE)
>> -		return TASK_KILLABLE;
>> +		return (struct userfault_wait) { TASK_KILLABLE, false };
>>
>> -	return TASK_UNINTERRUPTIBLE;
>> +	return (struct userfault_wait) { TASK_UNINTERRUPTIBLE, false };
>>  }
>>
>>  /*
>> @@ -368,7 +382,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
>>  	struct userfaultfd_wait_queue uwq;
>>  	vm_fault_t ret = VM_FAULT_SIGBUS;
>>  	bool must_wait;
>> -	unsigned int blocking_state;
>> +	struct userfault_wait wait_mode;
>>
>>  	/*
>>  	 * We don't do userfault handling for the final child pid update
>> @@ -466,7 +480,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
>>  	uwq.ctx = ctx;
>>  	uwq.waken = false;
>>
>> -	blocking_state = userfaultfd_get_blocking_state(vmf->flags);
>> +	wait_mode = userfaultfd_get_blocking_state(vmf->flags);
>>
>>  	/*
>>  	 * Take the vma lock now, in order to safely call
>> @@ -488,7 +502,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
>>  	 * following the spin_unlock to happen before the list_add in
>>  	 * __add_wait_queue.
>>  	 */
>> -	set_current_state(blocking_state);
>> +	set_current_state(wait_mode.task_state);
>>  	spin_unlock_irq(&ctx->fault_pending_wqh.lock);
>>
>>  	if (!is_vm_hugetlb_page(vma))
>> @@ -501,7 +515,11 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
>>
>>  	if (likely(must_wait && !READ_ONCE(ctx->released))) {
>>  		wake_up_poll(&ctx->fd_wqh, EPOLLIN);
>> -		schedule();
>> +		/* See comment in userfaultfd_get_blocking_state() */
>> +		if (!wait_mode.timeout)
>> +			schedule();
>> +		else
>> +			schedule_timeout(HZ / 10);
>>  	}
>>
>>  	__set_current_state(TASK_RUNNING);
>>
>> --
>> Jens Axboe

> I guess the previous io_work_fault patch might have already addressed
> the issue sufficiently. The later patch that adds a timeout for
> userfaultfd might

That one isn't guaranteed to be safe, as it's not necessarily a safe
context in which to prune the conditions that lead to a busy loop rather
than the normal "schedule until the condition is resolved". Running
task_work should only be done at the outermost point in the kernel,
where the task state is known to be sane in terms of what locks etc. are
being held. For some conditions the patch will work just fine, but
that's not guaranteed to be the case.

> not be necessary. Wouldn't returning after a timeout just cause the
> same fault to repeat indefinitely again? Regardless of whether the
> thread is in UN or IN state, the expected behavior should be to wait
> until it is woken because the page has been filled or the uffd resource
> has been released, which seems like the correct logic.

Right, it'll just sleep for a bit rather than spin in a 100% busy loop.
That's unfortunately the best we can do for this case...
The expected behavior is indeed to schedule until we get woken; however,
that just doesn't work if there are signals pending, or under other
conditions that turn TASK_INTERRUPTIBLE + schedule() into a no-op.

--
Jens Axboe