References: <20250422162913.1242057-1-qq282012236@gmail.com> <20250422162913.1242057-2-qq282012236@gmail.com> <14195206-47b1-4483-996d-3315aa7c33aa@kernel.dk> <7bea9c74-7551-4312-bece-86c4ad5c982f@kernel.dk> <52d55891-36e3-43e7-9726-a2cd113f5327@kernel.dk> <5c20b5ca-ce41-43c4-870a-c50206ab058d@kernel.dk>
In-Reply-To: <5c20b5ca-ce41-43c4-870a-c50206ab058d@kernel.dk>
From: 姜智伟
Date: Thu, 24 Apr 2025 22:45:01 +0800
Subject: Re: [PATCH v2 1/2] io_uring: Add new functions to handle user fault scenarios
To: Jens Axboe
Cc: viro@zeniv.linux.org.uk, brauner@kernel.org, jack@suse.cz, akpm@linux-foundation.org, peterx@redhat.com, asml.silence@gmail.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
io-uring@vger.kernel.org

Jens Axboe wrote on Thu, Apr 24, 2025 at 22:13:
>
> On 4/24/25 8:08 AM, 姜智伟 wrote:
> > Jens Axboe wrote on Thu, Apr 24, 2025 at 06:58:
> >>
> >> On 4/23/25 9:55 AM, Jens Axboe wrote:
> >>> Something like this, perhaps - it'll ensure that io-wq workers get a
> >>> chance to flush out pending work, which should prevent the looping. I've
> >>> attached a basic test case. It'll issue a write that will fault, and
> >>> then try and cancel that as a way to trigger the TIF_NOTIFY_SIGNAL based
> >>> looping.
> >>
> >> Something that may actually work - use TASK_UNINTERRUPTIBLE IFF
> >> signal_pending() is true AND the fault has already been tried once
> >> before. If that's the case, rather than just call schedule() with
> >> TASK_INTERRUPTIBLE, use TASK_UNINTERRUPTIBLE and schedule_timeout() with
> >> a suitable timeout length that prevents the annoying parts busy looping.
> >> I used HZ / 10.
> >>
> >> I don't see how to fix userfaultfd for this case, either using io_uring
> >> or normal write(2). Normal syscalls can pass back -ERESTARTSYS and get
> >> it retried, but there's no way to do that from inside fault handling. So
> >> I think we just have to be nicer about it.
> >>
> >> Andrew, as the userfaultfd maintainer, what do you think?
> >>
> >> diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
> >> index d80f94346199..1016268c7b51 100644
> >> --- a/fs/userfaultfd.c
> >> +++ b/fs/userfaultfd.c
> >> @@ -334,15 +334,29 @@ static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
> >>         return ret;
> >>  }
> >>
> >> -static inline unsigned int userfaultfd_get_blocking_state(unsigned int flags)
> >> +struct userfault_wait {
> >> +       unsigned int task_state;
> >> +       bool timeout;
> >> +};
> >> +
> >> +static struct userfault_wait userfaultfd_get_blocking_state(unsigned int flags)
> >>  {
> >> +       /*
> >> +        * If the fault has already been tried AND there's a signal pending
> >> +        * for this task, use TASK_UNINTERRUPTIBLE with a small timeout.
> >> +        * This prevents busy looping where schedule() otherwise does nothing
> >> +        * for TASK_INTERRUPTIBLE when the task has a signal pending.
> >> +        */
> >> +       if ((flags & FAULT_FLAG_TRIED) && signal_pending(current))
> >> +               return (struct userfault_wait) { TASK_UNINTERRUPTIBLE, true };
> >> +
> >>         if (flags & FAULT_FLAG_INTERRUPTIBLE)
> >> -               return TASK_INTERRUPTIBLE;
> >> +               return (struct userfault_wait) { TASK_INTERRUPTIBLE, false };
> >>
> >>         if (flags & FAULT_FLAG_KILLABLE)
> >> -               return TASK_KILLABLE;
> >> +               return (struct userfault_wait) { TASK_KILLABLE, false };
> >>
> >> -       return TASK_UNINTERRUPTIBLE;
> >> +       return (struct userfault_wait) { TASK_UNINTERRUPTIBLE, false };
> >>  }
> >>
> >>  /*
> >> @@ -368,7 +382,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
> >>         struct userfaultfd_wait_queue uwq;
> >>         vm_fault_t ret = VM_FAULT_SIGBUS;
> >>         bool must_wait;
> >> -       unsigned int blocking_state;
> >> +       struct userfault_wait wait_mode;
> >>
> >>         /*
> >>          * We don't do userfault handling for the final child pid update
> >> @@ -466,7 +480,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
> >>         uwq.ctx = ctx;
> >>         uwq.waken = false;
> >>
> >> -       blocking_state = userfaultfd_get_blocking_state(vmf->flags);
> >> +       wait_mode = userfaultfd_get_blocking_state(vmf->flags);
> >>
> >>         /*
> >>          * Take the vma lock now, in order to safely call
> >> @@ -488,7 +502,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
> >>          * following the spin_unlock to happen before the list_add in
> >>          * __add_wait_queue.
> >>          */
> >> -       set_current_state(blocking_state);
> >> +       set_current_state(wait_mode.task_state);
> >>         spin_unlock_irq(&ctx->fault_pending_wqh.lock);
> >>
> >>         if (!is_vm_hugetlb_page(vma))
> >> @@ -501,7 +515,11 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
> >>
> >>         if (likely(must_wait && !READ_ONCE(ctx->released))) {
> >>                 wake_up_poll(&ctx->fd_wqh, EPOLLIN);
> >> -               schedule();
> >> +               /* See comment in userfaultfd_get_blocking_state() */
> >> +               if (!wait_mode.timeout)
> >> +                       schedule();
> >> +               else
> >> +                       schedule_timeout(HZ / 10);
> >>         }
> >>
> >>         __set_current_state(TASK_RUNNING);
> >>
> >> --
> >> Jens Axboe
> > I guess the previous io_work_fault patch might have already addressed
> > the issue sufficiently. The later patch that adds a timeout for
> > userfaultfd might
>
> That one isn't guaranteed to be safe, as it's not necessarily a safe
> context to prune the conditions that lead to a busy loop rather than the
> normal "schedule until the condition is resolved". Running task_work
> should only be done at the outermost point in the kernel, where the task
> state is known sane in terms of what locks etc are being held. For some
> conditions the patch will work just fine, but it's not guaranteed to be
> the case.
>
> > not be necessary. Wouldn't returning after a timeout just cause the
> > same fault to repeat indefinitely again? Regardless of whether the
> > thread is in UN (uninterruptible) or IN (interruptible) state, the
> > expected behavior should be to wait until the page is filled or the
> > uffd resource is released to be woken up, which seems like the
> > correct logic.
>
> Right, it'll just sleep with a timeout for a bit so as not to be a 100%
> busy loop. That's unfortunately the best we can do for this case... The
> expected behavior is indeed to schedule until we get woken, however that
> just doesn't work if there are signals pending, or other conditions that
> lead to TASK_INTERRUPTIBLE + schedule() being a no-op.
>
> --
> Jens Axboe

In my testing, clearing the NOTIFY flag in the original io_work_fault
patch ensures that the next schedule correctly waits. However, adding a
timeout brings the issue back to multiple repeated faults.

Also, after clearing the NOTIFY flag in handle_userfault, it's possible
that some task work hasn't been executed. But if task_work_run isn't
called again afterwards, could those tasks get lost? It seems like there
isn't an appropriate place to add it back.

So, do you suggest adjusting the fault retry frequency in userfaultfd,
pacing the retries more evenly, to alleviate the issue?