From: Bo Li <libo.gcs85@bytedance.com>
To: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, luto@kernel.org,
	kees@kernel.org, akpm@linux-foundation.org, david@redhat.com,
	juri.lelli@redhat.com, vincent.guittot@linaro.org, peterz@infradead.org
Cc: dietmar.eggemann@arm.com, hpa@zytor.com, acme@kernel.org,
	namhyung@kernel.org, mark.rutland@arm.com,
	alexander.shishkin@linux.intel.com, jolsa@kernel.org,
	irogers@google.com, adrian.hunter@intel.com, kan.liang@linux.intel.com,
	viro@zeniv.linux.org.uk, brauner@kernel.org, jack@suse.cz,
	lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, vbabka@suse.cz,
	rppt@kernel.org, surenb@google.com, mhocko@suse.com,
	rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de,
	vschneid@redhat.com, jannh@google.com, pfalcato@suse.de,
	riel@surriel.com, harry.yoo@oracle.com, linux-kernel@vger.kernel.org,
	linux-perf-users@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, duanxiongchun@bytedance.com,
	yinhongbo@bytedance.com, dengliang.1214@bytedance.com,
	xieyongji@bytedance.com, chaiwen.cc@bytedance.com,
	songmuchun@bytedance.com, yuanzhu@bytedance.com,
	chengguozhu@bytedance.com, sunjiadong.lff@bytedance.com,
	Bo Li <libo.gcs85@bytedance.com>
Subject: [RFC v2 27/35] RPAL: add epoll support
Date: Fri, 30 May 2025 17:27:55 +0800
Message-Id: <7eb30a577e2c6a4f582515357aea25260105eb18.1748594841.git.libo.gcs85@bytedance.com>
X-Mailer: git-send-email 2.39.5 (Apple Git-154)
In-Reply-To:
References:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

To support the epoll family of interfaces, RPAL adds new logic for RPAL
services alongside the existing epoll code, so that user mode can drive
RPAL-service-related behavior through the same interfaces it already uses.

When a receiver thread calls epoll_wait(), it can set RPAL_EP_POLL_MAGIC in
shared memory to ask the kernel to take the RPAL-specific path. The kernel
then sets the receiver's state to RPAL_RECEIVER_STATE_READY and transitions
it to RPAL_RECEIVER_STATE_WAIT once the receiver is actually removed from
the runqueue, at which point the sender can perform RPAL calls on the
receiver thread.
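
For reference, the intended user-space flow appears to be: the receiver
writes RPAL_EP_POLL_MAGIC into its shared call context and then calls
epoll_wait() as usual, so do_epoll_wait() takes the rpal_ep_poll() path.
The sketch below is illustrative only and is not part of the patch; how the
mapped struct rpal_receiver_call_context is obtained is not defined here,
so rpal_get_call_context() and rpal_set_ep_poll_magic() are hypothetical
placeholders.

	/* Illustrative sketch only; not part of the patch. */
	#include <sys/epoll.h>

	#define RPAL_EP_POLL_MAGIC 0xCC98CC98	/* matches include/linux/rpal.h */

	/* Real layout is defined by the kernel-provided shared page. */
	struct rpal_receiver_call_context;

	/* Hypothetical helpers for obtaining the mapping and writing the magic. */
	extern struct rpal_receiver_call_context *rpal_get_call_context(void);
	extern void rpal_set_ep_poll_magic(struct rpal_receiver_call_context *rcc,
					   unsigned int magic);

	static int receiver_epoll_wait(int epfd, struct epoll_event *events,
				       int maxevents, int timeout_ms)
	{
		/* Opt in: tell the kernel this receiver wants the RPAL path. */
		rpal_set_ep_poll_magic(rpal_get_call_context(), RPAL_EP_POLL_MAGIC);

		/* The system call interface itself is unchanged. */
		return epoll_wait(epfd, events, maxevents, timeout_ms);
	}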
Signed-off-by: Bo Li <libo.gcs85@bytedance.com>
---
 arch/x86/rpal/core.c |   4 +
 fs/eventpoll.c       | 200 +++++++++++++++++++++++++++++++++++++++++++
 include/linux/rpal.h |  21 +++++
 kernel/sched/core.c  |  17 ++++
 4 files changed, 242 insertions(+)

diff --git a/arch/x86/rpal/core.c b/arch/x86/rpal/core.c
index 47c9e551344e..6a22b9faa100 100644
--- a/arch/x86/rpal/core.c
+++ b/arch/x86/rpal/core.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include
 #include

 #include "internal.h"
@@ -63,6 +64,7 @@ void rpal_kernel_ret(struct pt_regs *regs)
 	if (rpal_test_current_thread_flag(RPAL_RECEIVER_BIT)) {
 		rcc = current->rpal_rd->rcc;
+		regs->ax = rpal_try_send_events(current->rpal_rd->ep, rcc);
 		atomic_xchg(&rcc->receiver_state,
 			    RPAL_RECEIVER_STATE_KERNEL_RET);
 	} else {
 		tsk = current->rpal_sd->receiver;
@@ -142,6 +144,7 @@ rpal_do_kernel_context_switch(struct task_struct *next, struct pt_regs *regs)
 	struct task_struct *prev = current;

 	if (rpal_test_task_thread_flag(next, RPAL_LAZY_SWITCHED_BIT)) {
+		rpal_resume_ep(next);
 		current->rpal_sd->receiver = next;
 		rpal_lock_cpu(current);
 		rpal_lock_cpu(next);
@@ -154,6 +157,7 @@ rpal_do_kernel_context_switch(struct task_struct *next, struct pt_regs *regs)
 		 */
 		rebuild_sender_stack(current->rpal_sd, regs);
 		rpal_schedule(next);
+		fdput(next->rpal_rd->f);
 	} else {
 		update_dst_stack(next, regs);
 		/*
diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index d4dbffdedd08..437cd5764c03 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -38,6 +38,7 @@
 #include
 #include
 #include
+#include
 #include

 /*
@@ -2141,6 +2142,187 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 	}
 }

+#ifdef CONFIG_RPAL
+
+void rpal_resume_ep(struct task_struct *tsk)
+{
+	struct rpal_receiver_data *rrd = tsk->rpal_rd;
+	struct eventpoll *ep = (struct eventpoll *)rrd->ep;
+	struct rpal_receiver_call_context *rcc = rrd->rcc;
+
+	if (rcc->timeout > 0) {
+		hrtimer_cancel(&rrd->ep_sleeper.timer);
+		destroy_hrtimer_on_stack(&rrd->ep_sleeper.timer);
+	}
+	if (!list_empty_careful(&rrd->ep_wait.entry)) {
+		write_lock(&ep->lock);
+		__remove_wait_queue(&ep->wq, &rrd->ep_wait);
+		write_unlock(&ep->lock);
+	}
+}
+
+int rpal_try_send_events(void *ep, struct rpal_receiver_call_context *rcc)
+{
+	int eavail;
+	int res = 0;
+
+	res = ep_send_events(ep, rcc->events, rcc->maxevents);
+	if (res > 0)
+		ep_suspend_napi_irqs(ep);
+
+	eavail = ep_events_available(ep);
+	if (!eavail) {
+		atomic_and(~RPAL_KERNEL_PENDING, &rcc->ep_pending);
+		/* check again to avoid data race on RPAL_KERNEL_PENDING */
+		eavail = ep_events_available(ep);
+		if (eavail)
+			atomic_or(RPAL_KERNEL_PENDING, &rcc->ep_pending);
+	}
+	return res;
+}
+
+static int rpal_schedule_hrtimeout_range_clock(ktime_t *expires, u64 delta,
+					       const enum hrtimer_mode mode,
+					       clockid_t clock_id)
+{
+	struct hrtimer_sleeper *t = &current->rpal_rd->ep_sleeper;
+
+	/*
+	 * Optimize when a zero timeout value is given. It does not
+	 * matter whether this is an absolute or a relative time.
+	 */
+	if (expires && *expires == 0) {
+		__set_current_state(TASK_RUNNING);
+		return 0;
+	}
+
+	/*
+	 * A NULL parameter means "infinite"
+	 */
+	if (!expires) {
+		schedule();
+		return -EINTR;
+	}
+
+	hrtimer_setup_sleeper_on_stack(t, clock_id, mode);
+	hrtimer_set_expires_range_ns(&t->timer, *expires, delta);
+	hrtimer_sleeper_start_expires(t, mode);
+
+	if (likely(t->task))
+		schedule();
+
+	hrtimer_cancel(&t->timer);
+	destroy_hrtimer_on_stack(&t->timer);
+
+	__set_current_state(TASK_RUNNING);
+
+	return !t->task ? 0 : -EINTR;
+}
+
+static int rpal_ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
+			int maxevents, struct timespec64 *timeout)
+{
+	int res = 0, eavail, timed_out = 0;
+	u64 slack = 0;
+	struct rpal_receiver_data *rrd = current->rpal_rd;
+	wait_queue_entry_t *wait = &rrd->ep_wait;
+	ktime_t expires, *to = NULL;
+
+	rrd->ep = ep;
+
+	lockdep_assert_irqs_enabled();
+
+	if (timeout && (timeout->tv_sec | timeout->tv_nsec)) {
+		slack = select_estimate_accuracy(timeout);
+		to = &expires;
+		*to = timespec64_to_ktime(*timeout);
+	} else if (timeout) {
+		timed_out = 1;
+	}
+
+	eavail = ep_events_available(ep);
+
+	while (1) {
+		if (eavail) {
+			res = rpal_try_send_events(ep, rrd->rcc);
+			if (res) {
+				atomic_xchg(&rrd->rcc->receiver_state,
+					    RPAL_RECEIVER_STATE_RUNNING);
+				return res;
+			}
+		}
+
+		if (timed_out) {
+			atomic_xchg(&rrd->rcc->receiver_state,
+				    RPAL_RECEIVER_STATE_RUNNING);
+			return 0;
+		}
+
+		eavail = ep_busy_loop(ep);
+		if (eavail)
+			continue;
+
+		if (signal_pending(current)) {
+			atomic_xchg(&rrd->rcc->receiver_state,
+				    RPAL_RECEIVER_STATE_RUNNING);
+			return -EINTR;
+		}
+
+		init_wait(wait);
+		wait->func = rpal_ep_autoremove_wake_function;
+		wait->private = rrd;
+		write_lock_irq(&ep->lock);
+
+		atomic_xchg(&rrd->rcc->receiver_state,
+			    RPAL_RECEIVER_STATE_READY);
+		__set_current_state(TASK_INTERRUPTIBLE);
+
+		eavail = ep_events_available(ep);
+		if (!eavail)
+			__add_wait_queue_exclusive(&ep->wq, wait);
+
+		write_unlock_irq(&ep->lock);
+
+		if (!eavail && ep_schedule_timeout(to)) {
+			if (RPAL_USER_PENDING & atomic_read(&rrd->rcc->ep_pending)) {
+				timed_out = 1;
+			} else {
+				timed_out =
+					!rpal_schedule_hrtimeout_range_clock(
+						to, slack, HRTIMER_MODE_ABS,
+						CLOCK_MONOTONIC);
+			}
+		}
+		atomic_cmpxchg(&rrd->rcc->receiver_state,
+			       RPAL_RECEIVER_STATE_READY,
+			       RPAL_RECEIVER_STATE_RUNNING);
+		__set_current_state(TASK_RUNNING);
+
+		/*
+		 * We were woken up, thus go and try to harvest some events.
+		 * If timed out and still on the wait queue, recheck eavail
+		 * carefully under lock, below.
+		 */
+		eavail = 1;
+
+		if (!list_empty_careful(&wait->entry)) {
+			write_lock_irq(&ep->lock);
+			/*
+			 * If the thread timed out and is not on the wait queue,
+			 * it means that the thread was woken up after its
+			 * timeout expired before it could reacquire the lock.
+			 * Thus, when wait.entry is empty, it needs to harvest
+			 * events.
+			 */
+			if (timed_out)
+				eavail = list_empty(&wait->entry);
+			__remove_wait_queue(&ep->wq, wait);
+			write_unlock_irq(&ep->lock);
+		}
+	}
+}
+#endif
+
 /**
  * ep_loop_check_proc - verify that adding an epoll file inside another
  * epoll structure does not violate the constraints, in
@@ -2529,7 +2711,25 @@ static int do_epoll_wait(int epfd, struct epoll_event __user *events,
 	ep = fd_file(f)->private_data;

 	/* Time to fish for events ... */
+#ifdef CONFIG_RPAL
+	/*
+	 * For an RPAL task that is a receiver and has set the magic value in
+	 * shared memory, we assume it is prepared for RPAL calls and handle
+	 * it differently.
+	 *
+	 * In all other cases, an RPAL task behaves like a normal task.
+	 */
+	if (rpal_current_service() &&
+	    rpal_test_current_thread_flag(RPAL_RECEIVER_BIT) &&
+	    current->rpal_rd->rcc->rpal_ep_poll_magic == RPAL_EP_POLL_MAGIC) {
+		current->rpal_rd->f = f;
+		return rpal_ep_poll(ep, events, maxevents, to);
+	} else {
+		return ep_poll(ep, events, maxevents, to);
+	}
+#else
 	return ep_poll(ep, events, maxevents, to);
+#endif
 }

 SYSCALL_DEFINE4(epoll_wait, int, epfd, struct epoll_event __user *, events,
diff --git a/include/linux/rpal.h b/include/linux/rpal.h
index f2474cb53abe..5912ffec6e28 100644
--- a/include/linux/rpal.h
+++ b/include/linux/rpal.h
@@ -16,6 +16,8 @@
 #include
 #include
 #include
+#include
+#include

 #define RPAL_ERROR_MSG "rpal error: "
 #define rpal_err(x...) pr_err(RPAL_ERROR_MSG x)
@@ -89,6 +91,7 @@ enum {
 };

 #define RPAL_ERROR_MAGIC 0x98CC98CC
+#define RPAL_EP_POLL_MAGIC 0xCC98CC98

 #define RPAL_SID_SHIFT 24
 #define RPAL_ID_SHIFT 8
@@ -103,6 +106,9 @@ enum {
 #define RPAL_PKRU_UNION 1
 #define RPAL_PKRU_INTERSECT 2

+#define RPAL_KERNEL_PENDING 0x1
+#define RPAL_USER_PENDING 0x2
+
 extern unsigned long rpal_cap;

 enum rpal_task_flag_bits {
@@ -282,6 +288,12 @@ struct rpal_receiver_call_context {
 	int receiver_id;
 	atomic_t receiver_state;
 	atomic_t sender_state;
+	atomic_t ep_pending;
+	int rpal_ep_poll_magic;
+	int epfd;
+	void __user *events;
+	int maxevents;
+	int timeout;
 };

 /* recovery point for sender */
@@ -325,6 +337,10 @@ struct rpal_receiver_data {
 	struct rpal_shared_page *rsp;
 	struct rpal_receiver_call_context *rcc;
 	struct task_struct *sender;
+	void *ep;
+	struct fd f;
+	struct hrtimer_sleeper ep_sleeper;
+	wait_queue_entry_t ep_wait;
 };

 struct rpal_sender_data {
@@ -574,4 +590,9 @@ __rpal_switch_to(struct task_struct *prev_p, struct task_struct *next_p);
 asmlinkage __visible void rpal_schedule_tail(struct task_struct *prev);
 int do_rpal_mprotect_pkey(unsigned long start, size_t len, int pkey);
 void rpal_set_pku_schedule_tail(struct task_struct *prev);
+int rpal_ep_autoremove_wake_function(wait_queue_entry_t *curr,
+				     unsigned int mode, int wake_flags,
+				     void *key);
+void rpal_resume_ep(struct task_struct *tsk);
+int rpal_try_send_events(void *ep, struct rpal_receiver_call_context *rcc);
 #endif
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index eb5d5bd51597..486d59bdd3fc 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6794,6 +6794,23 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 #define SM_RTLOCK_WAIT 2

 #ifdef CONFIG_RPAL
+int rpal_ep_autoremove_wake_function(wait_queue_entry_t *curr,
+				     unsigned int mode, int wake_flags,
+				     void *key)
+{
+	struct rpal_receiver_data *rrd = curr->private;
+	struct task_struct *tsk = rrd->rcd.bp_task;
+	int ret;
+
+	ret = try_to_wake_up(tsk, mode, wake_flags);
+
+	list_del_init_careful(&curr->entry);
+	if (!ret)
+		atomic_or(RPAL_KERNEL_PENDING, &rrd->rcc->ep_pending);
+
+	return 1;
+}
+
 static inline void rpal_check_ready_state(struct task_struct *tsk, int state)
 {
 	if (rpal_test_task_thread_flag(tsk, RPAL_RECEIVER_BIT)) {
-- 
2.20.1