From: Peter Oskolkov <posk@posk.io>
Date: Sun, 7 Nov 2021 10:27:02 -0800
Subject: Re: [PATCH v0.8 4/6] sched/umcg, lib/umcg: implement libumcg
To: Tao Zhou
Cc: Peter Zijlstra, Ingo Molnar, Thomas Gleixner, Andrew Morton,
    Dave Hansen, Andy Lutomirski, linux-mm@kvack.org,
    Linux Kernel Mailing List, linux-api@vger.kernel.org, Paul Turner,
    Ben Segall, Peter Oskolkov, Andrei Vagin, Jann Horn, Thierry Delisle
References: <20211104195804.83240-1-posk@google.com> <20211104195804.83240-5-posk@google.com>
On Sun, Nov 7, 2021 at 8:33 AM Tao Zhou wrote:
>
> On Thu, Nov 04, 2021 at 12:58:02PM -0700, Peter Oskolkov wrote:
>
> > +/* Update the state variable, set new timestamp. */
> > +static bool umcg_update_state(uint64_t *state, uint64_t *prev, uint64_t next)
> > +{
> > +        uint64_t prev_ts = (*prev) >> (64 - UMCG_STATE_TIMESTAMP_BITS);
> > +        struct timespec now;
> > +        uint64_t next_ts;
> > +        int res;
> > +
> > +        /*
> > +         * clock_gettime(CLOCK_MONOTONIC, ...) takes less than 20ns on a
> > +         * typical Intel processor on average, even when run concurrently,
> > +         * so the overhead is low enough for most applications.
> > +         *
> > +         * If this is still too high, `next_ts = prev_ts + 1` should work
> > +         * as well. The only real requirement is that the "timestamps" are
> > +         * unique per thread within a reasonable time frame.
> > +         */
> > +        res = clock_gettime(CLOCK_MONOTONIC, &now);
> > +        assert(!res);
> > +        next_ts = (now.tv_sec * NSEC_PER_SEC + now.tv_nsec) >>
> > +                        UMCG_STATE_TIMESTAMP_GRANULARITY;
> > +
> > +        /* Cut higher-order bits. */
> > +        next_ts &= ((1ULL << UMCG_STATE_TIMESTAMP_BITS) - 1);
>
> This is the right cut. The same applies on the kernel side.

Yes, thanks!

> > +
> > +        if (next_ts == prev_ts)
> > +                ++next_ts;
> > +
> > +#ifndef NDEBUG
> > +        if (prev_ts > next_ts) {
> > +                fprintf(stderr, "%s: time goes back: prev_ts: %lu "
> > +                        "next_ts: %lu diff: %lu\n", __func__,
> > +                        prev_ts, next_ts, prev_ts - next_ts);
> > +        }
> > +#endif
> > +
> > +        /* Remove the old timestamp, if any. */
> > +        next &= ((1ULL << (64 - UMCG_STATE_TIMESTAMP_BITS)) - 1);
> > +
> > +        /* Set the new timestamp. */
> > +        next |= (next_ts << (64 - UMCG_STATE_TIMESTAMP_BITS));
> > +
> > +        /*
> > +         * TODO: review whether the memory order below can be weakened to
> > +         * memory_order_acq_rel for success and memory_order_acquire for
> > +         * failure.
> > +         */
> > +        return atomic_compare_exchange_strong_explicit(state, prev, next,
> > +                        memory_order_seq_cst, memory_order_seq_cst);
> > +}
> > +

> > +static void task_unlock(struct umcg_task_tls *task, uint64_t expected_state,
> > +                        uint64_t new_state)
> > +{
> > +        bool ok;
> > +        uint64_t next;
> > +        uint64_t prev = atomic_load_explicit(&task->umcg_task.state_ts,
> > +                        memory_order_acquire);
> > +
> > +        next = ((prev & ~UMCG_TASK_STATE_MASK_FULL) | new_state) & ~UMCG_TF_LOCKED;
>
> Use UMCG_TASK_STATE_MASK instead, and the other state flags can be checked.

Why? We want to clear the TF_LOCKED flag and keep every other bit of
the state, including the other state flags (but excluding the
timestamp).

> All the other places that use UMCG_TASK_STATE_MASK_FULL to mask and
> check the task state seem reasonable if the state flags are not
> allowed to be set when we check that task state; otherwise
> UMCG_TASK_STATE_MASK would be enough.
>
> Not sure.
>
> Thanks,
> Tao

> > +        assert(next != prev);
> > +        assert((prev & UMCG_TASK_STATE_MASK_FULL & ~UMCG_TF_LOCKED) == expected_state);
> > +
> > +        ok = umcg_update_state(&task->umcg_task.state_ts, &prev, next);
> > +        assert(ok);
> > +}
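For readers following the bit-twiddling in this thread, here is a small, self-contained sketch of the word layout being discussed: the low bits of the 64-bit state word hold the state value plus flags, and the high UMCG_STATE_TIMESTAMP_BITS hold a truncated monotonic timestamp. The constants TS_BITS, STATE_MASK_FULL, and TF_LOCKED below are illustrative stand-ins chosen for the example, not the real UMCG values.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-ins; the real values live in the UMCG headers. */
#define TS_BITS         46u             /* like UMCG_STATE_TIMESTAMP_BITS */
#define STATE_MASK_FULL 0x3ffULL        /* like UMCG_TASK_STATE_MASK_FULL */
#define TF_LOCKED       (1ULL << 9)     /* a flag bit inside the full mask */

/* Install a timestamp in the high TS_BITS of a state word. */
static uint64_t set_ts(uint64_t word, uint64_t ts)
{
        ts &= (1ULL << TS_BITS) - 1;            /* cut higher-order ts bits */
        word &= (1ULL << (64 - TS_BITS)) - 1;   /* remove the old timestamp */
        return word | (ts << (64 - TS_BITS));   /* set the new one */
}

/* Recover the timestamp from the high bits. */
static uint64_t get_ts(uint64_t word)
{
        return word >> (64 - TS_BITS);
}

/* Recover the state + flags from the low bits. */
static uint64_t get_state(uint64_t word)
{
        return word & ((1ULL << (64 - TS_BITS)) - 1);
}

/* The masking done in task_unlock(): replace state bits, clear TF_LOCKED. */
static uint64_t unlock_word(uint64_t prev, uint64_t new_state)
{
        return ((prev & ~STATE_MASK_FULL) | new_state) & ~TF_LOCKED;
}

/* The CAS that umcg_update_state() ends with, on a C11 atomic word. */
static bool try_update(_Atomic uint64_t *state, uint64_t *prev, uint64_t next)
{
        return atomic_compare_exchange_strong_explicit(state, prev, next,
                        memory_order_seq_cst, memory_order_seq_cst);
}
```

With TS_BITS = 46, set_ts(0x3, 12345) keeps state 0x3 in the low 18 bits and places 12345 in the high 46; unlock_word() preserves the timestamp because STATE_MASK_FULL covers only the low bits.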