From mboxrd@z Thu Jan 1 00:00:00 1970
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Date: Wed, 27 Oct 2021 14:22:08 -0400
Subject: Re: [RFC 1/8] mm: add overflow and underflow checks for page->_refcount
To: Muchun Song
Cc: LKML <linux-kernel@vger.kernel.org>, Linux Memory Management List <linux-mm@kvack.org>, linux-m68k@lists.linux-m68k.org, Anshuman Khandual, Matthew Wilcox, Andrew Morton, william.kucharski@oracle.com, Mike Kravetz, Vlastimil Babka, Geert Uytterhoeven, schmitzmic@gmail.com, Steven Rostedt, Ingo Molnar, Johannes Weiner, Roman Gushchin, weixugc@google.com, Greg Thelen
References: <20211026173822.502506-1-pasha.tatashin@soleen.com> <20211026173822.502506-2-pasha.tatashin@soleen.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"

> I found some atomic_add/dec are replaced with atomic_add/dec_return,

I am going to replace the -return variants with -fetch variants.

> those helpers with return value imply a full memory barrier around it, but
> others without return value do not. Do you have any numbers to show
> the impact? Maybe atomic_add/dec_return_relaxed can help this.

The generic implementation uses arch_cmpxchg() for all atomic variants
without any extra barriers. Therefore, on platforms that use the generic
implementation there will be no performance difference, aside from an
extra branch that checks the result when VM_BUG_ON is enabled.

On x86 the difference between the variants is the following:

atomic_add:
	lock add %eax,(%rsi)

atomic_fetch_add:
	lock xadd %eax,(%rsi)

atomic_fetch_add_relaxed:
	lock xadd %eax,(%rsi)

There is no difference between the relaxed and non-relaxed variants;
however, lock xadd is used instead of lock add. I am not sure whether
there is a measurable performance difference between the two.

Pasha