From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 26 Jan 2022 18:59:11 +0000
From: Matthew Wilcox <willy@infradead.org>
To: Pasha Tatashin <pasha.tatashin@soleen.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-m68k@lists.linux-m68k.org, anshuman.khandual@arm.com,
	akpm@linux-foundation.org, william.kucharski@oracle.com,
	mike.kravetz@oracle.com, vbabka@suse.cz, geert@linux-m68k.org,
	schmitzmic@gmail.com, rostedt@goodmis.org, mingo@redhat.com,
	hannes@cmpxchg.org, guro@fb.com, songmuchun@bytedance.com,
	weixugc@google.com, gthelen@google.com, rientjes@google.com,
	pjt@google.com, hughd@google.com
Subject: Re: [PATCH v3 1/9] mm: add overflow and underflow checks for
 page->_refcount
References: <20220126183429.1840447-1-pasha.tatashin@soleen.com>
 <20220126183429.1840447-2-pasha.tatashin@soleen.com>
In-Reply-To: <20220126183429.1840447-2-pasha.tatashin@soleen.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Wed, Jan 26, 2022 at 06:34:21PM +0000, Pasha Tatashin wrote:
> The problems with page->_refcount are hard to debug, because usually
> when they are detected, the damage has occurred a long time ago. Yet,
> the problems with invalid page refcount may be catastrophic and lead to
> memory corruptions.
>
> Reduce the scope of when the _refcount problems manifest themselves by
> adding checks for underflows and overflows into functions that modify
> _refcount.

If you're chasing a bug like this, presumably you turn on page
tracepoints.  So could we reduce the cost of this by putting the
VM_BUG_ON_PAGE parts into __page_ref_mod() et al?  Yes, we'd need to
change the arguments to those functions to pass in old & new, but that
should be a cheap change compared to embedding the VM_BUG_ON_PAGE.

> static inline void page_ref_add(struct page *page, int nr)
> {
> -	atomic_add(nr, &page->_refcount);
> +	int old_val = atomic_fetch_add(nr, &page->_refcount);
> +	int new_val = old_val + nr;
> +
> +	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
> 	if (page_ref_tracepoint_active(page_ref_mod))
> 		__page_ref_mod(page, nr);
> }
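
Concretely, that suggestion might look something like the sketch below.
This is only an illustration, not the actual proposal: the two-argument
__page_ref_mod() signature is the hypothetical change being discussed
(the in-tree helper takes just the delta), and the pair of directional
checks generalises the patch's unsigned comparison to negative deltas.

	static inline void page_ref_add(struct page *page, int nr)
	{
		int old_val = atomic_fetch_add(nr, &page->_refcount);

		/* The fast path stays a bare atomic add; the sanity
		 * check only runs when page tracepoints are enabled. */
		if (page_ref_tracepoint_active(page_ref_mod))
			__page_ref_mod(page, old_val, old_val + nr);
	}

	/* Out of line, e.g. in mm/debug_page_ref.c: */
	void __page_ref_mod(struct page *page, int old_val, int new_val)
	{
		int nr = new_val - old_val;

		/* Unsigned wraparound in either direction means the
		 * refcount overflowed (on add) or underflowed (on sub). */
		VM_BUG_ON_PAGE(nr > 0 &&
			       (unsigned int)new_val < (unsigned int)old_val,
			       page);
		VM_BUG_ON_PAGE(nr < 0 &&
			       (unsigned int)new_val > (unsigned int)old_val,
			       page);
		trace_page_ref_mod(page, nr);
	}

The trade-off in this shape: the VM_BUG_ON_PAGE costs nothing on the
inline fast path and fires only when the page_ref_mod tracepoint is
active, which matches the debugging workflow described above (turn on
page tracepoints, then chase the refcount bug).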