linux-mm.kvack.org archive mirror
* [PATCH AUTOSEL 4.20 34/60] kasan, slub: move kasan_poison_slab hook before page_address
       [not found] <20190313191021.158171-1-sashal@kernel.org>
@ 2019-03-13 19:09 ` Sasha Levin
  2019-03-13 19:09 ` [PATCH AUTOSEL 4.20 35/60] mm: handle lru_add_drain_all for UP properly Sasha Levin
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: Sasha Levin @ 2019-03-13 19:09 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Andrey Konovalov, Alexander Potapenko, Andrey Ryabinin,
	Catalin Marinas, Christoph Lameter, David Rientjes,
	Dmitry Vyukov, Evgeniy Stepanov, Joonsoo Kim, Kostya Serebryany,
	Pekka Enberg, Qian Cai, Vincenzo Frascino, Andrew Morton,
	Linus Torvalds, Sasha Levin, linux-mm

From: Andrey Konovalov <andreyknvl@google.com>

[ Upstream commit a71012242837fe5e67d8c999cfc357174ed5dba0 ]

With tag-based KASAN, page_address() looks at the page flags to see whether
the resulting pointer needs to have a tag set.  Since we don't want to set
a tag when page_address() is called on SLAB pages, we call
page_kasan_tag_reset() in kasan_poison_slab().  However, in allocate_slab()
page_address() is called before kasan_poison_slab().  Fix it by changing
the order.
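
For illustration only (not part of the patch, every name below is made
up, and a 64-bit address is assumed): with tag-based KASAN the tag that
page_address() attaches comes from per-page state, and kasan_poison_slab()
resets that state for slab pages, so calling the two in the wrong order
hands out a pointer that still carries a stale tag.  A minimal userspace
model of that ordering:

#include <stdint.h>
#include <stdio.h>

struct page_model { uint8_t kasan_tag; };  /* stand-in for the page flag bits */

/* Model of page_address(): embed the page's current tag in the top byte. */
static void *page_address_model(struct page_model *pg, void *raw)
{
	return (void *)(((uintptr_t)raw & ~(0xffULL << 56)) |
			((uintptr_t)pg->kasan_tag << 56));
}

/* Model of the page_kasan_tag_reset() done from kasan_poison_slab(). */
static void kasan_poison_slab_model(struct page_model *pg)
{
	pg->kasan_tag = 0xff;	/* "match-all" tag wanted for slab pages */
}

int main(void)
{
	char mem[64];
	struct page_model pg = { .kasan_tag = 0x2b };	/* tag set at allocation */

	void *early = page_address_model(&pg, mem);	/* old order: stale tag 0x2b */
	kasan_poison_slab_model(&pg);
	void *late = page_address_model(&pg, mem);	/* fixed order: reset tag 0xff */

	printf("tag before reset: %#lx, after reset: %#lx\n",
	       (unsigned long)(uintptr_t)early >> 56,
	       (unsigned long)(uintptr_t)late >> 56);
	return 0;
}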

[andreyknvl@google.com: fix compilation error when CONFIG_SLUB_DEBUG=n]
  Link: http://lkml.kernel.org/r/ac27cc0bbaeb414ed77bcd6671a877cf3546d56e.1550066133.git.andreyknvl@google.com
Link: http://lkml.kernel.org/r/cd895d627465a3f1c712647072d17f10883be2a1.1549921721.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Evgeniy Stepanov <eugenis@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Kostya Serebryany <kcc@google.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Qian Cai <cai@lca.pw>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 mm/slub.c | 19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index e3629cd7aff1..d1e053d48f46 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1075,6 +1075,16 @@ static void setup_object_debug(struct kmem_cache *s, struct page *page,
 	init_tracking(s, object);
 }
 
+static void setup_page_debug(struct kmem_cache *s, void *addr, int order)
+{
+	if (!(s->flags & SLAB_POISON))
+		return;
+
+	metadata_access_enable();
+	memset(addr, POISON_INUSE, PAGE_SIZE << order);
+	metadata_access_disable();
+}
+
 static inline int alloc_consistency_checks(struct kmem_cache *s,
 					struct page *page,
 					void *object, unsigned long addr)
@@ -1330,6 +1340,8 @@ slab_flags_t kmem_cache_flags(unsigned int object_size,
 #else /* !CONFIG_SLUB_DEBUG */
 static inline void setup_object_debug(struct kmem_cache *s,
 			struct page *page, void *object) {}
+static inline void setup_page_debug(struct kmem_cache *s,
+			void *addr, int order) {}
 
 static inline int alloc_debug_processing(struct kmem_cache *s,
 	struct page *page, void *object, unsigned long addr) { return 0; }
@@ -1640,12 +1652,11 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	if (page_is_pfmemalloc(page))
 		SetPageSlabPfmemalloc(page);
 
-	start = page_address(page);
+	kasan_poison_slab(page);
 
-	if (unlikely(s->flags & SLAB_POISON))
-		memset(start, POISON_INUSE, PAGE_SIZE << order);
+	start = page_address(page);
 
-	kasan_poison_slab(page);
+	setup_page_debug(s, start, order);
 
 	shuffle = shuffle_freelist(s, page);
 
-- 
2.19.1



* [PATCH AUTOSEL 4.20 35/60] mm: handle lru_add_drain_all for UP properly
       [not found] <20190313191021.158171-1-sashal@kernel.org>
  2019-03-13 19:09 ` [PATCH AUTOSEL 4.20 34/60] kasan, slub: move kasan_poison_slab hook before page_address Sasha Levin
@ 2019-03-13 19:09 ` Sasha Levin
  2019-03-13 19:09 ` [PATCH AUTOSEL 4.20 37/60] tmpfs: fix link accounting when a tmpfile is linked in Sasha Levin
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: Sasha Levin @ 2019-03-13 19:09 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Michal Hocko, Tejun Heo, Andrew Morton, Linus Torvalds,
	Sasha Levin, linux-mm

From: Michal Hocko <mhocko@suse.com>

[ Upstream commit 6ea183d60c469560e7b08a83c9804299e84ec9eb ]

Since for_each_cpu(cpu, mask), added by commit 2d3854a37e8b767a
("cpumask: introduce new API, without changing anything"), does not
evaluate the mask argument when NR_CPUS == 1 (CONFIG_SMP=n),
lru_add_drain_all() unconditionally calls flush_work() and hits the
WARN_ON() at __flush_work() added by commit 4d43d395fed12463
("workqueue: Try to catch flush_work() without INIT_WORK().") [1].

Work around this by using a CONFIG_SMP=n specific lru_add_drain_all()
implementation.  There is no real need to defer the draining to a
workqueue, as it is going to happen on the local CPU anyway, so alias
lru_add_drain_all() to lru_add_drain(), which does all the necessary
work.
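
To make the failure mode concrete (a userspace sketch under stated
assumptions -- none of these names are kernel symbols): on SMP the
has_work cpumask roughly gates both which per-CPU work items get queued
and which get flushed, while the UP loop stub visits CPU 0 no matter
what the mask says, so a work item that was never initialized or queued
gets flushed -- the condition the WARN_ON() in __flush_work() catches:

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 1

static bool work_queued[NR_CPUS];	/* model of INIT_WORK + queue_work_on() */
static bool has_work[NR_CPUS];		/* model of the cpumask; stays empty here */

static void flush_work_model(int cpu)
{
	if (!work_queued[cpu])
		fprintf(stderr, "WARN: flushing work never queued on cpu %d\n", cpu);
}

int main(void)
{
	/*
	 * SMP semantics: iterate only the CPUs set in has_work -> no flush.
	 * UP stub semantics: CPU 0 is visited regardless of has_work.
	 */
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		flush_work_model(cpu);		/* trips the warning above */

	(void)has_work;
	return 0;
}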

[akpm@linux-foundation.org: fix various build warnings]
[1] https://lkml.kernel.org/r/18a30387-6aa5-6123-e67c-57579ecc3f38@roeck-us.net
Link: http://lkml.kernel.org/r/20190213124334.GH4525@dhcp22.suse.cz
Signed-off-by: Michal Hocko <mhocko@suse.com>
Reported-by: Guenter Roeck <linux@roeck-us.net>
Debugged-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 mm/swap.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index aa483719922e..e99ef3dcdfd5 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -320,11 +320,6 @@ static inline void activate_page_drain(int cpu)
 {
 }
 
-static bool need_activate_page_drain(int cpu)
-{
-	return false;
-}
-
 void activate_page(struct page *page)
 {
 	struct zone *zone = page_zone(page);
@@ -653,13 +648,15 @@ void lru_add_drain(void)
 	put_cpu();
 }
 
+#ifdef CONFIG_SMP
+
+static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work);
+
 static void lru_add_drain_per_cpu(struct work_struct *dummy)
 {
 	lru_add_drain();
 }
 
-static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work);
-
 /*
  * Doesn't need any cpu hotplug locking because we do rely on per-cpu
  * kworkers being shut down before our page_alloc_cpu_dead callback is
@@ -702,6 +699,12 @@ void lru_add_drain_all(void)
 
 	mutex_unlock(&lock);
 }
+#else
+void lru_add_drain_all(void)
+{
+	lru_add_drain();
+}
+#endif
 
 /**
  * release_pages - batched put_page()
-- 
2.19.1



* [PATCH AUTOSEL 4.20 37/60] tmpfs: fix link accounting when a tmpfile is linked in
       [not found] <20190313191021.158171-1-sashal@kernel.org>
  2019-03-13 19:09 ` [PATCH AUTOSEL 4.20 34/60] kasan, slub: move kasan_poison_slab hook before page_address Sasha Levin
  2019-03-13 19:09 ` [PATCH AUTOSEL 4.20 35/60] mm: handle lru_add_drain_all for UP properly Sasha Levin
@ 2019-03-13 19:09 ` Sasha Levin
  2019-03-13 19:58   ` Hugh Dickins
  2019-03-13 19:09 ` [PATCH AUTOSEL 4.20 38/60] kasan, slab: fix conflicts with CONFIG_HARDENED_USERCOPY Sasha Levin
  2019-03-13 19:10 ` [PATCH AUTOSEL 4.20 39/60] kasan, slab: make freelist stored without tags Sasha Levin
  4 siblings, 1 reply; 7+ messages in thread
From: Sasha Levin @ 2019-03-13 19:09 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Darrick J. Wong, Hugh Dickins, Andrew Morton, Linus Torvalds,
	Sasha Levin, linux-mm

From: "Darrick J. Wong" <darrick.wong@oracle.com>

[ Upstream commit 1062af920c07f5b54cf5060fde3339da6df0cf6b ]

tmpfs has a peculiarity of accounting hard links as if they were
separate inodes: so that when the number of inodes is limited, as it is
by default, a user cannot soak up an unlimited amount of unreclaimable
dcache memory just by repeatedly linking a file.

But when v3.11 added O_TMPFILE, and the ability to use linkat() on the
fd, we missed accommodating this new case in tmpfs: "df -i" shows that
an extra "inode" remains accounted after the file is unlinked and the fd
closed and the actual inode evicted.  If a user repeatedly links
tmpfiles into a tmpfs, the limit will be hit (ENOSPC) even after they
are deleted.

Just skip the extra reservation from shmem_link() in this case: there's
a sense in which this first link of a tmpfile is then cheaper than a
hard link of another file, but the accounting works out, and there's
still good limiting, so no need to do anything more complicated.
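
For reference, a reproducer sketch of the scenario above (the /tmp path
and mount point are assumptions -- point it at any tmpfs mount, with
/proc available): create a tmpfile, give it a name with linkat(), delete
it again, and compare "df -i" before and after; without this fix the
used-inode count stays one higher per run until ENOSPC.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char proc_path[64];
	int fd;

	fd = open("/tmp", O_TMPFILE | O_RDWR, 0600);	/* assumes /tmp is tmpfs */
	if (fd < 0) {
		perror("open(O_TMPFILE)");
		return 1;
	}

	/* Link the anonymous inode in via its /proc/self/fd entry... */
	snprintf(proc_path, sizeof(proc_path), "/proc/self/fd/%d", fd);
	if (linkat(AT_FDCWD, proc_path, AT_FDCWD, "/tmp/linked-tmpfile",
		   AT_SYMLINK_FOLLOW))
		perror("linkat");

	/* ...then delete it again and drop the last reference. */
	unlink("/tmp/linked-tmpfile");
	close(fd);

	/*
	 * Without the fix, "df -i /tmp" now reports one more used inode
	 * than before this program ran, even though the file is gone.
	 */
	return 0;
}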

Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1902182134370.7035@eggly.anvils
Fixes: f4e0c30c191 ("allow the temp files created by open() to be linked to")
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Hugh Dickins <hughd@google.com>
Reported-by: Matej Kupljen <matej.kupljen@gmail.com>
Acked-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 mm/shmem.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 5d07e0b1352f..7872e3b75e57 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2852,10 +2852,14 @@ static int shmem_link(struct dentry *old_dentry, struct inode *dir, struct dentr
 	 * No ordinary (disk based) filesystem counts links as inodes;
 	 * but each new link needs a new dentry, pinning lowmem, and
 	 * tmpfs dentries cannot be pruned until they are unlinked.
+	 * But if an O_TMPFILE file is linked into the tmpfs, the
+	 * first link must skip that, to get the accounting right.
 	 */
-	ret = shmem_reserve_inode(inode->i_sb);
-	if (ret)
-		goto out;
+	if (inode->i_nlink) {
+		ret = shmem_reserve_inode(inode->i_sb);
+		if (ret)
+			goto out;
+	}
 
 	dir->i_size += BOGO_DIRENT_SIZE;
 	inode->i_ctime = dir->i_ctime = dir->i_mtime = current_time(inode);
-- 
2.19.1



* [PATCH AUTOSEL 4.20 38/60] kasan, slab: fix conflicts with CONFIG_HARDENED_USERCOPY
       [not found] <20190313191021.158171-1-sashal@kernel.org>
                   ` (2 preceding siblings ...)
  2019-03-13 19:09 ` [PATCH AUTOSEL 4.20 37/60] tmpfs: fix link accounting when a tmpfile is linked in Sasha Levin
@ 2019-03-13 19:09 ` Sasha Levin
  2019-03-13 19:10 ` [PATCH AUTOSEL 4.20 39/60] kasan, slab: make freelist stored without tags Sasha Levin
  4 siblings, 0 replies; 7+ messages in thread
From: Sasha Levin @ 2019-03-13 19:09 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Andrey Konovalov, Alexander Potapenko, Andrey Ryabinin,
	Catalin Marinas, Dmitry Vyukov, Evgeniy Stepanov,
	Kostya Serebryany, Vincenzo Frascino, Andrew Morton,
	Linus Torvalds, Sasha Levin, linux-mm

From: Andrey Konovalov <andreyknvl@google.com>

[ Upstream commit 219667c23c68eb3dbc0d5662b9246f28477fe529 ]

Similarly to commit 96fedce27e13 ("kasan: make tag based mode work with
CONFIG_HARDENED_USERCOPY"), we need to reset pointer tags in
__check_heap_object() in mm/slab.c before doing any pointer math.
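
For context, a hedged illustration of why the reset matters (userspace
model on a 64-bit address, not the kernel helper): __check_heap_object()
derives the object index from the distance between ptr and the untagged
slab page address, and if ptr still carries a KASAN tag in its top byte
that distance -- and every check built on it -- is garbage.

#include <stdint.h>
#include <stdio.h>

#define OBJ_SIZE 64

/*
 * Model of kasan_reset_tag(): drop the tag from the top byte.  (In the
 * kernel the reset value is the match-all tag that untagged kernel
 * pointers already carry; here "untagged" simply means top byte zero.)
 */
static uintptr_t reset_tag_model(uintptr_t ptr)
{
	return ptr & ~(0xffULL << 56);
}

int main(void)
{
	char slab[4096];
	uintptr_t base = (uintptr_t)slab;	/* what page_address() yields */
	uintptr_t obj = (base + 3 * OBJ_SIZE) | (0x5aULL << 56);  /* tagged ptr */

	printf("objnr with tag kept:  %llu\n",
	       (unsigned long long)((obj - base) / OBJ_SIZE));
	printf("objnr with tag reset: %llu\n",
	       (unsigned long long)((reset_tag_model(obj) - base) / OBJ_SIZE));
	return 0;
}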

Link: http://lkml.kernel.org/r/9a5c0f958db10e69df5ff9f2b997866b56b7effc.1550602886.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Qian Cai <cai@lca.pw>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Evgeniy Stepanov <eugenis@google.com>
Cc: Kostya Serebryany <kcc@google.com>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 mm/slab.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/slab.c b/mm/slab.c
index 9d5de959d9d9..05f21f736be8 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -4421,6 +4421,8 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
 	unsigned int objnr;
 	unsigned long offset;
 
+	ptr = kasan_reset_tag(ptr);
+
 	/* Find and validate object. */
 	cachep = page->slab_cache;
 	objnr = obj_to_index(cachep, page, (void *)ptr);
-- 
2.19.1



* [PATCH AUTOSEL 4.20 39/60] kasan, slab: make freelist stored without tags
       [not found] <20190313191021.158171-1-sashal@kernel.org>
                   ` (3 preceding siblings ...)
  2019-03-13 19:09 ` [PATCH AUTOSEL 4.20 38/60] kasan, slab: fix conflicts with CONFIG_HARDENED_USERCOPY Sasha Levin
@ 2019-03-13 19:10 ` Sasha Levin
  4 siblings, 0 replies; 7+ messages in thread
From: Sasha Levin @ 2019-03-13 19:10 UTC (permalink / raw)
  To: linux-kernel, stable
  Cc: Andrey Konovalov, Alexander Potapenko, Andrey Ryabinin,
	Catalin Marinas, Dmitry Vyukov, Evgeniy Stepanov,
	Kostya Serebryany, Vincenzo Frascino, Andrew Morton,
	Linus Torvalds, Sasha Levin, linux-mm

From: Andrey Konovalov <andreyknvl@google.com>

[ Upstream commit 51dedad06b5f6c3eea7ec1069631b1ef7796912a ]

Similarly to "kasan, slub: move kasan_poison_slab hook before
page_address", move kasan_poison_slab() before alloc_slabmgmt(), which
calls page_address(), to make page_address() return value to be
non-tagged.  This, combined with calling kasan_reset_tag() for off-slab
slab management object, leads to freelist being stored non-tagged.
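
One way to see why the stored freelist must agree with page_address()
about tags (a generic userspace illustration, not the actual SLAB code
paths; 64-bit addresses assumed): a pointer tagged in its top byte no
longer falls inside the address range derived from the untagged page
address, so any range or equality check mixing the two goes wrong.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SZ 4096

/* Put a software KASAN-style tag into the top byte of an address. */
static uintptr_t set_tag(uintptr_t ptr, uint8_t tag)
{
	return (ptr & ~(0xffULL << 56)) | ((uintptr_t)tag << 56);
}

static bool within_slab(uintptr_t ptr, uintptr_t base)
{
	return ptr >= base && ptr < base + PAGE_SZ;
}

int main(void)
{
	char page[PAGE_SZ];
	uintptr_t base = (uintptr_t)page;	/* untagged page_address() */
	uintptr_t freelist = base + 128;	/* management object in the page */

	printf("untagged freelist inside slab: %d\n", within_slab(freelist, base));
	printf("tagged   freelist inside slab: %d\n",
	       within_slab(set_tag(freelist, 0x7e), base));
	return 0;
}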

Link: http://lkml.kernel.org/r/dfb53b44a4d00de3879a05a9f04c1f55e584f7a1.1550602886.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Qian Cai <cai@lca.pw>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Evgeniy Stepanov <eugenis@google.com>
Cc: Kostya Serebryany <kcc@google.com>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 mm/slab.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/mm/slab.c b/mm/slab.c
index 05f21f736be8..b85524f2ab35 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2381,6 +2381,7 @@ static void *alloc_slabmgmt(struct kmem_cache *cachep,
 		/* Slab management obj is off-slab. */
 		freelist = kmem_cache_alloc_node(cachep->freelist_cache,
 					      local_flags, nodeid);
+		freelist = kasan_reset_tag(freelist);
 		if (!freelist)
 			return NULL;
 	} else {
@@ -2694,6 +2695,13 @@ static struct page *cache_grow_begin(struct kmem_cache *cachep,
 
 	offset *= cachep->colour_off;
 
+	/*
+	 * Call kasan_poison_slab() before calling alloc_slabmgmt(), so
+	 * page_address() in the latter returns a non-tagged pointer,
+	 * as it should be for slab pages.
+	 */
+	kasan_poison_slab(page);
+
 	/* Get slab management. */
 	freelist = alloc_slabmgmt(cachep, page, offset,
 			local_flags & ~GFP_CONSTRAINT_MASK, page_node);
@@ -2702,7 +2710,6 @@ static struct page *cache_grow_begin(struct kmem_cache *cachep,
 
 	slab_map_pages(cachep, page, freelist);
 
-	kasan_poison_slab(page);
 	cache_init_objs(cachep, page);
 
 	if (gfpflags_allow_blocking(local_flags))
-- 
2.19.1



* Re: [PATCH AUTOSEL 4.20 37/60] tmpfs: fix link accounting when a tmpfile is linked in
  2019-03-13 19:09 ` [PATCH AUTOSEL 4.20 37/60] tmpfs: fix link accounting when a tmpfile is linked in Sasha Levin
@ 2019-03-13 19:58   ` Hugh Dickins
  2019-03-19 20:07     ` Sasha Levin
  0 siblings, 1 reply; 7+ messages in thread
From: Hugh Dickins @ 2019-03-13 19:58 UTC (permalink / raw)
  To: Sasha Levin
  Cc: linux-kernel, stable, Darrick J. Wong, Hugh Dickins,
	Andrew Morton, Linus Torvalds, linux-mm

AUTOSEL is wrong to select this commit without also selecting
29b00e609960 ("tmpfs: fix uninitialized return value in shmem_link")
which contains the tag
Fixes: 1062af920c07 ("tmpfs: fix link accounting when a tmpfile is linked in")
Please add 29b00e609960 for those 6 trees, or else omit 1062af920c07 for now.

Thanks,
Hugh

On Wed, 13 Mar 2019, Sasha Levin wrote:

> From: "Darrick J. Wong" <darrick.wong@oracle.com>
> 
> [ Upstream commit 1062af920c07f5b54cf5060fde3339da6df0cf6b ]
> 
> tmpfs has a peculiarity of accounting hard links as if they were
> separate inodes: so that when the number of inodes is limited, as it is
> by default, a user cannot soak up an unlimited amount of unreclaimable
> dcache memory just by repeatedly linking a file.
> 
> But when v3.11 added O_TMPFILE, and the ability to use linkat() on the
> fd, we missed accommodating this new case in tmpfs: "df -i" shows that
> an extra "inode" remains accounted after the file is unlinked and the fd
> closed and the actual inode evicted.  If a user repeatedly links
> tmpfiles into a tmpfs, the limit will be hit (ENOSPC) even after they
> are deleted.
> 
> Just skip the extra reservation from shmem_link() in this case: there's
> a sense in which this first link of a tmpfile is then cheaper than a
> hard link of another file, but the accounting works out, and there's
> still good limiting, so no need to do anything more complicated.
> 
> Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1902182134370.7035@eggly.anvils
> Fixes: f4e0c30c191 ("allow the temp files created by open() to be linked to")
> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
> Signed-off-by: Hugh Dickins <hughd@google.com>
> Reported-by: Matej Kupljen <matej.kupljen@gmail.com>
> Acked-by: Al Viro <viro@zeniv.linux.org.uk>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
> Signed-off-by: Sasha Levin <sashal@kernel.org>
> ---
>  mm/shmem.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 5d07e0b1352f..7872e3b75e57 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -2852,10 +2852,14 @@ static int shmem_link(struct dentry *old_dentry, struct inode *dir, struct dentr
>  	 * No ordinary (disk based) filesystem counts links as inodes;
>  	 * but each new link needs a new dentry, pinning lowmem, and
>  	 * tmpfs dentries cannot be pruned until they are unlinked.
> +	 * But if an O_TMPFILE file is linked into the tmpfs, the
> +	 * first link must skip that, to get the accounting right.
>  	 */
> -	ret = shmem_reserve_inode(inode->i_sb);
> -	if (ret)
> -		goto out;
> +	if (inode->i_nlink) {
> +		ret = shmem_reserve_inode(inode->i_sb);
> +		if (ret)
> +			goto out;
> +	}
>  
>  	dir->i_size += BOGO_DIRENT_SIZE;
>  	inode->i_ctime = dir->i_ctime = dir->i_mtime = current_time(inode);
> -- 
> 2.19.1



* Re: [PATCH AUTOSEL 4.20 37/60] tmpfs: fix link accounting when a tmpfile is linked in
  2019-03-13 19:58   ` Hugh Dickins
@ 2019-03-19 20:07     ` Sasha Levin
  0 siblings, 0 replies; 7+ messages in thread
From: Sasha Levin @ 2019-03-19 20:07 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: linux-kernel, stable, Darrick J. Wong, Andrew Morton,
	Linus Torvalds, linux-mm

On Wed, Mar 13, 2019 at 12:58:26PM -0700, Hugh Dickins wrote:
>AUTOSEL is wrong to select this commit without also selecting
>29b00e609960 ("tmpfs: fix uninitialized return value in shmem_link")
>which contains the tag
>Fixes: 1062af920c07 ("tmpfs: fix link accounting when a tmpfile is linked in")
>Please add 29b00e609960 for those 6 trees, or else omit 1062af920c07 for now.

I usually look up relevant "Fixes" tags right before I queue patches up
to avoid missing things that came up recently.

I've queued 29b00e609960 on top for all trees, thank you!

--
Thanks,
Sasha

