From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Dave Hansen <dave@sr71.net>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	Andrea Arcangeli <aarcange@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Al Viro <viro@zeniv.linux.org.uk>,
	Hugh Dickins <hughd@google.com>,
	Wu Fengguang <fengguang.wu@intel.com>, Jan Kara <jack@suse.cz>,
	Mel Gorman <mgorman@suse.de>,
	linux-mm@kvack.org, Andi Kleen <ak@linux.intel.com>,
	Matthew Wilcox <matthew.r.wilcox@intel.com>,
	"Kirill A. Shutemov" <kirill@shutemov.name>,
	Hillf Danton <dhillf@gmail.com>,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCHv4 23/39] thp: wait_split_huge_page(): serialize over i_mmap_mutex too
Date: Mon,  3 Jun 2013 18:02:14 +0300 (EEST)
Message-ID: <20130603150214.54C34E0090@blue.fi.intel.com>
In-Reply-To: <519BEFAE.1080800@sr71.net>

Dave Hansen wrote:
> On 05/11/2013 06:23 PM, Kirill A. Shutemov wrote:
> > From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
> > 
> > Since we're going to have huge pages backed by files,
> > wait_split_huge_page() has to serialize not only over anon_vma_lock,
> > but over i_mmap_mutex too.
> ...
> > -#define wait_split_huge_page(__anon_vma, __pmd)				\
> > +#define wait_split_huge_page(__vma, __pmd)				\
> >  	do {								\
> >  		pmd_t *____pmd = (__pmd);				\
> > -		anon_vma_lock_write(__anon_vma);			\
> > -		anon_vma_unlock_write(__anon_vma);			\
> > +		struct address_space *__mapping =			\
> > +					vma->vm_file->f_mapping;	\
> > +		struct anon_vma *__anon_vma = (__vma)->anon_vma;	\
> > +		if (__mapping)						\
> > +			mutex_lock(&__mapping->i_mmap_mutex);		\
> > +		if (__anon_vma) {					\
> > +			anon_vma_lock_write(__anon_vma);		\
> > +			anon_vma_unlock_write(__anon_vma);		\
> > +		}							\
> > +		if (__mapping)						\
> > +			mutex_unlock(&__mapping->i_mmap_mutex);		\
> >  		BUG_ON(pmd_trans_splitting(*____pmd) ||			\
> >  		       pmd_trans_huge(*____pmd));			\
> >  	} while (0)
> 
> Kirill, I asked about this patch in the previous series, and you wrote
> some very nice, detailed answers to my stupid questions.  But you
> didn't add any comments or update the patch description.  So, if a
> reviewer or anybody looking at the changelog in the future has the same
> stupid questions, they're unlikely to find the very nice description
> that you wrote up.
> 
> I'd highly suggest that you go back through the comments you've received
> before and make sure that you both answered the questions, *and* made
> sure to cover those questions either in the code or in the patch
> descriptions.

Will do.

> Could you also describe the lengths to which you've gone to try and keep
> this macro from growing into any larger of an abomination?  Is it truly
> _impossible_ to turn this into a normal function?  Or will it simply be
> a larger amount of work than you can do right now?  What would it take?

Okay, I've tried once again. The patch is below. It looks too invasive to
me. What do you think?
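
To spell out why it ends up this invasive, here is the dependency chain
as I read the current headers (not part of the patch, so worth
double-checking):

	mm.h
	    #include <linux/huge_mm.h>	/* early, before rmap.h */
	huge_mm.h
	    a static inline wait_split_huge_page() needs:
	        struct address_space / i_mmap_mutex	-> <linux/fs.h>
	        anon_vma_lock_write()/unlock_write()	-> currently in rmap.h
	        a complete struct anon_vma		-> currently in rmap.h
	rmap.h
	    #include <linux/mm.h>		/* so huge_mm.h can't simply
						   include rmap.h without
						   creating a cycle */

That's why the patch moves struct anon_vma to mm_types.h, moves the
anon_vma lock/unlock helpers to mm.h, and only includes huge_mm.h from
mm.h after those helpers are defined.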

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 19c8c14..7ed4412 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -1,6 +1,8 @@
 #ifndef _LINUX_HUGE_MM_H
 #define _LINUX_HUGE_MM_H
 
+#include <linux/fs.h>
+
 extern int do_huge_pmd_anonymous_page(struct mm_struct *mm,
 				      struct vm_area_struct *vma,
 				      unsigned long address, pmd_t *pmd,
@@ -114,23 +116,22 @@ extern void __split_huge_page_pmd(struct vm_area_struct *vma,
 			__split_huge_page_pmd(__vma, __address,		\
 					____pmd);			\
 	}  while (0)
-#define wait_split_huge_page(__vma, __pmd)				\
-	do {								\
-		pmd_t *____pmd = (__pmd);				\
-		struct address_space *__mapping =			\
-					vma->vm_file->f_mapping;	\
-		struct anon_vma *__anon_vma = (__vma)->anon_vma;	\
-		if (__mapping)						\
-			mutex_lock(&__mapping->i_mmap_mutex);		\
-		if (__anon_vma) {					\
-			anon_vma_lock_write(__anon_vma);		\
-			anon_vma_unlock_write(__anon_vma);		\
-		}							\
-		if (__mapping)						\
-			mutex_unlock(&__mapping->i_mmap_mutex);		\
-		BUG_ON(pmd_trans_splitting(*____pmd) ||			\
-		       pmd_trans_huge(*____pmd));			\
-	} while (0)
+static inline void wait_split_huge_page(struct vm_area_struct *vma,
+		pmd_t *pmd)
+{
+	struct address_space *mapping = vma->vm_file ? vma->vm_file->f_mapping : NULL;
+
+	if (mapping)
+		mutex_lock(&mapping->i_mmap_mutex);
+	if (vma->anon_vma) {
+		anon_vma_lock_write(vma->anon_vma);
+		anon_vma_unlock_write(vma->anon_vma);
+	}
+	if (mapping)
+		mutex_unlock(&mapping->i_mmap_mutex);
+	BUG_ON(pmd_trans_splitting(*pmd));
+	BUG_ON(pmd_trans_huge(*pmd));
+}
 extern void split_huge_page_pmd_mm(struct mm_struct *mm, unsigned long address,
 		pmd_t *pmd);
 #if HPAGE_PMD_ORDER > MAX_ORDER
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0a60f28..9fc126e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -19,7 +19,6 @@
 #include <linux/shrinker.h>
 
 struct mempolicy;
-struct anon_vma;
 struct anon_vma_chain;
 struct file_ra_state;
 struct user_struct;
@@ -260,7 +259,6 @@ static inline int get_freepage_migratetype(struct page *page)
  * files which need it (119 of them)
  */
 #include <linux/page-flags.h>
-#include <linux/huge_mm.h>
 
 /*
  * Methods to modify the page usage count.
@@ -1475,6 +1473,28 @@ void anon_vma_interval_tree_verify(struct anon_vma_chain *node);
 	for (avc = anon_vma_interval_tree_iter_first(root, start, last); \
 	     avc; avc = anon_vma_interval_tree_iter_next(avc, start, last))
 
+static inline void anon_vma_lock_write(struct anon_vma *anon_vma)
+{
+	down_write(&anon_vma->root->rwsem);
+}
+
+static inline void anon_vma_unlock_write(struct anon_vma *anon_vma)
+{
+	up_write(&anon_vma->root->rwsem);
+}
+
+static inline void anon_vma_lock_read(struct anon_vma *anon_vma)
+{
+	down_read(&anon_vma->root->rwsem);
+}
+
+static inline void anon_vma_unlock_read(struct anon_vma *anon_vma)
+{
+	up_read(&anon_vma->root->rwsem);
+}
+
+#include <linux/huge_mm.h>
+
 /* mmap.c */
 extern int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin);
 extern int vma_adjust(struct vm_area_struct *vma, unsigned long start,
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index fb425aa..9805e55 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -453,4 +453,41 @@ static inline cpumask_t *mm_cpumask(struct mm_struct *mm)
 	return mm->cpu_vm_mask_var;
 }
 
+/*
+ * The anon_vma heads a list of private "related" vmas, to scan if
+ * an anonymous page pointing to this anon_vma needs to be unmapped:
+ * the vmas on the list will be related by forking, or by splitting.
+ *
+ * Since vmas come and go as they are split and merged (particularly
+ * in mprotect), the mapping field of an anonymous page cannot point
+ * directly to a vma: instead it points to an anon_vma, on whose list
+ * the related vmas can be easily linked or unlinked.
+ *
+ * After unlinking the last vma on the list, we must garbage collect
+ * the anon_vma object itself: we're guaranteed no page can be
+ * pointing to this anon_vma once its vma list is empty.
+ */
+struct anon_vma {
+	struct anon_vma *root;		/* Root of this anon_vma tree */
+	struct rw_semaphore rwsem;	/* W: modification, R: walking the list */
+	/*
+	 * The refcount is taken on an anon_vma when there is no
+	 * guarantee that the vma of page tables will exist for
+	 * the duration of the operation. A caller that takes
+	 * the reference is responsible for clearing up the
+	 * anon_vma if they are the last user on release
+	 */
+	atomic_t refcount;
+
+	/*
+	 * NOTE: the LSB of the rb_root.rb_node is set by
+	 * mm_take_all_locks() _after_ taking the above lock. So the
+	 * rb_root must only be read/written after taking the above lock
+	 * to be sure to see a valid next pointer. The LSB bit itself
+	 * is serialized by a system wide lock only visible to
+	 * mm_take_all_locks() (mm_all_locks_mutex).
+	 */
+	struct rb_root rb_root;	/* Interval tree of private "related" vmas */
+};
+
 #endif /* _LINUX_MM_TYPES_H */
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 6dacb93..22c7278 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -11,43 +11,6 @@
 #include <linux/memcontrol.h>
 
 /*
- * The anon_vma heads a list of private "related" vmas, to scan if
- * an anonymous page pointing to this anon_vma needs to be unmapped:
- * the vmas on the list will be related by forking, or by splitting.
- *
- * Since vmas come and go as they are split and merged (particularly
- * in mprotect), the mapping field of an anonymous page cannot point
- * directly to a vma: instead it points to an anon_vma, on whose list
- * the related vmas can be easily linked or unlinked.
- *
- * After unlinking the last vma on the list, we must garbage collect
- * the anon_vma object itself: we're guaranteed no page can be
- * pointing to this anon_vma once its vma list is empty.
- */
-struct anon_vma {
-	struct anon_vma *root;		/* Root of this anon_vma tree */
-	struct rw_semaphore rwsem;	/* W: modification, R: walking the list */
-	/*
-	 * The refcount is taken on an anon_vma when there is no
-	 * guarantee that the vma of page tables will exist for
-	 * the duration of the operation. A caller that takes
-	 * the reference is responsible for clearing up the
-	 * anon_vma if they are the last user on release
-	 */
-	atomic_t refcount;
-
-	/*
-	 * NOTE: the LSB of the rb_root.rb_node is set by
-	 * mm_take_all_locks() _after_ taking the above lock. So the
-	 * rb_root must only be read/written after taking the above lock
-	 * to be sure to see a valid next pointer. The LSB bit itself
-	 * is serialized by a system wide lock only visible to
-	 * mm_take_all_locks() (mm_all_locks_mutex).
-	 */
-	struct rb_root rb_root;	/* Interval tree of private "related" vmas */
-};
-
-/*
  * The copy-on-write semantics of fork mean that an anon_vma
  * can become associated with multiple processes. Furthermore,
  * each child process will have its own anon_vma, where new
@@ -118,27 +81,6 @@ static inline void vma_unlock_anon_vma(struct vm_area_struct *vma)
 		up_write(&anon_vma->root->rwsem);
 }
 
-static inline void anon_vma_lock_write(struct anon_vma *anon_vma)
-{
-	down_write(&anon_vma->root->rwsem);
-}
-
-static inline void anon_vma_unlock_write(struct anon_vma *anon_vma)
-{
-	up_write(&anon_vma->root->rwsem);
-}
-
-static inline void anon_vma_lock_read(struct anon_vma *anon_vma)
-{
-	down_read(&anon_vma->root->rwsem);
-}
-
-static inline void anon_vma_unlock_read(struct anon_vma *anon_vma)
-{
-	up_read(&anon_vma->root->rwsem);
-}
-
-
 /*
  * anon_vma helper functions.
  */
diff --git a/mm/memory.c b/mm/memory.c
index c845cf2..2f4fb39 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -589,7 +589,7 @@ int __pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 		pmd_t *pmd, unsigned long address)
 {
 	pgtable_t new = pte_alloc_one(mm, address);
-	int wait_split_huge_page;
+	int wait_split;
 	if (!new)
 		return -ENOMEM;
 
@@ -609,17 +609,17 @@ int __pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 	smp_wmb(); /* Could be smp_wmb__xxx(before|after)_spin_lock */
 
 	spin_lock(&mm->page_table_lock);
-	wait_split_huge_page = 0;
+	wait_split = 0;
 	if (likely(pmd_none(*pmd))) {	/* Has another populated it ? */
 		mm->nr_ptes++;
 		pmd_populate(mm, pmd, new);
 		new = NULL;
 	} else if (unlikely(pmd_trans_splitting(*pmd)))
-		wait_split_huge_page = 1;
+		wait_split = 1;
 	spin_unlock(&mm->page_table_lock);
 	if (new)
 		pte_free(mm, new);
-	if (wait_split_huge_page)
+	if (wait_split)
 		wait_split_huge_page(vma, pmd);
 	return 0;
 }
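
For reference, here is why the lock/unlock pair in wait_split_huge_page()
is enough (my understanding of the split path, sketched below; not part
of the patch): the splitting bit is only set and cleared while the
relevant lock is held.

	/*
	 * split_huge_page() side, roughly:
	 *
	 *	take the anon_vma lock (and, with this series, i_mmap_mutex)
	 *	mark the pmd splitting		-- __split_huge_page_splitting()
	 *	split the compound page, rewrite the page tables
	 *	install the regular pmd		-- __split_huge_page_map()
	 *	drop the lock(s)
	 */

So once wait_split_huge_page() has taken and released the same lock, any
split that marked this pmd has already completed, which is why the
BUG_ON()s at the end of the function hold.
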
-- 
 Kirill A. Shutemov
