From: Catalin Marinas <catalin.marinas@arm.com>
To: Petr Tesarik <ptesarik@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>,
Feng Tang <feng.tang@linux.alibaba.com>,
Harry Yoo <harry.yoo@oracle.com>, Peng Fan <peng.fan@nxp.com>,
Hyeonggon Yoo <42.hyeyoo@gmail.com>,
David Rientjes <rientjes@google.com>,
Christoph Lameter <cl@linux.com>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
Robin Murphy <robin.murphy@arm.com>,
Sean Christopherson <seanjc@google.com>,
Halil Pasic <pasic@linux.ibm.com>
Subject: Re: slub - extended kmalloc redzone and dma alignment
Date: Wed, 9 Apr 2025 10:47:36 +0100
Message-ID: <Z_ZCOLqxIVO0K5x3@arm.com>
In-Reply-To: <20250409110529.3ad65b3c@mordecai>
On Wed, Apr 09, 2025 at 11:05:29AM +0200, Petr Tesarik wrote:
> On Wed, 9 Apr 2025 10:39:04 +0200
> Petr Tesarik <ptesarik@suse.com> wrote:
> > I believe there is potential for a nasty race condition, and maybe even
> > info leak. Consider this:
> >
> > 1. DMA buffer is allocated by kmalloc(). The memory area previously
> > contained sensitive information, which had been written to main
> > memory.
> > 2. The DMA buffer is initialized with zeroes, but this new content
> > stays in a CPU cache (because this is kernel memory with a
> > write-behind cache policy).
> > behind cache policy).
> > 3. DMA is set up, but nothing is written to main memory by the
> > bus-mastering device.
> > 4. The CPU cache line is now discarded in arch_sync_dma_for_cpu().
> >
> > IIUC the zeroes were never written to main memory, and previous content
> > can now be read by the CPU through the DMA buffer.
> >
> > I haven't checked if any architecture is affected, but I strongly
> > believe that the CPU cache MUST be flushed both before and after the
> > DMA transfer. Any architecture which does not do it that way should be
> > fixed.
> >
> > Or did I miss a crucial detail (again)?
>
> Just after sending this, I realized I did. :(
>
> There is a step between 2 and 3:
>
> 2a. arch_sync_dma_for_device() invalidates the CPU cache line.
> Architectures which do not write previous content to main memory
> effectively undo the zeroing here.
Good point, that's a problem on those architectures that invalidate the
caches in arch_sync_dma_for_device(). We fixed it for arm64 in 5.19 -
c50f11c6196f ("arm64: mm: Don't invalidate FROM_DEVICE buffers at start
of DMA transfer") - for the same reasons, information leak.
So we could ignore all those architectures. If people complain about
redzone failures, we can ask them to fix their arch code. A crude
attempt at fixing those is below; I skipped powerpc, and for arch/arm I
only addressed cache-v7. Completely untested.
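To spell out the failure mode being fixed, Petr's sequence in driver
terms (hypothetical driver code, illustrative calls only):

	void *buf;
	dma_addr_t dma;

	buf = kmalloc(size, GFP_KERNEL);	/* 1. old data still in RAM */
	memset(buf, 0, size);			/* 2. zeros in dirty lines */
	dma = dma_map_single(dev, buf, size, DMA_FROM_DEVICE);
						/* 2a. inv-only arches throw
						 * away the zeroing here */
	/* 3. the device never writes the buffer */
	dma_unmap_single(dev, dma, size, DMA_FROM_DEVICE);
						/* 4. post-DMA invalidate */
	/* CPU reads of buf now return the stale pre-kmalloc() contents */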
But I wonder whether it's easier to fix the callers of
arch_sync_dma_for_device and always pass DMA_BIDIRECTIONAL for security
reasons:
----------------------------8<------------------------------
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index cb7e29dcac15..73ee3826a825 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1103,7 +1103,7 @@ void iommu_dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
swiotlb_sync_single_for_device(dev, phys, size, dir);
if (!dev_is_dma_coherent(dev))
- arch_sync_dma_for_device(phys, size, dir);
+ arch_sync_dma_for_device(phys, size, DMA_BIDIRECTIONAL);
}
void iommu_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sgl,
@@ -1134,7 +1134,8 @@ void iommu_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
sg->length, dir);
else if (!dev_is_dma_coherent(dev))
for_each_sg(sgl, sg, nelems, i)
- arch_sync_dma_for_device(sg_phys(sg), sg->length, dir);
+ arch_sync_dma_for_device(sg_phys(sg), sg->length,
+ DMA_BIDIRECTIONAL);
}
dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
@@ -1189,7 +1190,7 @@ dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
}
if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
- arch_sync_dma_for_device(phys, size, dir);
+ arch_sync_dma_for_device(phys, size, DMA_BIDIRECTIONAL);
iova = __iommu_dma_map(dev, phys, size, prot, dma_mask);
if (iova == DMA_MAPPING_ERROR)
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 1f65795cf5d7..6e508d7f4010 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -247,7 +247,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
done:
if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dev_addr))))
- arch_sync_dma_for_device(phys, size, dir);
+ arch_sync_dma_for_device(phys, size, DMA_BIDIRECTIONAL);
else
xen_dma_sync_for_device(dev, dev_addr, size, dir);
}
@@ -316,7 +316,7 @@ xen_swiotlb_sync_single_for_device(struct device *dev, dma_addr_t dma_addr,
if (!dev_is_dma_coherent(dev)) {
if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dma_addr))))
- arch_sync_dma_for_device(paddr, size, dir);
+ arch_sync_dma_for_device(paddr, size, DMA_BIDIRECTIONAL);
else
xen_dma_sync_for_device(dev, dma_addr, size, dir);
}
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index b8fe0b3d0ffb..f4e8d23fd086 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -408,7 +408,7 @@ void dma_direct_sync_sg_for_device(struct device *dev,
if (!dev_is_dma_coherent(dev))
arch_sync_dma_for_device(paddr, sg->length,
- dir);
+ DMA_BIDIRECTIONAL);
}
}
#endif
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index d2c0b7e632fc..d5f575e1a623 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -61,7 +61,7 @@ static inline void dma_direct_sync_single_for_device(struct device *dev,
swiotlb_sync_single_for_device(dev, paddr, size, dir);
if (!dev_is_dma_coherent(dev))
- arch_sync_dma_for_device(paddr, size, dir);
+ arch_sync_dma_for_device(paddr, size, DMA_BIDIRECTIONAL);
}
static inline void dma_direct_sync_single_for_cpu(struct device *dev,
@@ -107,7 +107,7 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
}
if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
- arch_sync_dma_for_device(phys, size, dir);
+ arch_sync_dma_for_device(phys, size, DMA_BIDIRECTIONAL);
return dma_addr;
}
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index abcf3fa63a56..1e21bd65b08c 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -1598,7 +1598,7 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size,
}
if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
- arch_sync_dma_for_device(swiotlb_addr, size, dir);
+ arch_sync_dma_for_device(swiotlb_addr, size, DMA_BIDIRECTIONAL);
return dma_addr;
}
----------------------------8<------------------------------
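The effect of the above, as I read it: with DMA_BIDIRECTIONAL forced,
every non-coherent implementation does at least a write-back before the
transfer, so freshly written data (zeroing, slub redzones) reaches main
memory even for DMA_FROM_DEVICE mappings.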
And here is the partial change for most arches, though I'd rather go
with the above:
----------------------------8<------------------------------
diff --git a/arch/arc/mm/dma.c b/arch/arc/mm/dma.c
index 6b85e94f3275..2902b3378b21 100644
--- a/arch/arc/mm/dma.c
+++ b/arch/arc/mm/dma.c
@@ -51,22 +51,7 @@ void arch_dma_prep_coherent(struct page *page, size_t size)
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
- switch (dir) {
- case DMA_TO_DEVICE:
- dma_cache_wback(paddr, size);
- break;
-
- case DMA_FROM_DEVICE:
- dma_cache_inv(paddr, size);
- break;
-
- case DMA_BIDIRECTIONAL:
- dma_cache_wback_inv(paddr, size);
- break;
-
- default:
- break;
- }
+ dma_cache_wback(paddr, size);
}
void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
diff --git a/arch/arm/mm/cache-v7.S b/arch/arm/mm/cache-v7.S
index 201ca05436fa..3787c4b839dd 100644
--- a/arch/arm/mm/cache-v7.S
+++ b/arch/arm/mm/cache-v7.S
@@ -441,8 +441,6 @@ SYM_FUNC_END(v7_dma_flush_range)
*/
SYM_TYPED_FUNC_START(v7_dma_map_area)
add r1, r1, r0
- teq r2, #DMA_FROM_DEVICE
- beq v7_dma_inv_range
b v7_dma_clean_range
SYM_FUNC_END(v7_dma_map_area)
diff --git a/arch/arm/mm/dma-mapping-nommu.c b/arch/arm/mm/dma-mapping-nommu.c
index fecac107fd0d..b2432726b082 100644
--- a/arch/arm/mm/dma-mapping-nommu.c
+++ b/arch/arm/mm/dma-mapping-nommu.c
@@ -17,11 +17,7 @@ void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
dmac_map_area(__va(paddr), size, dir);
-
- if (dir == DMA_FROM_DEVICE)
- outer_inv_range(paddr, paddr + size);
- else
- outer_clean_range(paddr, paddr + size);
+ outer_clean_range(paddr, paddr + size);
}
void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 88c2d68a69c9..ceae4c027f53 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -682,13 +682,8 @@ static void __dma_page_cpu_to_dev(struct page *page, unsigned long off,
phys_addr_t paddr;
dma_cache_maint_page(page, off, size, dir, dmac_map_area);
-
paddr = page_to_phys(page) + off;
- if (dir == DMA_FROM_DEVICE) {
- outer_inv_range(paddr, paddr + size);
- } else {
- outer_clean_range(paddr, paddr + size);
- }
+ outer_clean_range(paddr, paddr + size);
/* FIXME: non-speculating: flush on bidirectional mappings? */
}
diff --git a/arch/csky/mm/dma-mapping.c b/arch/csky/mm/dma-mapping.c
index 82447029feb4..3862a56cb3ac 100644
--- a/arch/csky/mm/dma-mapping.c
+++ b/arch/csky/mm/dma-mapping.c
@@ -58,17 +58,7 @@ void arch_dma_prep_coherent(struct page *page, size_t size)
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
- switch (dir) {
- case DMA_TO_DEVICE:
- cache_op(paddr, size, dma_wb_range);
- break;
- case DMA_FROM_DEVICE:
- case DMA_BIDIRECTIONAL:
- cache_op(paddr, size, dma_wbinv_range);
- break;
- default:
- BUG();
- }
+ cache_op(paddr, size, dma_wb_range);
}
void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
diff --git a/arch/hexagon/kernel/dma.c b/arch/hexagon/kernel/dma.c
index 882680e81a30..8ca011acc4fc 100644
--- a/arch/hexagon/kernel/dma.c
+++ b/arch/hexagon/kernel/dma.c
@@ -14,22 +14,8 @@ void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
{
void *addr = phys_to_virt(paddr);
- switch (dir) {
- case DMA_TO_DEVICE:
- hexagon_clean_dcache_range((unsigned long) addr,
- (unsigned long) addr + size);
- break;
- case DMA_FROM_DEVICE:
- hexagon_inv_dcache_range((unsigned long) addr,
- (unsigned long) addr + size);
- break;
- case DMA_BIDIRECTIONAL:
- flush_dcache_range((unsigned long) addr,
- (unsigned long) addr + size);
- break;
- default:
- BUG();
- }
+ hexagon_clean_dcache_range((unsigned long) addr,
+ (unsigned long) addr + size);
}
/*
diff --git a/arch/m68k/kernel/dma.c b/arch/m68k/kernel/dma.c
index 16063783aa80..95902d306412 100644
--- a/arch/m68k/kernel/dma.c
+++ b/arch/m68k/kernel/dma.c
@@ -29,17 +29,5 @@ pgprot_t pgprot_dmacoherent(pgprot_t prot)
void arch_sync_dma_for_device(phys_addr_t handle, size_t size,
enum dma_data_direction dir)
{
- switch (dir) {
- case DMA_BIDIRECTIONAL:
- case DMA_TO_DEVICE:
- cache_push(handle, size);
- break;
- case DMA_FROM_DEVICE:
- cache_clear(handle, size);
- break;
- default:
- pr_err_ratelimited("dma_sync_single_for_device: unsupported dir %u\n",
- dir);
- break;
- }
+ cache_push(handle, size);
}
diff --git a/arch/microblaze/kernel/dma.c b/arch/microblaze/kernel/dma.c
index 04d091ade417..68e6c946d273 100644
--- a/arch/microblaze/kernel/dma.c
+++ b/arch/microblaze/kernel/dma.c
@@ -14,14 +14,19 @@
#include <linux/bug.h>
#include <asm/cacheflush.h>
-static void __dma_sync(phys_addr_t paddr, size_t size,
- enum dma_data_direction direction)
+void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
+ enum dma_data_direction dir)
+{
+ flush_dcache_range(paddr, paddr + size);
+}
+
+void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
+ enum dma_data_direction direction)
{
switch (direction) {
case DMA_TO_DEVICE:
- case DMA_BIDIRECTIONAL:
- flush_dcache_range(paddr, paddr + size);
break;
+ case DMA_BIDIRECTIONAL:
case DMA_FROM_DEVICE:
invalidate_dcache_range(paddr, paddr + size);
break;
@@ -29,15 +34,3 @@ static void __dma_sync(phys_addr_t paddr, size_t size,
BUG();
}
}
-
-void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
- enum dma_data_direction dir)
-{
- __dma_sync(paddr, size, dir);
-}
-
-void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
- enum dma_data_direction dir)
-{
- __dma_sync(paddr, size, dir);
-}
diff --git a/arch/nios2/mm/dma-mapping.c b/arch/nios2/mm/dma-mapping.c
index fd887d5f3f9a..35730ad8787d 100644
--- a/arch/nios2/mm/dma-mapping.c
+++ b/arch/nios2/mm/dma-mapping.c
@@ -23,23 +23,12 @@ void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
{
void *vaddr = phys_to_virt(paddr);
- switch (dir) {
- case DMA_FROM_DEVICE:
- invalidate_dcache_range((unsigned long)vaddr,
- (unsigned long)(vaddr + size));
- break;
- case DMA_TO_DEVICE:
- /*
- * We just need to flush the caches here , but Nios2 flush
- * instruction will do both writeback and invalidate.
- */
- case DMA_BIDIRECTIONAL: /* flush and invalidate */
- flush_dcache_range((unsigned long)vaddr,
- (unsigned long)(vaddr + size));
- break;
- default:
- BUG();
- }
+ /*
+ * We just need to flush the caches here, but Nios2 flush
+ * instruction will do both writeback and invalidate.
+ */
+ flush_dcache_range((unsigned long)vaddr,
+ (unsigned long)(vaddr + size));
}
void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
diff --git a/arch/openrisc/kernel/dma.c b/arch/openrisc/kernel/dma.c
index b3edbb33b621..747218e17237 100644
--- a/arch/openrisc/kernel/dma.c
+++ b/arch/openrisc/kernel/dma.c
@@ -101,25 +101,8 @@ void arch_sync_dma_for_device(phys_addr_t addr, size_t size,
unsigned long cl;
struct cpuinfo_or1k *cpuinfo = &cpuinfo_or1k[smp_processor_id()];
- switch (dir) {
- case DMA_TO_DEVICE:
- /* Flush the dcache for the requested range */
- for (cl = addr; cl < addr + size;
- cl += cpuinfo->dcache_block_size)
- mtspr(SPR_DCBFR, cl);
- break;
- case DMA_FROM_DEVICE:
- /* Invalidate the dcache for the requested range */
- for (cl = addr; cl < addr + size;
- cl += cpuinfo->dcache_block_size)
- mtspr(SPR_DCBIR, cl);
- break;
- default:
- /*
- * NOTE: If dir == DMA_BIDIRECTIONAL then there's no need to
- * flush nor invalidate the cache here as the area will need
- * to be manually synced anyway.
- */
- break;
- }
+ /* Flush the dcache for the requested range */
+ for (cl = addr; cl < addr + size;
+ cl += cpuinfo->dcache_block_size)
+ mtspr(SPR_DCBFR, cl);
}
diff --git a/arch/riscv/mm/dma-noncoherent.c b/arch/riscv/mm/dma-noncoherent.c
index cb89d7e0ba88..2e6734c2a20b 100644
--- a/arch/riscv/mm/dma-noncoherent.c
+++ b/arch/riscv/mm/dma-noncoherent.c
@@ -69,30 +69,7 @@ static inline bool arch_sync_dma_cpu_needs_post_dma_flush(void)
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
- switch (dir) {
- case DMA_TO_DEVICE:
- arch_dma_cache_wback(paddr, size);
- break;
-
- case DMA_FROM_DEVICE:
- if (!arch_sync_dma_clean_before_fromdevice()) {
- arch_dma_cache_inv(paddr, size);
- break;
- }
- fallthrough;
-
- case DMA_BIDIRECTIONAL:
- /* Skip the invalidate here if it's done later */
- if (IS_ENABLED(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU) &&
- arch_sync_dma_cpu_needs_post_dma_flush())
- arch_dma_cache_wback(paddr, size);
- else
- arch_dma_cache_wback_inv(paddr, size);
- break;
-
- default:
- break;
- }
+ arch_dma_cache_wback(paddr, size);
}
void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
diff --git a/arch/sh/kernel/dma-coherent.c b/arch/sh/kernel/dma-coherent.c
index 6a44c0e7ba40..1e0491f9b026 100644
--- a/arch/sh/kernel/dma-coherent.c
+++ b/arch/sh/kernel/dma-coherent.c
@@ -17,17 +17,5 @@ void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
{
void *addr = sh_cacheop_vaddr(phys_to_virt(paddr));
- switch (dir) {
- case DMA_FROM_DEVICE: /* invalidate only */
- __flush_invalidate_region(addr, size);
- break;
- case DMA_TO_DEVICE: /* writeback only */
- __flush_wback_region(addr, size);
- break;
- case DMA_BIDIRECTIONAL: /* writeback and invalidate */
- __flush_purge_region(addr, size);
- break;
- default:
- BUG();
- }
+ __flush_wback_region(addr, size);
}
diff --git a/arch/xtensa/kernel/pci-dma.c b/arch/xtensa/kernel/pci-dma.c
index 94955caa4488..3da1ee2b5d84 100644
--- a/arch/xtensa/kernel/pci-dma.c
+++ b/arch/xtensa/kernel/pci-dma.c
@@ -64,20 +64,8 @@ void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
- switch (dir) {
- case DMA_BIDIRECTIONAL:
- case DMA_TO_DEVICE:
- if (XCHAL_DCACHE_IS_WRITEBACK)
- do_cache_op(paddr, size, __flush_dcache_range);
- break;
-
- case DMA_NONE:
- BUG();
- break;
-
- default:
- break;
- }
+ if (XCHAL_DCACHE_IS_WRITEBACK)
+ do_cache_op(paddr, size, __flush_dcache_range);
}
void arch_dma_prep_coherent(struct page *page, size_t size)
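One caveat that applies to both approaches (my reading, untested): once
arch_sync_dma_for_device() only cleans, the invalidate for
DMA_FROM_DEVICE has to happen in arch_sync_dma_for_cpu() after the
transfer, otherwise the CPU keeps reading stale cache lines instead of
the device-written data. Most of the arches above already do this; the
required shape is something like the sketch below, assuming an
arch-provided dma_cache_inv() primitive:

void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
		enum dma_data_direction dir)
{
	if (dir != DMA_TO_DEVICE)
		dma_cache_inv(paddr, size);	/* drop stale lines */
}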
--
Catalin