* Re: dd hangs when reading large partitions
From: Marc Gonzalez @ 2019-02-07 10:44 UTC
To: linux-mm, linux-block
Cc: Jianchao Wang, Christoph Hellwig, Jens Axboe, fsdevel, SCSI, Joao Pinto,
    Jeffrey Hugo, Evan Green, Matthias Kaehlcke, Douglas Anderson, Stephen Boyd,
    Tomas Winkler, Adrian Hunter, Alim Akhtar, Avri Altman, Bart Van Assche,
    Martin Petersen, Bjorn Andersson, Ming Lei, Omar Sandoval, Roman Gushchin,
    Andrew Morton, Michal Hocko

+ linux-mm

Summarizing the issue for linux-mm readers:

If I read data from a storage device larger than my system's RAM, the system freezes
once dd has read more data than available RAM.

# dd if=/dev/sde of=/dev/null bs=1M & while true; do echo m > /proc/sysrq-trigger; echo; echo; sleep 1; done
https://pastebin.ubuntu.com/p/HXzdqDZH4W/

A few seconds before the system hangs, Mem-Info shows:

[ 90.986784] Node 0 active_anon:7060kB inactive_anon:13644kB active_file:0kB inactive_file:3797500kB [...]

=> 3797500kB is basically all of RAM.

I tried to locate where "inactive_file" was being increased from, and saw two signatures:

[ 255.606019] __mod_node_page_state | __pagevec_lru_add_fn | pagevec_lru_move_fn | __lru_cache_add | lru_cache_add | add_to_page_cache_lru | mpage_readpages | blkdev_readpages | read_pages | __do_page_cache_readahead | ondemand_readahead | page_cache_sync_readahead

[ 255.637238] __mod_node_page_state | __pagevec_lru_add_fn | pagevec_lru_move_fn | __lru_cache_add | lru_cache_add | lru_cache_add_active_or_unevictable | __handle_mm_fault | handle_mm_fault | do_page_fault | do_translation_fault | do_mem_abort | el1_da

Are these expected?

NB: the system does not hang if I specify 'iflag=direct' to dd.

According to the RCU watchdog:

[ 108.466240] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
[ 108.466420] rcu: 1-...0: (130 ticks this GP) idle=79e/1/0x4000000000000000 softirq=2393/2523 fqs=2626
[ 108.471436] rcu: (detected by 4, t=5252 jiffies, g=133, q=85)
[ 108.480605] Task dump for CPU 1:
[ 108.486483] kworker/1:1H    R  running task     0   680     2 0x0000002a
[ 108.489977] Workqueue: kblockd blk_mq_run_work_fn
[ 108.496908] Call trace:
[ 108.501513]  __switch_to+0x174/0x1e0
[ 108.503757]  blk_mq_run_work_fn+0x28/0x40
[ 108.507589]  process_one_work+0x208/0x480
[ 108.511486]  worker_thread+0x48/0x460
[ 108.515480]  kthread+0x124/0x130
[ 108.519123]  ret_from_fork+0x10/0x1c

Can anyone shed some light on what's going on?

Regards.
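One way to collect call signatures like the two above is the stock ftrace
event-trigger interface. The sketch below assumes debugfs is mounted and the
pagemap/mm_lru_insertion trace event is present (it fires from
__pagevec_lru_add_fn); it is not necessarily how the traces above were obtained:

  cd /sys/kernel/debug/tracing
  echo stacktrace > events/pagemap/mm_lru_insertion/trigger   # record a kernel stack for every LRU insertion
  echo 1 > events/pagemap/mm_lru_insertion/enable
  cat trace_pipe                                               # stream the stacks while dd runs in another shell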
* Re: dd hangs when reading large partitions
From: Marc Gonzalez @ 2019-02-07 16:56 UTC
To: linux-mm, linux-block
Cc: Jianchao Wang, Christoph Hellwig, Jens Axboe, fsdevel, SCSI, Joao Pinto,
    Jeffrey Hugo, Evan Green, Matthias Kaehlcke, Douglas Anderson, Stephen Boyd,
    Tomas Winkler, Adrian Hunter, Alim Akhtar, Avri Altman, Bart Van Assche,
    Martin Petersen, Bjorn Andersson, Ming Lei, Omar Sandoval, Roman Gushchin,
    Andrew Morton, Michal Hocko

On 07/02/2019 11:44, Marc Gonzalez wrote:

> + linux-mm
>
> Summarizing the issue for linux-mm readers:
>
> If I read data from a storage device larger than my system's RAM, the system freezes
> once dd has read more data than available RAM.
>
> # dd if=/dev/sde of=/dev/null bs=1M & while true; do echo m > /proc/sysrq-trigger; echo; echo; sleep 1; done
> https://pastebin.ubuntu.com/p/HXzdqDZH4W/
>
> A few seconds before the system hangs, Mem-Info shows:
>
> [ 90.986784] Node 0 active_anon:7060kB inactive_anon:13644kB active_file:0kB inactive_file:3797500kB [...]
>
> => 3797500kB is basically all of RAM.
>
> I tried to locate where "inactive_file" was being increased from, and saw two signatures:
>
> [ 255.606019] __mod_node_page_state | __pagevec_lru_add_fn | pagevec_lru_move_fn | __lru_cache_add | lru_cache_add | add_to_page_cache_lru | mpage_readpages | blkdev_readpages | read_pages | __do_page_cache_readahead | ondemand_readahead | page_cache_sync_readahead
>
> [ 255.637238] __mod_node_page_state | __pagevec_lru_add_fn | pagevec_lru_move_fn | __lru_cache_add | lru_cache_add | lru_cache_add_active_or_unevictable | __handle_mm_fault | handle_mm_fault | do_page_fault | do_translation_fault | do_mem_abort | el1_da
>
> Are these expected?
>
> NB: the system does not hang if I specify 'iflag=direct' to dd.
>
> According to the RCU watchdog:
>
> [ 108.466240] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> [ 108.466420] rcu: 1-...0: (130 ticks this GP) idle=79e/1/0x4000000000000000 softirq=2393/2523 fqs=2626
> [ 108.471436] rcu: (detected by 4, t=5252 jiffies, g=133, q=85)
> [ 108.480605] Task dump for CPU 1:
> [ 108.486483] kworker/1:1H    R  running task     0   680     2 0x0000002a
> [ 108.489977] Workqueue: kblockd blk_mq_run_work_fn
> [ 108.496908] Call trace:
> [ 108.501513]  __switch_to+0x174/0x1e0
> [ 108.503757]  blk_mq_run_work_fn+0x28/0x40
> [ 108.507589]  process_one_work+0x208/0x480
> [ 108.511486]  worker_thread+0x48/0x460
> [ 108.515480]  kthread+0x124/0x130
> [ 108.519123]  ret_from_fork+0x10/0x1c
>
> Can anyone shed some light on what's going on?

Saw a slightly different report from another test run:
https://pastebin.ubuntu.com/p/jCywbKgRCq/

[ 340.689764] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
[ 340.689992] rcu: 1-...0: (8548 ticks this GP) idle=c6e/1/0x4000000000000000 softirq=82/82 fqs=6
[ 340.694977] rcu: (detected by 5, t=5430 jiffies, g=-719, q=16)
[ 340.703803] Task dump for CPU 1:
[ 340.709507] dd              R  running task     0   675   673 0x00000002
[ 340.713018] Call trace:
[ 340.720059]  __switch_to+0x174/0x1e0
[ 340.722192]  0xffffffc0f6dc9600

[ 352.689742] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 33s!
[ 352.689910] Showing busy workqueues and worker pools:
[ 352.696743] workqueue mm_percpu_wq: flags=0x8
[ 352.701753]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256
[ 352.706099]     pending: vmstat_update

[ 384.693730] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 65s!
[ 384.693815] Showing busy workqueues and worker pools:
[ 384.700577] workqueue events: flags=0x0
[ 384.705699]   pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256
[ 384.709351]     pending: vmstat_shepherd
[ 384.715587] workqueue mm_percpu_wq: flags=0x8
[ 384.719495]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256
[ 384.723754]     pending: vmstat_update
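When a workqueue pool reports itself stuck like this, a couple of extra sysrq
dumps can show what the pool's worker and the pinned CPU are actually doing; a
sketch, assuming sysrq is enabled (it already is, given the 'm' dumps above):

  echo w > /proc/sysrq-trigger   # dump all uninterruptible (D-state) tasks
  echo l > /proc/sysrq-trigger   # backtrace of all active CPUs
  echo t > /proc/sysrq-trigger   # full task list, very verbose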
* Re: dd hangs when reading large partitions
From: Marc Gonzalez @ 2019-02-08 15:33 UTC
To: linux-mm, linux-block
Cc: Jianchao Wang, Christoph Hellwig, Jens Axboe, fsdevel, SCSI, Joao Pinto,
    Jeffrey Hugo, Evan Green, Matthias Kaehlcke, Douglas Anderson, Stephen Boyd,
    Tomas Winkler, Adrian Hunter, Alim Akhtar, Avri Altman, Bart Van Assche,
    Martin Petersen, Bjorn Andersson, Ming Lei, Omar Sandoval, Roman Gushchin,
    Andrew Morton, Michal Hocko

On 07/02/2019 17:56, Marc Gonzalez wrote:

> Saw a slightly different report from another test run:
> https://pastebin.ubuntu.com/p/jCywbKgRCq/
>
> [ 340.689764] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> [ 340.689992] rcu: 1-...0: (8548 ticks this GP) idle=c6e/1/0x4000000000000000 softirq=82/82 fqs=6
> [ 340.694977] rcu: (detected by 5, t=5430 jiffies, g=-719, q=16)
> [ 340.703803] Task dump for CPU 1:
> [ 340.709507] dd              R  running task     0   675   673 0x00000002
> [ 340.713018] Call trace:
> [ 340.720059]  __switch_to+0x174/0x1e0
> [ 340.722192]  0xffffffc0f6dc9600
>
> [ 352.689742] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 33s!
> [ 352.689910] Showing busy workqueues and worker pools:
> [ 352.696743] workqueue mm_percpu_wq: flags=0x8
> [ 352.701753]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256
> [ 352.706099]     pending: vmstat_update
>
> [ 384.693730] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 65s!
> [ 384.693815] Showing busy workqueues and worker pools:
> [ 384.700577] workqueue events: flags=0x0
> [ 384.705699]   pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256
> [ 384.709351]     pending: vmstat_shepherd
> [ 384.715587] workqueue mm_percpu_wq: flags=0x8
> [ 384.719495]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256
> [ 384.723754]     pending: vmstat_update

Running 'dd if=/dev/sda of=/dev/null bs=40M status=progress' I got a slightly
different splat:

[ 171.513944] INFO: task dd:674 blocked for more than 23 seconds.
[ 171.514131]       Tainted: G S     5.0.0-rc5-next-20190206 #23
[ 171.518784] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 171.525728] dd              D    0   674   672 0x00000000
[ 171.533525] Call trace:
[ 171.538926]  __switch_to+0x174/0x1e0
[ 171.541237]  __schedule+0x1e4/0x630
[ 171.545041]  schedule+0x34/0x90
[ 171.548261]  io_schedule+0x20/0x40
[ 171.551401]  blk_mq_get_tag+0x178/0x320
[ 171.554852]  blk_mq_get_request+0x13c/0x3e0
[ 171.558587]  blk_mq_make_request+0xcc/0x640
[ 171.562763]  generic_make_request+0x1d4/0x390
[ 171.566924]  submit_bio+0x5c/0x1c0
[ 171.571447]  mpage_readpages+0x178/0x1d0
[ 171.574730]  blkdev_readpages+0x3c/0x50
[ 171.578831]  read_pages+0x70/0x180
[ 171.582364]  __do_page_cache_readahead+0x1cc/0x200
[ 171.585843]  ondemand_readahead+0x148/0x310
[ 171.590613]  page_cache_async_readahead+0xc0/0x100
[ 171.594719]  generic_file_read_iter+0x54c/0x860
[ 171.599565]  blkdev_read_iter+0x50/0x80
[ 171.603998]  __vfs_read+0x134/0x190
[ 171.607800]  vfs_read+0x94/0x130
[ 171.611273]  ksys_read+0x6c/0xe0
[ 171.614745]  __arm64_sys_read+0x24/0x30
[ 171.617974]  el0_svc_handler+0xb8/0x140
[ 171.621509]  el0_svc+0x8/0xc

For the record, I'll restate the problem: dd hangs when reading a partition
larger than RAM, except when using iflag=direct or iflag=nocache.

# dd if=/dev/sde of=/dev/null bs=64M iflag=direct
64+0 records in
64+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 51.1532 s, 84.0 MB/s

# dd if=/dev/sde of=/dev/null bs=64M iflag=nocache
64+0 records in
64+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 60.6478 s, 70.8 MB/s

# dd if=/dev/sde of=/dev/null bs=64M count=56
56+0 records in
56+0 records out
3758096384 bytes (3.8 GB, 3.5 GiB) copied, 50.5897 s, 74.3 MB/s

# dd if=/dev/sde of=/dev/null bs=64M
/*** CONSOLE LOCKS UP ***/

I've been looking at the differences between iflag=direct and no-flag.
Using the following script to enable relevant(?) logs:

mount -t debugfs nodev /sys/kernel/debug/
cd /sys/kernel/debug/tracing/events
echo 1 > filemap/enable
echo 1 > pagemap/enable
echo 1 > vmscan/enable
echo 1 > kmem/mm_page_free/enable
echo 1 > kmem/mm_page_free_batched/enable
echo 1 > kmem/mm_page_alloc/enable
echo 1 > kmem/mm_page_alloc_zone_locked/enable
echo 1 > kmem/mm_page_pcpu_drain/enable
echo 1 > kmem/mm_page_alloc_extfrag/enable
echo 1 > kmem/kmalloc_node/enable
echo 1 > kmem/kmem_cache_alloc_node/enable
echo 1 > kmem/kmem_cache_alloc/enable
echo 1 > kmem/kmem_cache_free/enable

# dd if=/dev/sde of=/dev/null bs=64M count=1 iflag=direct
https://pastebin.ubuntu.com/p/YWp4pydM6V/ (114942 lines)

# dd if=/dev/sde of=/dev/null bs=64M count=1
https://pastebin.ubuntu.com/p/xpzgN5H3Hp/ (247439 lines)

Does anyone see what's going sideways in the no-flag case?

Regards.
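One rough way to see which events dominate each run is to histogram the event
names in the two dumps. A sketch follows; the file names are hypothetical local
copies of the two pastebin logs, and the awk field split may need adjusting to
the exact trace_pipe layout:

  for f in direct.txt noflag.txt; do          # hypothetical copies of the two trace logs above
      echo "== $f =="
      awk -F': ' 'NF > 2 { print $2 }' "$f" | sort | uniq -c | sort -rn
  done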
* Re: dd hangs when reading large partitions
From: Bart Van Assche @ 2019-02-08 15:49 UTC
To: Marc Gonzalez, linux-mm, linux-block
Cc: Jianchao Wang, Christoph Hellwig, Jens Axboe, fsdevel, SCSI, Joao Pinto,
    Jeffrey Hugo, Evan Green, Matthias Kaehlcke, Douglas Anderson, Stephen Boyd,
    Tomas Winkler, Adrian Hunter, Alim Akhtar, Avri Altman, Bart Van Assche,
    Martin Petersen, Bjorn Andersson, Ming Lei, Omar Sandoval, Roman Gushchin,
    Andrew Morton, Michal Hocko

On Fri, 2019-02-08 at 16:33 +0100, Marc Gonzalez wrote:

> Does anyone see what's going sideways in the no-flag case?

Hi Marc,

Does this problem only occur with block devices backed by the UFS driver
or does this problem also occur with other block drivers?

Thanks,

Bart.
* Re: dd hangs when reading large partitions
From: Marc Gonzalez @ 2019-02-09 11:57 UTC
To: Bart Van Assche, linux-mm, linux-block
Cc: Jianchao Wang, Christoph Hellwig, Jens Axboe, fsdevel, SCSI, Joao Pinto,
    Jeffrey Hugo, Evan Green, Matthias Kaehlcke, Douglas Anderson, Stephen Boyd,
    Tomas Winkler, Adrian Hunter, Alim Akhtar, Avri Altman, Bart Van Assche,
    Martin Petersen, Bjorn Andersson, Ming Lei, Omar Sandoval, Roman Gushchin,
    Andrew Morton, Michal Hocko

On 08/02/2019 16:49, Bart Van Assche wrote:

> On Fri, 2019-02-08 at 16:33 +0100, Marc Gonzalez wrote:
>
>> Does anyone see what's going sideways in the no-flag case?
>
> Does this problem only occur with block devices backed by the UFS driver
> or does this problem also occur with other block drivers?

So far, I've only been able to test with UFS storage. The board has no
PATA/SATA. SDHC is not supported yet. With Jeffrey's help, I was able to get
a semi-functional USB3 stack running. I'll test USB3 mass storage on Monday.

FWIW, I removed most (all?) locks from the UFSHC driver, by dropping scaling
and gating support. I could also drop runtime suspend, if someone thinks that
could help, but I'm thinking the problem might be in the mm or block layers?
(It doesn't look like a locking problem, but more a memory exhaustion problem.)

Regards.
* Re: dd hangs when reading large partitions
From: Marc Gonzalez @ 2019-02-11 16:36 UTC
To: Bart Van Assche, linux-mm, linux-block
Cc: Jianchao Wang, Christoph Hellwig, Jens Axboe, fsdevel, SCSI, Jeffrey Hugo,
    Evan Green, Matthias Kaehlcke, Douglas Anderson, Stephen Boyd, Tomas Winkler,
    Adrian Hunter, Bart Van Assche, Martin Petersen, Bjorn Andersson, Ming Lei,
    Omar Sandoval, Roman Gushchin, Andrew Morton, Michal Hocko, James Bottomley

On 08/02/2019 16:49, Bart Van Assche wrote:

> Does this problem only occur with block devices backed by the UFS driver
> or does this problem also occur with other block drivers?

Yes, same issue with a USB3 mass storage device:

usb 2-1: new SuperSpeed Gen 1 USB device number 2 using xhci-hcd
usb 2-1: New USB device found, idVendor=05dc, idProduct=a838, bcdDevice=11.00
usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 2-1: Product: USB Flash Drive
usb 2-1: Manufacturer: Lexar
usb 2-1: SerialNumber: AAYW2W7I13BAR0JC
usb-storage 2-1:1.0: USB Mass Storage device detected
scsi host0: usb-storage 2-1:1.0
scsi 0:0:0:0: Direct-Access     Lexar    USB Flash Drive  1100 PQ: 0 ANSI: 6
sd 0:0:0:0: [sda] 62517248 512-byte logical blocks: (32.0 GB/29.8 GiB)
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: 43 00 00 00
sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
 sda: sda1
sd 0:0:0:0: [sda] Attached SCSI removable disk

# dd if=/dev/sda of=/dev/null bs=1M status=progress
3879731200 bytes (3.9 GB, 3.6 GiB) copied, 56.0097 s, 69.3 MB/s

This definitively rules out drivers/scsi/ufs (Dropping UFS people)

So the problem could be in SCSI glue, or block, or mm?
How can I pinpoint the bug?

Problem statement and logs:
https://lore.kernel.org/linux-block/66419195-594c-aa83-c19d-f091ad3b296d@free.fr/

Regards.
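One way to take both the storage driver and the SCSI stack out of the picture
is to repeat the test against a null_blk device, which sits directly on top of
blk-mq; a sketch, assuming CONFIG_BLK_DEV_NULL_BLK is built as a module:

  modprobe null_blk nr_devices=1 gb=8                    # fake 8 GiB block device, no hardware behind it
  dd if=/dev/nullb0 of=/dev/null bs=1M status=progress   # buffered read, same page-cache fill pattern

If the hang still reproduces there, the driver side is cleared; if it does not,
suspicion moves back toward the platform.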
* Re: dd hangs when reading large partitions
From: Marc Gonzalez @ 2019-02-11 17:27 UTC
To: Bart Van Assche, linux-mm, linux-block
Cc: Jianchao Wang, Christoph Hellwig, Jens Axboe, fsdevel, SCSI, Jeffrey Hugo,
    Evan Green, Matthias Kaehlcke, Douglas Anderson, Stephen Boyd, Tomas Winkler,
    Adrian Hunter, Bart Van Assche, Martin Petersen, Bjorn Andersson, Ming Lei,
    Omar Sandoval, Roman Gushchin, Andrew Morton, Michal Hocko, James Bottomley

On 11/02/2019 17:36, Marc Gonzalez wrote:

> On 08/02/2019 16:49, Bart Van Assche wrote:
>
>> Does this problem only occur with block devices backed by the UFS driver
>> or does this problem also occur with other block drivers?
>
> Yes, same issue with a USB3 mass storage device:
>
> usb 2-1: new SuperSpeed Gen 1 USB device number 2 using xhci-hcd
> usb 2-1: New USB device found, idVendor=05dc, idProduct=a838, bcdDevice=11.00
> usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
> usb 2-1: Product: USB Flash Drive
> usb 2-1: Manufacturer: Lexar
> usb 2-1: SerialNumber: AAYW2W7I13BAR0JC
> usb-storage 2-1:1.0: USB Mass Storage device detected
> scsi host0: usb-storage 2-1:1.0
> scsi 0:0:0:0: Direct-Access     Lexar    USB Flash Drive  1100 PQ: 0 ANSI: 6
> sd 0:0:0:0: [sda] 62517248 512-byte logical blocks: (32.0 GB/29.8 GiB)
> sd 0:0:0:0: [sda] Write Protect is off
> sd 0:0:0:0: [sda] Mode Sense: 43 00 00 00
> sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
> sda: sda1
> sd 0:0:0:0: [sda] Attached SCSI removable disk
>
> # dd if=/dev/sda of=/dev/null bs=1M status=progress
> 3879731200 bytes (3.9 GB, 3.6 GiB) copied, 56.0097 s, 69.3 MB/s
>
> So the problem could be in SCSI glue, or block, or mm?

Unlikely. Someone else would have been affected...

A colleague pointed out that some memory areas are reserved downstream.
Perhaps the FW goes haywire once the kernel touches reserved memory?

I'm off to test that hypothesis.

Regards.
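A few places to compare the kernel's idea of reserved memory against what the
firmware expects; a sketch, noting that the memblock debugfs files are only
present when the kernel keeps memblock data around:

  ls /sys/firmware/devicetree/base/reserved-memory/   # reserved-memory nodes in the DT that was booted
  cat /sys/kernel/debug/memblock/reserved             # ranges the kernel actually reserved
  grep -i reserved /proc/iomem                        # reserved ranges in the resource tree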
* [SOLVED] dd hangs when reading large partitions
From: Marc Gonzalez @ 2019-02-12 15:26 UTC
To: Bart Van Assche, linux-mm, linux-block
Cc: Jianchao Wang, Christoph Hellwig, Jens Axboe, fsdevel, SCSI, Jeffrey Hugo,
    Evan Green, Matthias Kaehlcke, Douglas Anderson, Stephen Boyd, Tomas Winkler,
    Adrian Hunter, Bart Van Assche, Martin Petersen, Bjorn Andersson, Ming Lei,
    Omar Sandoval, Roman Gushchin, Andrew Morton, Michal Hocko, James Bottomley

On 11/02/2019 18:27, Marc Gonzalez wrote:

> A colleague pointed out that some memory areas are reserved downstream.
> Perhaps the FW goes haywire once the kernel touches reserved memory?

Bingo! FW quirk.

https://patchwork.kernel.org/patch/10808173/

Once the reserved memory range is extended, I am finally able to read large
partitions:

# dd if=/dev/sda of=/dev/null bs=1M
55256+0 records in
55256+0 records out
57940115456 bytes (58 GB, 54 GiB) copied, 786.165 s, 73.7 MB/s

Thanks to everyone who provided suggestions and guidance.

Regards.