dd interview questions
Top frequently asked dd interview questions
This is a continuation of my question about reading the superblock.
Let's say I want to target the HFS+ file system in Mac OS X. How could I read sector 2 of the boot disk? As far as I know, Unix only provides system calls to read from files, which are never stored at that location.
Does this require either 1) the program to run in kernel mode, or 2) the program to be written in assembly? I would prefer to avoid either of these restrictions, particularly the latter.
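Neither restriction should be necessary, because Unix exposes whole disks as device files that ordinary reads work on. A minimal sketch, assuming the boot disk shows up as /dev/disk0 (its raw node is /dev/rdisk0 on Mac OS X) and 512-byte sectors:
# Read one 512-byte sector, skipping the first two, from the raw boot disk.
# The device name and sector size are assumptions; adjust as needed.
sudo dd if=/dev/rdisk0 of=sector2.bin bs=512 skip=2 count=1
Root privileges are needed only because the device node is not world-readable.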
Source: (StackOverflow)
How do you calculate the optimal blocksize when running dd? I've researched it a bit and I've not found anything suggesting how this would be accomplished.
I am under the impression that a larger blocksize would result in a quicker dd... is this true?
I'm about to dd two identical 500 GB Hitachi HDDs that run at 7200 RPM on a box running an Intel Core i3 with 4 GB of DDR3 1333 MHz RAM, so I'm trying to figure out what blocksize to use. (I'm going to be booting Ubuntu 10.10 x86 from a flash drive, and running it from that.)
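Since the optimum depends on the particular drives and controller, one practical approach is simply to time the same amount of data at several block sizes and compare. A sketch (the device name is a placeholder; iflag=direct, a GNU dd flag, keeps the page cache from skewing the numbers):
# Time reading the same 1 GiB span at several block sizes.
for bs in 4096 65536 1048576 4194304; do
    count=$((1073741824 / bs))
    echo "bs=$bs:"
    dd if=/dev/sda of=/dev/null bs=$bs count=$count iflag=direct 2>&1 | tail -n 1
done
Each run covers the same 1 GiB, so the MB/s figures dd reports are directly comparable.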
Source: (StackOverflow)
I have a binary file and I want to replace the value A2 at address DEADBEEF with some other value, say A1.
How can I do this with dd? If there are other tools that can do this, please suggest them. But I plan to do this on an iPhone, so I can only work with the most basic Unix tools.
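A sketch of the usual dd idiom: write the single byte in place, with conv=notrunc so dd does not truncate the rest of the file. The file name is a placeholder, and the shell arithmetic converts the hex address into the decimal offset dd expects:
printf '\xA1' | dd of=file.bin bs=1 seek=$((0xDEADBEEF)) count=1 conv=notrunc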
Source: (StackOverflow)
We have a smart media card with a Linux install on it that we need to duplicate. We created an image with dd and then used dd to write the image back to a couple of new smart media cards. We have compared the MD5 checksums of the original and the new copies, and they are different.
Here is what we used:
dd if=/dev/sdb of=myimage.img
dd if=myimage.img of=/dev/sdb
dd if=/dev/sdb of=newimage.img
Does anyone have any ideas why these come out different?
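One guess worth testing (an assumption, not something stated above): if the new card is larger than the image, reading the whole device back appends whatever lies past the end of the written data, which changes the checksum. Comparing only an image-sized prefix sidesteps that:
# Checksum only as many bytes of the card as the image contains.
SIZE=$(stat -c %s myimage.img)
head -c "$SIZE" /dev/sdb | md5sum
md5sum myimage.img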
Source: (StackOverflow)
I wanted to measure my disk throughput using the following command:
dd if=/dev/zero of=/mydir/junkfile bs=4k count=125000
If the junkfile exists, my disk throughput is six times lower than if the junkfile does not exist. I have repeated this many times and the results hold. Does anybody know why?
Thanks,
Amir.
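For what it's worth, one way to make repeated runs comparable is to start each one from the same state, removing the file and dropping the caches in between. This is a measurement sketch, not an explanation of the gap (the drop_caches write needs root):
rm -f /mydir/junkfile
sync && echo 3 > /proc/sys/vm/drop_caches
dd if=/dev/zero of=/mydir/junkfile bs=4k count=125000 conv=fsync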
Source: (StackOverflow)
If you look up how to clone an entire disk to another one on the web, you will find something like this:
dd if=/dev/sda of=/dev/sdb conv=notrunc,noerror
While I understand the noerror, I am having a hard time understanding why people think that notrunc is required for "data integrity" (as ArchLinux's Wiki states, for instance).
Indeed, I do agree if you are copying a partition to another partition on another disk, and you do not want to overwrite the entire disk, just one partition. In this case notrunc, according to dd's manual page, is what you want.
But if you're cloning an entire disk, what does notrunc change for you? Just a time optimization?
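A quick experiment with regular files shows what notrunc actually controls (the file names here are made up); block devices like /dev/sdb cannot be truncated, which is why the flag makes no practical difference when cloning whole disks:
truncate -s 10M target.img
dd if=small.bin of=target.img conv=notrunc    # target.img stays 10 MiB
dd if=small.bin of=target.img                 # target.img is truncated to small.bin's size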
Source: (StackOverflow)
I have been trying all day to get this to work.
Does anyone know how to get grep, or something of the like, to retrieve offsets of hex strings in a file?
I have a bunch of hexdumps that I need to check for strings and then run again and check if the value has changed.
I have tried hexdump and dd, but the problem is that because it's a stream, I lose the offsets into the files.
Someone must have had this problem and a workaround. What can I do?
To clarify, I have a series of dumped memory regions from GDB.
I am trying to narrow down a number by searching out all the places the number is stored, then doing it again and checking if the new value is stored at the same memory location.
I cannot get grep to do anything, because I am looking for hex values, so all the times I have tried (like a bazillion, roughly) it will not give me the correct output.
The hex dumps are just complete binary files; the patterns are within float values, at largest, so 8 bytes or so.
The patterns are not wrapping the lines, as far as I am aware. I know what the value changes to, and I can do the same process and compare the lists to see which match.
The hex dumps normally end up at around 100 MB in total.
Perl COULD be an option, but at this point I would assume my lack of knowledge of bash and its tools is the main culprit.
It's a little hard to explain the output I am getting, since I really am not getting any output.
I am anticipating (and expecting) something along the lines of:
<offset>:<searched value>
Which is pretty much the standard output I would normally get with grep -URbFo <searchterm> . > <output>
The problem is, when I try to search for hex values, it just does not search for the hex values. If I search for 00 I should get like a million hits, because that's always the blank space, but instead it searches for 00 as text, so in hex, 3030.
Any ideas?
I CAN force it through hexdump or something of the like, but because it's a stream it will not give me the offsets and the filename it found a match in.
Using grep's -b option doesn't seem to work either; I did try all the flags that seemed useful to my situation, and nothing worked.
Using xxd -u /usr/bin/xxd as an example, I get output that would be useful, but I cannot use it for searching:
0004760: 73CC 6446 161E 266A 3140 5E79 4D37 FDC6 s.dF..&j1@^yM7..
0004770: BF04 0E34 A44E 5BE7 229F 9EEF 5F4F DFFA ...4.N[."..._O..
0004780: FADE 0C01 0000 000C 0000 0000 0000 0000 ................
Nice output, just what I want to see, but it just doesn't work for me in this situation.
These are some of the things I've tried since posting this:
xxd -u /usr/bin/xxd | grep 'DF'
00017b0: 4010 8D05 0DFF FF0A 0300 53E3 0610 A003 @.........S.....
root# grep -ibH "df" /usr/bin/xxd
Binary file /usr/bin/xxd matches
xxd -u /usr/bin/xxd | grep -H 'DF'
(standard input):00017b0: 4010 8D05 0DFF FF0A 0300 53E3 0610 A003 @.........S.....
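For the record, GNU grep can search for raw bytes directly when it is built with PCRE support (the -P flag); availability varies by platform, so treat this as a sketch. The byte pattern and file name are placeholders:
LC_ALL=C grep -obaP '\xDE\xAD\xBE\xEF' dump.bin
Here -P enables the \xNN byte escapes, -a forces grep to treat the binary as text, -o prints only the match, and -b prefixes each match with its byte offset, which gives exactly the <offset>:<match> shape described above.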
Source: (StackOverflow)
In a shell script I need to redirect the output from a dd command to /dev/null. How do I do that?
( dd if=/dev/zero of=1.txt count=1 ) 2>&1 /dev/null
didn't work!
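dd prints its statistics on stderr, so that is the stream to silence; a minimal corrected version of the line above:
dd if=/dev/zero of=1.txt count=1 2> /dev/null
or, to discard both streams:
dd if=/dev/zero of=1.txt count=1 > /dev/null 2>&1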
Source: (StackOverflow)
For a load test of my application (under Linux), I'm looking for a tool that outputs data on stdout at a specific rate (like 100 bytes/s), so that I can pipe the output to netcat, which sends it to my application. Some option for dd would be ideal, but I haven't found anything so far. It doesn't really matter what kind of data is printed (NUL bytes are OK). Any hints?
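One possible approach (assuming the pv utility is installed; this is pv's rate limiter, not a dd option): -L caps the throughput in bytes per second and -q suppresses the progress meter. The host and port are placeholders:
dd if=/dev/zero 2>/dev/null | pv -qL 100 | nc localhost 1234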
Source: (StackOverflow)
In Linux we can do:
# dd if=/dev/sdb of=bckup.img
but if the disk is 32 GB with only 4 GB used, the 32 GB image file is a waste of space. Is there any way, or any tool, to create images with valid data only?
--EDIT--
What concerns me most is the time taken to write the 32 GB image file to the SD card, rather than the size itself.
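A common workaround (a sketch; the device node and mount point are placeholders): zero out the free space so unused blocks compress to almost nothing, then pipe the image through gzip in both directions:
dd if=/dev/zero of=/mnt/card/zerofill bs=1M; rm /mnt/card/zerofill
dd if=/dev/sdb bs=1M | gzip > bckup.img.gz
gunzip -c bckup.img.gz | dd of=/dev/sdb bs=1M
This shrinks the stored image, but the restore still writes all 32 GB; skipping the unused blocks on write requires a filesystem-aware tool such as partclone.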
Source: (StackOverflow)
I am writing a program using the Sleuth Kit library that is designed to print out the File Allocation Table of a FAT32 filesystem. Everything in my program works fine until I call the tsk_fs_open_img() function. At that point the program returns an error stating "Invalid magic value (Not a FATFS file system (magic))." The FS is indeed a FAT32 FS, and I have verified the magic value (AA55 @ offset 1FE) using a hex editor. Also, mmls and fls, which are command-line tools included in the Sleuth Kit, work on the drive image I am using, show that it is indeed a FAT32 FS, and report an offset of 63 for the FS.
If anyone could help me figure out why this function is not working it would be greatly appreciated. Thanks in advance.
Here is the link to the API for the function: TSK_FS_OPEN_IMG()
Here is my code:
#include <tsk3/libtsk.h>
#include <iostream>
#include <string.h>

using namespace std;

int main(int argc, const char * argv[])
{
    TSK_IMG_TYPE_ENUM imgtype = TSK_IMG_TYPE_DETECT;
    TSK_IMG_INFO *img;
    TSK_FS_TYPE_ENUM fstype = TSK_FS_TYPE_FAT32;
    TSK_FS_INFO *fs;
    TSK_DADDR_T imgOffset = 0x00000000;
    TSK_OFF_T fsStartBlock = 0x00000063;
    TSK_VS_INFO *vs;
    TSK_VS_TYPE_ENUM vstype = TSK_VS_TYPE_DETECT;
    const TSK_VS_PART_INFO *part = NULL;
    TSK_PNUM_T partLocation;            // to be set from part->addr once a partition is opened
    TSK_TCHAR *driveName;
    TSK_DADDR_T startAddress = 0x00000000;
    TSK_DADDR_T numBlocksToRead = 0x00000001;
    TSK_FS_BLKCAT_FLAG_ENUM flags = TSK_FS_BLKCAT_ASCII;
    int numOfDrives = 1;
    uint sectorSize = 0;
    uint8_t blockBytes = 0;

    if (argc < 2) {                     // argv[1] must hold the drive name
        printf("You must enter a drive name.\n");
        exit(EXIT_FAILURE);
    }
    driveName = (TSK_TCHAR *) argv[1];

    cout << "\nOpening Drive\n\n";
    if ((img = tsk_img_open(numOfDrives, &driveName, imgtype, sectorSize)) == NULL) {
        tsk_error_print(stderr);
        exit(EXIT_FAILURE);
    }
    cout << "Drive opened successfully.\n\n";

    cout << "Opening File System\n\n";
    if ((fs = tsk_fs_open_img(img, fsStartBlock, fstype)) == NULL) {
        tsk_error_print(stderr);
        if (tsk_errno == TSK_ERR_FS_UNSUPTYPE)
            tsk_fs_type_print(stderr);
        img->close(img);
        exit(EXIT_FAILURE);
    }
    cout << "File system opened successfully.\n\n";

    blockBytes = tsk_fs_blkcat(fs, flags, startAddress, numBlocksToRead);

    fs->close(fs);
    img->close(img);
    return 0;
}
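One thing worth double-checking against the linked API docs (an observation, not part of the original post): the Sleuth Kit command-line tools take the file-system offset in sectors, whereas tsk_fs_open_img() takes it in bytes, and 0x63 is 99, not 63. For a 512-byte-sector image, sector 63 corresponds to byte offset 63 * 512 = 32256. The command-line equivalent for comparison (the image name is a placeholder):
fls -o 63 -f fat32 drive.img    # fls's -o is in sectors, not bytes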
Source: (StackOverflow)
I want to create a file in Linux of size 128k whose data is 128k of ones.
What is the fastest way to do that?
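There is no /dev/one, but piping /dev/zero through tr is a quick sketch, assuming "ones" means 0xFF bytes (swap '\377' for '\061' if the ASCII character '1' is wanted instead):
dd if=/dev/zero bs=1k count=128 | tr '\0' '\377' > ones.bin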
Source: (StackOverflow)
I have a server with a RAID50 configuration of 24 drives (two groups of 12), and if I run:
dd if=/dev/zero of=ddfile2 bs=1M count=1953 oflag=direct
I get:
2047868928 bytes (2.0 GB) copied, 0.805075 s, 2.5 GB/s
But if I run:
dd if=/dev/zero of=ddfile2 bs=1M count=1953
I get:
2047868928 bytes (2.0 GB) copied, 2.53489 s, 808 MB/s
I understand that O_DIRECT causes the page cache to be bypassed. But as I understand it, bypassing the page cache basically means avoiding a memcpy. Testing on my desktop with the bandwidth tool, I have a worst-case sequential memory write bandwidth of 14 GB/s, and I imagine on the newer, much more expensive server the bandwidth must be even better. So why would an extra memcpy cause a >2x slowdown? Is there really a lot more involved when using the page cache? Is this atypical?
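One caveat about the comparison itself (a suggestion, not from the original): without oflag=direct, dd can return while much of the data is still in the page cache, so the two timings do not necessarily measure the same work. Adding conv=fsync makes the cached run wait until the data has actually reached the disk:
dd if=/dev/zero of=ddfile2 bs=1M count=1953 conv=fsync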
Source: (StackOverflow)
Through Python's subprocess module, I'm trying to capture the output of the dd command.
Here's the snippet of code:
r = subprocess.check_output(['dd', 'if=/Users/jason/Desktop/test.cpp', 'of=/Users/jason/Desktop/test.out'])
However, when I do something like
print r
I get a blank line.
Is there a way to capture the output of the dd command into some sort of data structure so that I can access it later?
What I essentially want is to have the output below be stored into a list so that I can later do operations on say the number of bytes.
1+0 records in
1+0 records out
4096 bytes transferred in 0.000409 secs (10011579 bytes/sec)
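A detail that explains the blank line: dd writes those statistics to stderr, not stdout, so capturing stdout alone yields nothing. This is easy to confirm in the shell (using the same paths as above):
dd if=/Users/jason/Desktop/test.cpp of=/Users/jason/Desktop/test.out 2> stats.txt
cat stats.txt    # the records in/out summary lands here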
Source: (StackOverflow)
How could I get device size in bytes?
In Mac OS X 10.6 I am using this:
$ diskutil information /dev/disk0s2
Device Identifier: disk0s2
Device Node: /dev/disk0s2
Part Of Whole: disk0
Device / Media Name: macOSX106
Volume Name: macOSX106
Escaped with Unicode: macOSX106
Mounted: Yes
Mount Point: /
Escaped with Unicode: /
File System: Journaled HFS+
Type: hfs
Name: Mac OS Extended (Journaled)
Journal: Journal size 8192 KB at offset 0x12d000
Owners: Enabled
Partition Type: Apple_HFS
Bootable: Is bootable
Media Type: Generic
Protocol: SATA
SMART Status: Verified
Volume UUID: E2D5E93F-2CCC-3506-8075-79FD232DC63C
Total Size: 40.0 GB (40013180928 Bytes) (exactly 78150744 512-Byte-Blocks)
Volume Free Space: 4.4 GB (4424929280 Bytes) (exactly 8642440 512-Byte-Blocks)
Read-Only Media: No
Read-Only Volume: No
Ejectable: No
Whole: No
Internal: Yes
and it works fine. But in Mac OS X 10.4 the output will be
$ diskutil info disk0s2
Device Node: /dev/disk1s2
Device Identifier: disk1s2
Mount Point:
Volume Name:
Partition Type: Apple_HFS
Bootable: Not bootable
Media Type: Generic
Protocol: SATA
SMART Status: Not Supported
Total Size: 500.0 MB
Free Space: 0.0 B
Read Only: No
Ejectable: Yes
and there is nothing like (40013180928 Bytes) (exactly 78150744 512-Byte-Blocks)
My bash script parses the diskutil output, extracts the Total Size in bytes, and grabs the last 10 MB of the disk with the dd command, so in 10.4 it doesn't work...
How could I get the size in bytes another way?
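One possible fallback (a sketch with caveats): the IOKit registry reports each IOMedia object's size in bytes even on older releases, though the awk below assumes the "Size" property is printed after "BSD Name" within an entry, which may vary between OS X versions:
ioreg -r -c IOMedia | awk '/"BSD Name" = "disk0s2"/ {found=1} found && /"Size"/ {print $3; exit}'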
Source: (StackOverflow)