
fsck interview questions

Top fsck frequently asked interview questions

git commit broken time zone

> git fsck
error in commit %hash%: invalid author/committer line - bad time zone

> git show %hash%
Date: Mon Mar 18 23:57:14 2201 -5274361

How can this be fixed? With a git rebase on the master branch to delete or update the commit info, by doing some magic in the project's .git directory, or some other way?
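
A hedged sketch of one way to repair this, assuming the bad commit's hash is known and rewriting history is acceptable (the hash and the replacement date below are placeholders, not values from the question):

git filter-branch --env-filter '
if [ "$GIT_COMMIT" = "<bad-commit-hash>" ]; then
    # re-export both dates with a valid time zone offset
    export GIT_AUTHOR_DATE="2013-03-18 23:57:14 +0100"
    export GIT_COMMITTER_DATE="2013-03-18 23:57:14 +0100"
fi
' -- --all

Because this rewrites the commit and everything after it, anyone who has already cloned the repository would need to re-clone or reset.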


Source: (StackOverflow)

Hadoop fsck shows missing replicas

I am running a Hadoop 2.2.0 cluster with two datanodes and one namenode. When I check the system using the hadoop fsck command on the namenode or on either of the datanodes, I get the following:

Target Replicas is 3 but found 2 replica(s). 

I tried changing the configuration in hdfs-site.xml (dfs.replication to 2) and restarted the cluster services. On running hadoop fsck / it still shows the same status:

Target Replicas is 3 but found 2 replica(s).

Please clarify, is this a caching issue or a bug?
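
A hedged note (not from the question): dfs.replication only sets the default for files written after the change; existing files keep the replication factor they were created with, and that stored value is what fsck compares against. A sketch of lowering it for existing data, assuming / is the path you want to adjust:

hadoop fs -setrep -R -w 2 /

-R recurses into directories and -w waits until the change has been applied on the datanodes.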


Source: (StackOverflow)


Target Replicas is 10 but found 3 replica

How can I fix this?

/tmp/hadoop-yarn/staging/ubuntu/.staging/job_1450038005671_0025/job.jar: Under replicated BP-938294433-10.0.1.190-1450037861153:blk_1073744219_3398. Target Replicas is 10 but found 3 replica(s).

I get this when I run hadoop fsck / on my master node. I assume I should change an .xml file in conf or something similar; I just don't know which file to change.

Note that dfs.replication in hdfs-site.xml is already set to 3. I don't have dfs.replication.max in my hdfs-site.xml file.
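
One hedged possibility (not from the question): job submission files such as job.jar are written to the staging directory with their own replication setting, which defaults to 10 regardless of dfs.replication; in Hadoop 2.x the property is mapreduce.client.submit.file.replication in mapred-site.xml. A sketch of lowering it so it does not exceed the number of datanodes:

<property>
  <name>mapreduce.client.submit.file.replication</name>
  <value>3</value>
</property>

Files already sitting under the staging directory keep their old target until they are re-created or adjusted with hadoop fs -setrep.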


Source: (StackOverflow)

How to check a ubifs filesystem?

ubifs has no fsck program, so how do you check the filesystem integrity when using ubifs?

My target system is ARM, Linux 3.2.58.


Source: (StackOverflow)

Where can I get fsck code?

I have been trying to find the fsck source code. I cannot find it in the coreutils package in Ubuntu. Could someone please let me know where I can take a look at the fsck code?
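
For what it's worth (assumptions about a stock Ubuntu install): on current releases the fsck front-end ships in the util-linux package, not coreutils, and the ext2/3/4 checker it dispatches to (e2fsck) lives in e2fsprogs. A sketch of pulling the sources on Ubuntu/Debian:

apt-get source util-linux     # the fsck wrapper (disk-utils/fsck.c in the source tree)
apt-get source e2fsprogs      # e2fsck, the checker for ext2/3/4

Both upstream trees are also browsable via kernel.org's git hosting.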


Source: (StackOverflow)

Running fsck with other program and UDEV in bash

After much trouble I got the UDEV rule to run after inserting a USB drive. It runs a program that renames pictures and movies; I use { } & to run the program in the background. The only thing is that the drive is easily corrupted by unplugging it, so I would also like to run fsck. Does anybody have an idea?

Here is the UDEV rule:

ACTION=="add", SUBSYSTEM=="block", ATTRS{idVendor}=="14cd", ATTRS{idProduct}=="121f", RUN+="/home/pi/bashtest.sh"

Here is the program:

#!/bin/bash
sudo umount /dev/sda1
sudo fsck -y /dev/sda1
{
# dd is a lookup string: the character at offset N-1 encodes the value N (used below for months, days, hours, minutes and seconds)
dd=1234567890aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ
sleep 5
sudo mount -t vfat /dev/sda1 /media/usb1
cd /media/usb1/DCIM/Camera
sudo find /media/usb1/DCIM/Camera -regextype posix-egrep -regex ".*[^/]{13}.JPG"|
for i in *.JPG
do
ddate=$(exiv2 "${i}"|grep timestamp)
SPEC=$ddate
read X X YEAR MONTH DAY HOUR MINUTE SECOND <<<${SPEC//:/ }
d1=${YEAR:2}
d2=${dd:(10#$MONTH-1):1}
d3=${dd:(10#$DAY-1):1}
d4=${dd:(10#$HOUR-1):1}
d5=${dd:(10#$MINUTE-1):1}
d6=${dd:(10#$SECOND-1):1}
d7=0
sudo cp -nrv --preserve=all "$i" /media/usb1/DCIM/"${d1}${d2}${d3}${d4}${d5}${d6}${d7}.JPG"
find . -name '*.JPG' -size -1 -delete
done
for i in *.MP4
do
#exiftool -createdate -S -s 20140308_133017.MP4
dddate=$(exiftool "${i}" |grep "Media Create Date" | awk -F':' '{print $2, $3, $4, $5, $6, $7}')
SPEC=$dddate
read YEAR MONTH DAY HOUR MINUTE SECOND <<<${SPEC//:/ }
d1=${YEAR:2}
d2=${dd:(10#$MONTH-1):1}
d3=${dd:(10#$DAY-1):1}
d4=${dd:(10#$HOUR-1):1}
d5=${dd:(10#$MINUTE-1):1}
d6=${dd:(10#$SECOND-1):1}
d7=0
sudo cp -nrv --preserve=all "$i" /media/usb1/DCIM/"${d1}${d2}${d3}${d4}${d5}${d6}${d7}.MP4"
done
sudo umount -l /media/usb1
sleep 5
sudo shutdown -h now
} &

Probably the code can be written better, but it works for me.


Source: (StackOverflow)

HDFS disk usage showing different information

I got the details below from hadoop fsck /:

Total size: 41514639144544 B (Total open files size: 581 B)
Total dirs: 40524
Total files: 124348
Total symlinks: 0 (Files currently being written: 7)
Total blocks (validated): 340802 (avg. block size 121814540 B) (Total open file blocks (not validated): 7)
Minimally replicated blocks: 340802 (100.0 %)

I am using a 256 MB block size, so 340802 blocks * 256 MB = 83.2 TB, * 3 (replicas) = 249.6 TB, but Cloudera Manager shows 110 TB of disk used. How is that possible?
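
A rough back-of-the-envelope check (an aside, not from the original post): the block count is probably the wrong thing to multiply, because fsck reports an average block size of about 122 MB, so most blocks are less than half full, and HDFS only consumes disk for the bytes actually written to each block. Using the reported total size instead:

Actual data:     41514639144544 B             ~ 37.8 TiB
Raw disk (x 3):  37.8 TiB * 3                 ~ 113 TiB
Naive estimate:  340802 blocks * 256 MiB * 3  ~ 249.6 TiB   (overcounts: blocks are ~45% full on average)

113 TiB is much closer to the ~110 TB Cloudera Manager reports; the remaining gap could be TB-vs-TiB rounding or blocks that have not yet reached 3 replicas.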


Source: (StackOverflow)

Are badblocks related to a partition or permanent?

I ran a check on a partition:

sudo e2fsck -c /dev/sdb3

It found some bad blocks. As far as I understand, it marked the bad blocks so that no files will use them.

My question is: is that "marking" persistent, or is it tied to the partition? More specifically, if I reformat the partition with something like

sudo mkfs.ext4 /dev/sdb3

are the bad blocks still marked?
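
A hedged note (not part of the question): the list that e2fsck -c builds is stored in that filesystem's bad-blocks inode, so a plain mkfs.ext4 creates a fresh filesystem with an empty list (sectors the drive firmware has remapped are a separate, hardware-level matter). A sketch of carrying the list across a reformat:

dumpe2fs -b /dev/sdb3 > badblocks.txt   # dump the list currently recorded in the filesystem
mkfs.ext4 -l badblocks.txt /dev/sdb3    # reuse the saved list when formatting
mkfs.ext4 -c /dev/sdb3                  # ...or simply re-scan instead (-cc for a slower read-write test)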


Source: (StackOverflow)

Windows 7 seems to mess up ext4 group descriptors

I recently installed Linux Mint Debian Edition into a logical partition (formatted as ext4) inside a 40 GB extended partition that was previously used as a backup/recovery disk in Windows 7. It works quite well; the only problem is that after I boot into Windows, Linux won't boot the next time. I then need to use a recovery distro and run fsck.ext4, which detects some problems with group descriptors, fixes them, and all is good again. My feeling is that Windows tries to mount (and fix / defrag / whatever) that partition, which messes it up. I suspect that because 1) it only happens after I boot into Windows, and 2) Windows still displays the old recovery/backup disk D: (although you cannot access it and it doesn't show free/total space etc.). Any idea how to fix this?


Source: (StackOverflow)

Debian can't find file but ls shows the file

My MySQL server doesn't start because a file is missing.

The funny thing is that the file is visible to ls -al, but other commands can't find it.

ls -al shows the files

And that's the error. A few files work.

stat can't find the file or directory

If I use shutdown -rF now to check the filesystem, I get no errors.

What's the problem? :(

Filesystem: EXT3
Debian version: 6.0.10

Some information about the disk


Source: (StackOverflow)

Slave VMs are down in CloudLab

Two of my three slave VMs are down and I can't ssh into them. We have performed a hard reboot, but they are still down. Any idea how to bring them back, or how to debug this to find the reason? Here's what jps shows:

3542 RunJar
9920 SecondaryNameNode
10094 ResourceManager
10244 NodeManager
8677 DataNode
31634 Jps
8536 NameNode

Here's also another detail:

ubuntu@anmol-vm1-new:~$ sudo netstat -atnp | grep 8020 
tcp        0      0 10.0.1.190:8020         0.0.0.0:*               LISTEN      8536/java       
tcp        0      0 10.0.1.190:50957        10.0.1.190:8020         ESTABLISHED 8677/java       
tcp        0      0 10.0.1.190:8020         10.0.1.190:50957        ESTABLISHED 8536/java       
tcp        0      0 10.0.1.190:8020         10.0.1.193:46627        ESTABLISHED 8536/java       
tcp        0      0 10.0.1.190:44300        10.0.1.190:8020         TIME_WAIT   -               
tcp        0      0 10.0.1.190:8020         10.0.1.190:44328        ESTABLISHED 8536/java       
tcp        0      0 10.0.1.190:8020         10.0.1.193:44610        ESTABLISHED 8536/java       
tcp6       0      0 10.0.1.190:44292        10.0.1.190:8020         TIME_WAIT   -               
tcp6       0      0 10.0.1.190:44328        10.0.1.190:8020         ESTABLISHED 10244/java      
tcp6       0      0 10.0.1.190:44252        10.0.1.190:8020         TIME_WAIT   -               
tcp6       0      0 10.0.1.190:44247        10.0.1.190:8020         TIME_WAIT   -               
tcp6       0      0 10.0.1.190:44287        10.0.1.190:8020         TIME_WAIT   -               

When I run the following command:

hadoop fsck /

the result is:

The filesystem under path '/' is CORRUPT

Here are more details in this pastebin.
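
A hedged sketch of the usual next steps (not from the question; the file path is a placeholder), assuming the first goal is just to find out what is corrupt:

hdfs fsck / -list-corruptfileblocks                   # which files own the corrupt blocks
hdfs fsck /path/to/file -files -blocks -locations     # where their replicas should live
hdfs fsck / -delete                                   # destructive: drop the corrupt files (or -move them to /lost+found)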


Source: (StackOverflow)

GCE: Is there access to the maintenance shell?

One of my instances is having trouble with a disk. The serial console teasingly displays the error and a console prompt:

fsck.ext4: No such file or directory while trying to open /dev/sdbt
Possibly non-existent device?
fsck died with exit status 8
[?25l[?1c7[1G[[31mFAIL[39;49m8[?25h[?0c[31mfailed (code 8).[39;49m
[....] File system check failed. A log is being saved in /var/log/fsck/checkfs if that location is writable. Please repair the file system manually. ...[?25l[?1c7[1G[[31mFAIL[39;49m8[?25h[?0c [31mfailed![39;49m
[....] A maintenance shell will now be started. CONTROL-D will terminate this shell and resume system boot. ...[?25l[?1c7[1G[[33mwarn[39;49m8[?25h[?0c [33m(warning).[39;49m
sulogin: root account is locked, starting shell
root@(none):~# 

Is there any way to make the serial console interactive? It'd be great to look at the fstab file for starters, or even hit Ctrl-D to kick it along.

Also, can anything be done to clean up the gibberish on the console?
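
A hedged pointer, assuming a reasonably recent gcloud SDK (this feature may postdate the question): the serial console can be made interactive by enabling it in the instance metadata and then connecting with gcloud; the instance and zone names below are placeholders.

gcloud compute instances add-metadata INSTANCE_NAME --zone ZONE --metadata serial-port-enable=TRUE
gcloud compute connect-to-serial-port INSTANCE_NAME --zone ZONE

As for the gibberish, those are just ANSI colour and cursor escape codes from the boot scripts; an interactive terminal would render them instead of printing them literally.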


Source: (StackOverflow)

Android: disable fsck_msdos on a particular SD card

Using a Samsung Galaxy SIII (AT&T version) running Android 4.0.4 with stock Samsung customizations, I am just trying to mount an SD card (through a card reader connected via an OTG adapter to the USB host port).

Unfortunately, this device seems to "have too high standards" for filesystem correctness - it refuses to mount the card, saying Unable to read FAT: Success.

Other Android devices do not have this problem, perhaps running less extensive checking.

The SIII is able to mount brand new SD cards, but after even a few days of normal use of a class 10 card, the card is not mountable in the SIII, despite being mountable on other Android devices, as well as Mac, Linux, and Windows.

Just like you can put a .nomedia file on a disk to disable the MediaScanner, is there something like .nofsck that I can put on the cards to disable or restrict the fscking? Alternatively, is there anything that a non-root app can do to disable or restrict the fscking?


Source: (StackOverflow)

How to know if fsck is running?

My Android app needs to store 15 GB of data, so it needs to use removable storage. I therefore need to check whether fsck is running while my app is loading (and quit if it is). Of course I can do a

Runtime.getRuntime().exec("ps");

but I have read that this solution is not reliable... So I tried with

ActivityManager manager = (ActivityManager) getSystemService(ACTIVITY_SERVICE);
for (RunningAppProcessInfo tasks : manager.getRunningAppProcesses()) {
     System.out.println(" ########## " + tasks.processName + " #######");
}

but I don't see any fsck. Actually, using ps I see it as being executed as

/system/bin/fsck.exfat -R -f /dev/...

Is the only way using Runtime.getRuntime().exec("ps")?

Thanks!

L.

EDIT

The call Environment.getExternalStorageState() does not work for me. After unmounting and remounting the SD card, with the code below, I get MOUNTED and /system/bin/fsck.exfat -R -f /dev/... AT THE SAME TIME.

String line;
BufferedReader br;
try {
    br = new BufferedReader(new InputStreamReader(Runtime.getRuntime().exec("ps").getInputStream()));
    while ((line = br.readLine()) != null) {
        if (line.contains("/fsck"))
            System.out.println("### PS:" + Environment.getExternalStorageState() + " ### " + line);
    }
} catch (IOException e1) {
    e1.printStackTrace();
}

This is the output I get:

PS:mounted ### root  [...] D /system/bin/fsck.exfat /dev/...

Source: (StackOverflow)

Verifying a git repository (Stash) restore

We have disaster recovery plans that mean we take a backup of our git installation (Atlassian Stash in our case) and restore it on a test server to verify the backup was a good one. If the restore process fails then we have a problem but we're wondering about going a bit further when the restore is a success and verifying the restored repositories.

Would using git fsck be a good idea here? Running it locally as a developer turns up some dangling or unreachable objects; I believe that is a normal thing. But on a fresh git clone there shouldn't be any issues, right? So if fsck reported errors, we'd know we're having a bad time?

As a second option we could also point our CI server at a restored repository and have it build and run tests. As our main branch should always be healthy then any build failures would indicate an issue.

Any other ideas on verifying a repository is good and healthy?

Gog
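
A hedged sketch of one automated check (not from the question; the repository URL is a placeholder), assuming a non-zero git fsck exit code should fail the restore verification:

git clone --mirror ssh://git@stash.example.com/project/repo.git verify-repo.git
cd verify-repo.git
git fsck --full --no-dangling    # --no-dangling hides the harmless dangling-object reports

git fsck exits non-zero when it finds real corruption, so a script that walks every restored repository can fail the DR drill on the first bad one.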


Source: (StackOverflow)