
xfs interview questions

Top xfs frequently asked interview questions

xfs: is it possible to disable the log?

I'd like to disable the log in XFS.

I didn't find an option for this in

mkfs.xfs

So my question is: is it possible to disable the log, or move it to RAM? If yes, how? Thanks
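For what it's worth, the XFS journal cannot be disabled outright, but both mkfs.xfs and mount accept an external log device, which could be a RAM disk if you can accept losing the filesystem after any crash or reboot. A minimal sketch, assuming a brd RAM disk at /dev/ram0 (sized large enough for the log) and a placeholder data device /dev/sdX1:

# WARNING: a log on volatile RAM means an unclean shutdown loses the filesystem
mkfs.xfs -l logdev=/dev/ram0,size=32m /dev/sdX1
mount -t xfs -o logdev=/dev/ram0 /dev/sdX1 /mnt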


Source: (StackOverflow)

Storing & accessing up to 10 million files in Linux

I'm writing an app that needs to store many files, up to approximately 10 million.

They are currently named with a UUID and will be around 4 MB each, always the same size. Reading and writing from/to these files will always be sequential.

Two main questions I am seeking answers for:

  1. Which filesystem would be better for this: XFS or ext4?
  2. Would it be necessary to store the files beneath subdirectories in order to reduce the number of files within a single directory?

For question 2, I note that people have tried to find the limit on the number of files XFS can store in a single directory and haven't hit it even at millions of files, with no performance problems reported. What about ext4?

Googling around for people doing similar things, I found some suggestions to store the inode number as the link to the file instead of the filename, for performance (in a database index, which I'm also using). However, I don't see a usable API for opening a file by inode number. That seemed to be more of a suggestion for improving performance under ext3, which, by the way, I am not intending to use.

What are the ext4 and XFS limits? What performance benefits does one have over the other, and could you see a reason to use ext4 over XFS in my case?
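On either filesystem, a common way to sidestep the single-directory question is to fan the files out across two directory levels derived from the UUID itself, which keeps every directory small. A minimal sketch with placeholder paths and a hypothetical UUID:

# e.g. 4f8a9c2e-... lands in /data/4f/8a/
uuid=4f8a9c2e-13d7-47b2-9c1e-8a2b3c4d5e6f
dir=/data/${uuid:0:2}/${uuid:2:2}
mkdir -p "$dir"
mv staging/"$uuid" "$dir/$uuid"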


Source: (StackOverflow)


large PAGE size for XFS block size

I am using Ubuntu 14 LTS x86_64 with a page size of 4096 bytes. The XFS documentation says that the XFS block size cannot exceed the kernel page size. Do I need to use huge pages to increase the filesystem block size?

Could you also suggest whether any alternatives are possible? I could not find any.
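As far as I know, huge pages will not help here: the page cache still works in base pages, so on Linux a filesystem with a block size larger than the base page size simply cannot be mounted, making 4096 the practical ceiling on this system. Quick check, with a placeholder device:

getconf PAGE_SIZE                  # 4096 on this system
mkfs.xfs -b size=4096 /dev/sdX1    # largest block size that will still mount here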


Source: (StackOverflow)

How to find the logical name for an XFS PinPad if it is not mentioned in the manual

I have started an XFS (CEN/XFS) implementation for a SZZT PinPad. I am facing an issue with the WFSOpen command: it gives error -14, which the manual lists as WFS_ERR_HARDWARE_ERROR. Please let us know if we are missing a parameter value. Also, we are unable to find the logical name for the SZZT PinPad in the manual; for now we are using the name mentioned in the registry.
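For what it's worth, CEN/XFS logical service names come from the Windows registry rather than from the device manual, so dumping the relevant keys shows which names the XFS Manager will accept. The hive below is an assumption (it varies between XFS Manager versions and vendor SDKs):

rem hive varies by XFS Manager version; adjust if your SDK uses HKLM\SOFTWARE\XFS
reg query "HKEY_USERS\.DEFAULT\XFS\LOGICAL_SERVICES" /s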


Source: (StackOverflow)

Directories created by boost::filesystem::create_directories() not immediately accessible?

I am using boost::filesystem::create_directories() to create new directories. When I try to access these directories shortly after creation, I get an error saying: no such directory. But if I sleep for a while after creating the directories, everything is fine (I do not get the error). I also tried using fsync() and sync() after creating the directories, but it made no difference. I am testing on ext4 and XFS filesystems, and my Boost version is 1.44.

My questions are:

  1. Does boost::filesystem::create_directories() create directories that are immediately accessible, or is it possible that something is wrong there? (See the sketch after this list.)
  2. Also, are sync() and fsync() guaranteed to flush everything to disk on ext4/XFS?
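For question 1: POSIX requires a directory to be visible to path lookups as soon as mkdir() returns, and boost::filesystem::create_directories() wraps mkdir() on Linux, so no sleep should be needed; sync() and fsync() affect durability across crashes, not visibility. A quick shell sanity check of the visibility guarantee:

mkdir -p /tmp/demo/a/b/c && ls -d /tmp/demo/a/b/c   # never fails with ENOENT

If the error still shows up, the path being probed probably differs from the one created, e.g. a relative path resolved against a different working directory.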

Source: (StackOverflow)

XFS No space left on device

I have a server set up with an XFS partition on LVM. While copying files to the home partition, "No space left on device" is displayed.

df -h displays sufficient space:

/dev/mapper/prod--vg-home     35G   21G   15G  60% /home

df -i also displays sufficient inodes:

/dev/mapper/prod--vg-home   36700160  379390 36320770    2% /home

I tested the impact of changing the maximum percentage of space allowed for inodes:

xfs_growfs -m 25 /dev/mapper/prod--vg-home

This amount can easily be decreased and increased.

While experimenting with this setting, I noticed that decreasing it to 3%, increasing it back to 25%, and deleting some files lets me add a lot more files again.

xfs_info displays:

meta-data=/dev/mapper/prod--vg-home isize=256    agcount=14, agsize=655360 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=9175040, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

I did read about 64-bit inodes, but that seems to be applicable only to large drives (over 1 TB).

Is there any other setting which could cause the "No space left on device" message?

Thank you
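One more thing worth ruling out: XFS allocates inodes in contiguous multi-block chunks, so heavily fragmented free space can yield "No space left on device" on file creation even while df shows free blocks and free inodes. The free-space histogram can be inspected read-only with xfs_db:

xfs_db -r -c "freesp -s" /dev/mapper/prod--vg-home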


Source: (StackOverflow)

Is EXT4 outdated for production? [closed]

I just downloaded openSUSE 13.2, and it now offers two filesystem options other than ext4, but I see people debating about problems with Btrfs and XFS.

So, if ext4 is outdated, which one is best for production (serving static files from a webserver)?


Source: (StackOverflow)

Can Btrfs use SSD for metadata and leave bulk data on HDD?

Is it possible for Btrfs to use an SSD for metadata only and leave bulk data on less costly storage such as an HDD? I referred to the page Using_Btrfs_with_Multiple_Devices and didn't find a solution.

Thanks!


Source: (StackOverflow)

XFS grow not working

So I have the following setup:

[ec2-user@ip-172-31-9-177 ~]$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  80G  0 disk 
├─xvda1 202:1    0   6G  0 part /
└─xvda2 202:2    0   4G  0 part /data

All the tutorials I find say to use xfs_growfs <mountpoint>, but that has no effect, nor does the -d option:

[ec2-user@ip-172-31-9-177 ~]$ sudo xfs_growfs -d /
meta-data=/dev/xvda1             isize=256    agcount=4, agsize=393216 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=1572864, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data size unchanged, skipping

I should add that I am using:

[ec2-user@ip-172-31-9-177 ~]$ cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.0 (Maipo)
[ec2-user@ip-172-31-9-177 ~]$ xfs_info -V
xfs_info version 3.2.0-alpha2
[ec2-user@ip-172-31-9-177 ~]$ xfs_growfs -V
xfs_growfs version 3.2.0-alpha2
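One thing the lsblk output shows: xvda1 is a 6G partition on an 80G disk, and xfs_growfs can only grow the filesystem up to the end of its partition, hence "data size unchanged". The partition itself has to be enlarged first; a sketch using growpart from cloud-utils, assuming it is installed:

sudo growpart /dev/xvda 1   # grow partition 1 to the end of the disk
sudo xfs_growfs /           # now the filesystem has room to grow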

Source: (StackOverflow)

Race condition while moving files on Linux

Suppose I have two scripts. The first one puts (with the mv command) some files into a directory; the second one checks the directory once in a while and processes the files. The situation I'm concerned about is the second script starting to process a file that is only partly moved at that moment. Can this happen in real life on an XFS filesystem?
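If the source file and the target directory are on the same filesystem, mv is a single rename(2) call, which is atomic: the second script sees either the old state or the complete file, never a partial one. Across filesystems, mv falls back to copy-then-unlink and the race is real. A common pattern, with placeholder paths (the staging directory must be on the same filesystem as the watched one):

tmp=$(mktemp /srv/watched/.staging/file.XXXXXX)
cp /elsewhere/file "$tmp"
mv "$tmp" /srv/watched/file   # atomic rename(2), same filesystem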


Source: (StackOverflow)

How to reduce the default metadata size for an XFS file system?

I have a special-purpose 12-disk volume, 48 TB total. After mkfs with default parameters and mounting with inode64, the reported available space for files is 44 TB. So there is 4 TB of metadata overhead, almost 10%.

I think this metadata size is probably intended to accommodate tens of millions of inodes, whereas I use only large files and would need 1-2 million files at most. Given this, my question is whether it's possible to recover 2-3 TB of the 4 TB of metadata to use for file data.

In the man page I see a maxpct option, and possibly others, but I cannot figure out the correct way to use them in my case. I still need to make sure that the volume can hold the 2 million files. Also, I understand some metadata space is used for journaling, and here I don't know how much would be enough.
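Two hedged observations: part of the gap may simply be units, since 48 TB is about 43.7 TiB before any metadata is counted; and maxpct is a cap on how much space inodes may ever consume, not an up-front reservation, so inodes are allocated on demand and lowering it frees nothing by itself. It can still be set at mkfs time, along with an explicit log size; the device name below is a placeholder:

mkfs.xfs -i maxpct=1 -l size=512m /dev/mapper/bigvol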


Source: (StackOverflow)

Using filesystem as database for 15M files - is it efficient?

I have 15 million simple key/value records. The keys are all single words; the values range in size from a few bytes to 10 MB each.

Random keys will need to be frequently accessed.

I'm thinking that it would be much more efficient to just store these as files in a directory instead of in a database. Instead of having a massive table with all of these entries, all I need is a directory with the filename as the key and the value inside the file.

This means that if I want the value for key azpdk, I just need file_get_contents('/my/directory/azpdk') in PHP instead of troubling MySQL with such a request.

In my head this makes sense and I expect it to be more efficient to use the filesystem instead of a database for this. Am I correct in this assumption? Will this still be fast and efficient with 15 million files in one directory?

FYI, the filesystem is XFS.
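XFS directories are B+trees, so a single open() in a 15-million-entry directory stays cheap; what suffers are operations that list or sort the whole directory. A common compromise is to shard by a hash of the key, sketched here with placeholder paths:

key=azpdk
h=$(printf '%s' "$key" | md5sum)              # first hex chars pick the shard
path="/my/directory/${h:0:2}/${h:2:2}/$key"   # e.g. /my/directory/a0/5a/azpdk
mkdir -p "$(dirname "$path")"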


Source: (StackOverflow)

How do I check the filesystem type of a device?

I formatted a partition using mkfs.xfs /dev/mydevice in Ubuntu and then I mounted it using /etc/fstab. When I type mount, it tells me that my device is mounted as ext3.

Output of mount:

/dev/mydevice on /mnt/mymount type ext3 (rw,_netdev)

First question: How do I know if it's xfs or ext3? What am I missing?

Second question: If it's xfs, how do I know if it's xfs-256 or xfs-512?
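The on-disk format can be checked independently of the mount table, and the 256/512 distinction most likely refers to the inode size, which xfs_info reports as isize:

blkid /dev/mydevice       # prints TYPE="xfs" or TYPE="ext3" from the superblock
df -T /mnt/mymount        # type the kernel actually mounted
xfs_info /mnt/mymount     # if XFS, the isize=256 / isize=512 field answers question 2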


Source: (StackOverflow)

Which Linux filesystem is more suitable for video file streaming?

Currently I'm choosing among XFS, ReiserFS, and ext4, and I'm not sure which one would be better.

My application is a video on demand service, with thousands of video files.

Any suggestions?


Source: (StackOverflow)

XFS file size, inode size and block size

ll /srv/node/dcodxx/test.sh
-rw-r--r--. 1 root root 7 Nov  5 11:18 /srv/node/dcodxx/test.sh

The size of the file is shown in bytes. This file is stored in an xfs filesystem with block size 4096 bytes.

xfs_info /srv/node/sdaxx/
meta-data=/dev/sda               isize=256    agcount=32, agsize=7630958 blks
         =                       sectsz=4096  attr=2, projid32bit=0
data     =                       bsize=4096   blocks=244190646, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=119233, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Does this mean that a block can house more than one file? If not, what happens to the remaining bytes (4096 - 7)? Also, where are the 256 bytes reserved for the inode stored? If the inode is stored in the same block as the file, shouldn't the file size be larger (256 + 7)?
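A 7-byte file still consumes one whole 4096-byte data block, since XFS does not pack multiple files into a block; and the inode lives in a separate inode chunk, not in the file's data block, so its 256 bytes never appear in the ls -l size. Both numbers are visible with stat:

stat -c 'apparent: %s bytes; allocated: %b blocks of %B bytes' /srv/node/dcodxx/test.sh
# apparent: 7 bytes; allocated: 8 blocks of 512 bytes  (8 x 512 = one 4 KiB block)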


Source: (StackOverflow)