
GlusterFS interview questions

Top GlusterFS frequently asked interview questions

How to disable the page cache in the Linux kernel?

How can an application bypass the page cache and read or write data directly from disk? What needs to be set in the kernel?
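
For reference, a minimal sketch of bypassing the page cache from userspace with O_DIRECT, which dd exposes via its direct flags (the target path is hypothetical and must live on a filesystem that supports O_DIRECT, so not tmpfs):

# write and read bypassing the page cache; the block size must satisfy
# the device's alignment requirements
dd if=/dev/zero of=/data/direct.out bs=4096 count=1024 oflag=direct
dd if=/data/direct.out of=/dev/null bs=4096 iflag=direct

# alternatively, drop already-cached clean pages system-wide
sync; echo 3 | sudo tee /proc/sys/vm/drop_caches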


Source: (StackOverflow)

GlusterFS 3.4 striped volume disk usage

On a GlusterFS 3.4.3 server, when I create a volume with

gluster volume create NEW-VOLNAME stripe COUNT NEW-BRICK-LIST...

and store some files, the volume consumes 1.5 times the space of the data actually stored, regardless of the number of stripes. For example, if I create a 1GB file in the volume with

dd if=/dev/urandom of=random1gb bs=1M count=1000

it consumes 1.5GB of total disk space across the bricks. "ls -alhs", "du -hs", and "df -h" all indicate the same thing: 1.5GB of space used for a 1GB file. Inspecting each brick and summing up the usage shows the same result.
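
For what it's worth, per-brick usage can be inspected like this (the brick path here is hypothetical; run it on each brick server). Striped files are stored as sparse files on every brick, so comparing allocated blocks against apparent size is informative:

# allocated blocks (first column of ls -ls) vs. apparent size of the chunk
ls -ls /data/brick1/random1gb
du -sh /data/brick1/random1gb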

Interestingly, this doesn't happen with the newer version, GlusterFS 3.5: a 1GB file uses 1GB of total brick space, as expected. It's good that this is fixed in 3.5, but I cannot use 3.5 right now due to another issue.

I couldn't find any document or article about this. Did I use a wrong option (I left everything at the defaults)? Or is it a bug in 3.4? It seems too serious a problem to be just a bug. If it is by design, why? To me it looks like a huge waste of storage for a storage system.

To be fair, I'd like to point out that GlusterFS works very well except for this issue: excellent performance (especially with the qemu-libgfapi integration), easy setup, and flexibility.


Source: (StackOverflow)


Gluster: strange issue with a shared mount point behaving like separate mounts

I have two nodes. As an experiment I installed GlusterFS, created a volume, and successfully mounted it on each node. But if I create a file on node1, it does not show up on node2; the two nodes behave as if their mounts were separate.

node1

10.101.140.10:/nova-gluster-vol
                      2.0G  820M  1.2G  41% /mnt

node2

10.101.140.10:/nova-gluster-vol
                      2.0G   33M  2.0G   2% /mnt

volume heal info split-brain

$ sudo gluster volume heal nova-gluster-vol info split-brain
Gathering Heal info on volume nova-gluster-vol has been successful

Brick 10.101.140.10:/brick1/sdb
Number of entries: 0

Brick 10.101.140.20:/brick1/sdb
Number of entries: 0

test

node1

$ echo "TEST" > /mnt/node1
$ ls -l /mnt/node1
-rw-r--r-- 1 root root 5 Oct 27 17:47 /mnt/node1

node2 (the file isn't there, even though this should be the same shared volume)

$ ls -l /mnt/node1
ls: cannot access /mnt/node1: No such file or directory

What am I missing?


Source: (StackOverflow)

Clustered file system for storing large number of backups

I need your advice on choosing a distributed file system.
I need it for storing many backups (regular files, SQL dumps, etc.).
The ideal candidate would be:

  • distributed
  • actively maintained (at least not dead)
  • quick failover (for geographically distributed nodes)
  • large community
  • Open Source

So far I have two choices: XtreemFS and GlusterFS. The first seems cool, but it doesn't have a large community and generally develops slowly (it's also Java-based).
GlusterFS has Red Hat behind it and other nice things, but there are some negative reviews.

Need help with this :)


Source: (StackOverflow)

Enable Direct I/O mode in GlusterFS

  1. The GlusterFS server ignores the O_DIRECT flag by default. How can I make the server work in direct-io mode?
  2. With mount -t glusterfs XXX:/testvol -o direct-io-mode=enable mountpoint, the GlusterFS client works in direct-io mode, but files are still cached on the server side.

How can I get both the client and the server to work in direct-io mode?
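
A possible approach, sketched under the assumption that your GlusterFS version supports these volume options (verify with gluster volume set help): disable the server-side performance translators that cache data, then mount the client with direct-io enabled.

# turn off the caching performance translators on the volume
gluster volume set testvol performance.io-cache off
gluster volume set testvol performance.read-ahead off
gluster volume set testvol performance.quick-read off
gluster volume set testvol performance.write-behind off
# then mount the client with direct-io enabled
mount -t glusterfs -o direct-io-mode=enable XXX:/testvol /mountpoint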


Source: (StackOverflow)

Can several nodes access mounted Docker containers?

I have a VM running an app that uses about 8 Docker containers.

If I move /var/lib/docker to /mnt/containers, where /mnt/containers is mounted via GlusterFS to a larger system, I start getting errors like this:

kernel@192.168.68.14: Jun 17 16:05:10 stackato-ft9y kernel: [ 2174.535122] aufs au_xino_set:1176:docker[7572]: I/O Error, failed creating xino(-27).
kernel@192.168.68.14: Jun 17 16:05:10 stackato-ft9y kernel: [ 2174.538613] aufs au_xino_set:1176:docker[7572]: I/O Error, failed creating xino(-27).
dockerd@192.168.68.14: [error] mount.go:11 [warning]: couldn't run auplink before unmount: exit status 22
dockerd@192.168.68.14: file too large
dockerd@192.168.68.14: [d954f89b] -job create(fence_app_staging_fibo_1a992a98_id-3dd68) = ERR (1)
dockerd@192.168.68.14: [error] server.go:1025 Error: file too large
dockerd@192.168.68.14: [error] server.go:90 HTTP Error: statusCode=500 file too large

I don't see these errors when running out of /var/lib/docker, or even when I move the contents of /var/lib/docker to a different local directory.
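
For concreteness, the relocation described above can be reproduced roughly like this (a sketch; a bind mount is one way to point /var/lib/docker at the Gluster-backed path):

# stop Docker, move its state onto the Gluster-backed mount, bind it back
sudo service docker stop
sudo mv /var/lib/docker /mnt/containers
sudo mount --bind /mnt/containers /var/lib/docker
sudo service docker start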

Two of us have independently stumbled on http://osdir.com/ml/linux.file-systems.aufs.user/2008-08/msg00016.html, but that doesn't quite look right. So I'm here hoping to get the attention of the resident docker/aufs/glusterfs experts.


Source: (StackOverflow)

Can I use GlusterFS volume storage directly without mounting?

I have set up a small GlusterFS cluster of 3+1 nodes, all on the same LAN: 3 servers plus 1 laptop (on Wifi) that is also a GlusterFS node. The laptop often disconnects from the network. ;)

The use case I want to achieve is this:
I want my laptop to automatically synchronize with the GlusterFS filesystem when it reconnects. (That's easy and done.) But when the laptop is disconnected from the cluster, I still want to access the filesystem "offline": modify, add, and remove files.

Obviously the only way I can access the GlusterFS filesystem while offline from the cluster is to access the volume storage directly, i.e. the directory I specified when creating the gluster volume. I guess that's the brick.
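
To make the distinction concrete, this is how one can confirm which local directory serves as the brick (the volume name here is hypothetical):

# the Bricks section lists the on-disk directories backing the volume
sudo gluster volume info myvol | grep -A2 Bricks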

Is it safe to modify files inside that storage? Will the changes be replicated to the cluster when the node reconnects?


Source: (StackOverflow)

Cannot mount GlusterFS from one of the server machines

I'm trying to mount a GlusterFS volume (4 servers, replica 2) with this command:

sudo mount -t glusterfs xx.xx.xx.xx:/spark-volume01 /glustermnt

Mount failed. Please check the log file for more details.

In the logs we have this:

[2016-02-10 15:24:09.689276] I [MSGID: 100030] [glusterfsd.c:2318:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.7 (args: /usr/sbin/glusterfs --volfile-server=46.4.68.142 --volfile-id=/spark-volume01 /glustermnt)
[2016-02-10 15:24:09.691961] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2016-02-10 15:24:09.695285] E [MSGID: 108040] [afr.c:418:init] 0-spark-volume01-replicate-1: Unable to fetch afr pending changelogs. Is op-version >= 30707? [Invalid argument]
[2016-02-10 15:24:09.695299] E [MSGID: 101019] [xlator.c:433:xlator_init] 0-spark-volume01-replicate-1: Initialization of volume 'spark-volume01-replicate-1' failed, review your volfile again
[2016-02-10 15:24:09.695307] E [graph.c:322:glusterfs_graph_init] 0-spark-volume01-replicate-1: initializing translator failed
[2016-02-10 15:24:09.695311] E [graph.c:661:glusterfs_graph_activate] 0-graph: init failed
[2016-02-10 15:24:09.695467] W [glusterfsd.c:1236:cleanup_and_exit] (-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x327) [0x40d9f7] -->/usr/sbin/glusterfs(glusterfs_process_volfp+0x117) [0x408927] -->/usr/sbin/glusterfs(cleanup_and_exit+0x4d) [0x40805d] ) 0-: received signum (0), shutting down
[2016-02-10 15:24:09.695480] I [fuse-bridge.c:5654:fini] 0-fuse: Unmounting '/glustermnt'.

All the versions are the same on all machines. Also, this is a new volume with no data on it. Please help!
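
One thing the "Is op-version >= 30707?" line suggests checking: the cluster-wide op-version may still be at an older level even though the packages are all 3.7.7. A sketch of how to verify and raise it (run on any server node):

# current operating version of the cluster
grep operating-version /var/lib/glusterd/glusterd.info
# raise it to match 3.7.7 (only once all nodes run >= 3.7.7)
sudo gluster volume set all cluster.op-version 30707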

P.S. A non-replicated distributed volume mounts without any problems:

xxx@xxxxx:~$ sudo mount -t glusterfs x.x.x.x:/spark-volume-non-replicated /glusterfsmnt-non-replicated
xxx@xxxxx:~$ df -h
Filesystem                                Size  Used Avail Use% Mounted on
x.x.x.x:/spark-volume-non-replicated  3.4T  1.7T  1.5T  54% /glusterfsmnt-non-replicated

Source: (StackOverflow)

Slow uploads with Nginx and Gluster

We are having trouble with uploads to our site, which runs Django with Gunicorn behind nginx. We also have a Gluster mount on the app server where uploaded files are stored, distributed-replicated across several servers. (All tiers are on AWS.)

When we upload a file (~15 MB), we get a 502 Bad Gateway, and the nginx logs show "upstream prematurely closed connection while reading response header from upstream". Our upload speeds are extremely slow (<5k). We can upload to other sites just fine, and our connection uploads at around 10 MB to anything else.

Is there any configuration I am missing that would allow file uploads through gunicorn or nginx?

nginx.conf

user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {

    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    server_names_hash_bucket_size 256;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # nginx-naxsi config
    ##
    # Uncomment it if you installed nginx-naxsi
    ##

    #include /etc/nginx/naxsi_core.rules;

    ##
    # nginx-passenger config
    ##
    # Uncomment it if you installed nginx-passenger
    ##

    #passenger_root /usr;
    #passenger_ruby /usr/bin/ruby;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

conf.d files:

client_max_body_size 256m;

_

proxy_read_timeout 10m;
proxy_buffering off;
send_timeout 5m;

_
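
For reference, a few more upload-related directives that might be worth checking in this setup (values are illustrative, not recommendations; proxy_request_buffering requires nginx >= 1.7.11):

client_body_timeout 5m;        # time allowed between successive reads of the request body
proxy_send_timeout 5m;         # timeout for sending the request to gunicorn
proxy_request_buffering off;   # stream uploads to the upstream as they arrive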

We have a feeling it may be either nginx or the gluster mount. We have been working on this for days and have looked through all the timeout* variables in nginx and gunicorn without making any progress.

Any help would be appreciated, Thank you!


Source: (StackOverflow)

Any example of using GlusterFS from Node.js?

I have used Mongo GridFS and WeedFS to store media files from Node.js before, and now I would like to evaluate GlusterFS. I found a Node.js module at https://github.com/qrpike/GlusterFS-NodeJS, but the examples shown below are confusing:

var GlusterFS, gfs;

GlusterFS = require('glusterfs');

gfs = new GlusterFS;

gfs.peer('status', null, function(res) {
   return console.log('Peer Status:', JSON.stringify(res));
});

gfs.volume('info', 'all', function(res) {
   return console.log('Volume Info (All Volumes)', JSON.stringify(res));
});

The example above doesn't give me a straightforward way to store media files, the way I can with gridfs-stream (https://github.com/aheckmann/gridfs-stream) or the WeedFS node wrapper (https://github.com/cruzrr/node-weedfs).

Am I understanding GlusterFS wrong? I would like basic examples of how to store and retrieve files from GlusterFS through a Node.js API. Please help me with this. Thanks.
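
Worth noting: once a GlusterFS volume is FUSE-mounted it is just a POSIX filesystem, so plain fs streams work without any Gluster-specific API (the module above appears to only wrap the gluster management CLI). A minimal sketch, assuming the volume is mounted at /mnt/gluster:

var fs = require('fs');

// store: stream a local file into the mounted Gluster volume
fs.createReadStream('/tmp/video.mp4')
  .pipe(fs.createWriteStream('/mnt/gluster/media/video.mp4'));

// retrieve: stream it back out
fs.createReadStream('/mnt/gluster/media/video.mp4')
  .pipe(fs.createWriteStream('/tmp/video-copy.mp4'));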


Source: (StackOverflow)

How can I mount a GlusterFS volume at /var/lib/docker on Ubuntu 14.04?

I installed glusterfs-server and Docker on Ubuntu 14.04:

# install Glusterfs
sudo apt-get update;
sudo apt-get install -y python-software-properties;
sudo add-apt-repository -y ppa:gluster/glusterfs-3.6;
sudo apt-get update;
sudo apt-get install -y glusterfs-server;

gluster peer probe $NODE1_DNS;
gluster volume create file_store_docker replica 2 transport tcp $NODE1_DNS:/brickdocker $PUBLIC_DNS:/brickdocker force;
gluster volume start file_store_docker;
sudo mkdir /var/lib/docker;
mount -t glusterfs $PUBLIC_DNS:/file_store_docker /var/lib/docker;

# install Docker with AUFS
sudo apt-get update;
sudo apt-get -y install linux-image-extra-$(uname -r);
sudo sh -c "wget -qO- https://get.docker.io/gpg | apt-key add -";
sudo sh -c "echo deb http://get.docker.io/ubuntu docker main\ > /etc/apt/sources.list.d/docker.list";
sudo apt-get update;
sudo apt-get -y install lxc-docker;

When I run the line below:

sudo docker run -p 80:80 --name docker-wordpress-nginx -d eugeneware/docker-wordpress-nginx

I get this message:

Error response from daemon: error creating aufs mount to /var/lib/docker/aufs/mnt/0b78a98c13f26eebcdef6517654ff80bdf6b35f433ac06be632aa55e8f3bb4a1-init: file too large

Can you help me understand this error? How do I mount a GlusterFS volume at /var/lib/docker on Ubuntu 14.04?
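
A possible workaround to sketch, assuming the underlying problem is aufs trying to create its xino file on a FUSE/GlusterFS mount (which would match the "file too large" error): run Docker with a storage driver other than aufs. On Ubuntu 14.04 with lxc-docker this can be set via DOCKER_OPTS:

# use devicemapper (or vfs) instead of aufs on top of the Gluster mount
echo 'DOCKER_OPTS="-s devicemapper"' | sudo tee -a /etc/default/docker
sudo service docker restart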


Source: (StackOverflow)

Shared storage FS (GFS2, GlusterFS, ?) comparison and test

Problem description: For our application (RHEL 5 and 6) we use shared storage (EVA) and need to find a replacement for OCFS2 (which is not supported on RHEL 6) for several filesystems shared between 2-7 nodes. The current candidates are GFS2 and GlusterFS.

Usage: The system receives files of 10-100 MB (via SFTP/SCP) and processes them (create, rename within a directory, move between directories, read, remove).

Limitations: The amount of data processed this way (created and removed) is up to 3 TB/day (max 60 MB/s). The filesystem must be able to handle thousands of such files in a single directory during a backlog.

Reason for GFS2/GlusterFS: Both are Red Hat technologies. The reason for considering GlusterFS over GFS2 is simplicity: GFS2 requires a RH cluster installation while GlusterFS does not. The question is performance.
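
Since the workload is well specified, one way to compare candidates is to replay it synthetically on each filesystem before committing. A sketch with fio (the mount point is hypothetical; parameters approximate the 10-100 MB file sizes, and numjobs can be set to the node count):

# sequential writes of 100 MB files from 8 concurrent workers
fio --name=backlog --directory=/mnt/candidate --rw=write \
    --bs=1M --size=100M --numjobs=8 --group_reporting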

It would be really helpful to get some more recommendations and/or find a comparison (I know the two are of fundamentally different types, but still).

Thanks, Jan


Source: (StackOverflow)

GlusterFS get-attribute IOPS and caching

I'm building a GlusterFS share for a heavy get-attribute workload. The cluster is configured as a replicated volume and needs to host several TB of data.

In this scenario the bulk of client requests are of the "get attribute" type (e.g. Apache checking last-modified).

I'm wondering about disk I/O performance:

  • How does GlusterFS perform, in terms of disk I/O, for get-attribute requests?
  • Does each get-attribute request reach the underlying disk, or is it served by the server without touching the disk?
  • Is Gluster's caching effective in this scenario? (A tuning sketch follows this list.)
  • Is there a request to the replica server each time a get-attribute reaches the Gluster server?
  • With Gluster as the chosen server solution, is it better to use the GlusterFS client or the NFS client under this specific workload?
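
Metadata caching (the md-cache translator) is the piece most relevant here; a sketch of knobs sometimes tuned for stat-heavy workloads (option names and availability vary by version, so verify with gluster volume set help):

# cache stat information on the client side for a short window
gluster volume set VOLNAME performance.stat-prefetch on
gluster volume set VOLNAME performance.md-cache-timeout 10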

Thanks in advance.


Source: (StackOverflow)

GlusterFS denied mount

I'm using GlusterFS 3.3.2, with two servers and a brick on each one. The volume is "ARCHIVE80".

I can mount the volume on Server2; if I touch a new file, it appears inside the brick on Server1.

However, if I try to mount the volume on Server1, I get an error:

Mount failed. Please check the log file for more details.

The log gives:

[2013-11-11 03:33:59.796431] I [rpc-clnt.c:1654:rpc_clnt_reconfig] 0-ARCHIVE80-client-0: changing port to 24011 (from 0)
[2013-11-11 03:33:59.796810] I [rpc-clnt.c:1654:rpc_clnt_reconfig] 0-ARCHIVE80-client-1: changing port to 24009 (from 0)
[2013-11-11 03:34:03.794182] I [client-handshake.c:1614:select_server_supported_programs] 0-ARCHIVE80-client-0: Using Program GlusterFS 3.3.2, Num (1298437), Version (330)
[2013-11-11 03:34:03.794387] W [client-handshake.c:1320:client_setvolume_cbk] 0-ARCHIVE80-client-0: failed to set the volume (Permission denied)
[2013-11-11 03:34:03.794407] W [client-handshake.c:1346:client_setvolume_cbk] 0-ARCHIVE80-client-0: failed to get 'process-uuid' from reply dict
[2013-11-11 03:34:03.794418] E [client-handshake.c:1352:client_setvolume_cbk] 0-ARCHIVE80-client-0: SETVOLUME on remote-host failed: Authentication failed
[2013-11-11 03:34:03.794426] I [client-handshake.c:1437:client_setvolume_cbk] 0-ARCHIVE80-client-0: sending AUTH_FAILED event
[2013-11-11 03:34:03.794443] E [fuse-bridge.c:4256:notify] 0-fuse: Server authenication failed. Shutting down.

How come I can mount on one server and not on the other?
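
The "failed to set the volume (Permission denied)" handshake error often points at the volume's auth settings rather than the mount itself; a diagnostic sketch (VOLNAME is ARCHIVE80 here):

# look for auth.allow / auth.reject under Options Reconfigured
gluster volume info ARCHIVE80
# if Server1's IP is excluded, widening the rule is one way to test
gluster volume set ARCHIVE80 auth.allow '*'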


Source: (StackOverflow)

GlusterFS not replicating data

I have a GlusterFS setup with two nodes (Node1 and Node2), and I can see that the two peers are connected. The problem is that when I create folders on Node1, they do not replicate to Node2. If anyone has fixed this, please suggest how to overcome it.

If I mount the volume on some other server as a GlusterFS client and create files and folders there, they do replicate to the GlusterFS nodes. Is this behavior normal?

Volume Name: testvol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster1.example.com:/var/www/drupal7/sites/default/files
Brick2: gluster2.example.com:/var/www/drupal7/sites/default/files
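
If the folders were created directly in the brick directory (/var/www/drupal7/sites/default/files) rather than through a client mount, replication is bypassed by design, which would be consistent with writes from a separate client replicating fine. A sketch of writing through a local client mount on each node instead:

# mount the volume locally and write through the mount, not the brick
sudo mkdir -p /mnt/testvol
sudo mount -t glusterfs localhost:/testvol /mnt/testvol
touch /mnt/testvol/now-replicated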

Source: (StackOverflow)