EzDevInfo.com

redhat interview questions

Top redhat frequently asked interview questions

What version of RHEL am I using?

From the shell and without root privileges, how can I determine what Red Hat Enterprise Linux version I'm running?

Ideally, I'd like to get both the major and minor release version, for example RHEL 4.0 or RHEL 5.1, etc.
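A couple of read-only checks that typically work without root (exact output, and whether lsb_release is present at all, varies by release, so treat this as a sketch):

cat /etc/redhat-release         # e.g. "Red Hat Enterprise Linux Server release 5.1 (Tikanga)"
rpm -qf /etc/redhat-release     # shows the exact release package and version
lsb_release -a                  # only if the redhat-lsb package happens to be installed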


Source: (StackOverflow)

How to unzip files bigger than 4GB?

I'm using RHEL 5.6 and unzip-5.52-3.el5. I'm trying to unzip a big file, but I get the error:

unzip -o test.zip -d unzip/
error:  Zip file too big (greater than 4294959102 bytes)
Archive:  test.zip
warning [test.zip]:  4294967296 extra bytes at beginning or within zipfile

Is there another program that can work with large zip files or do I have to wait until unzip 6 comes to RHEL? (might be years!)

Thanks
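For what it's worth, a couple of workarounds commonly suggested for ZIP64 (>4GB) archives on older EL releases, assuming the relevant tools can be installed; this is a sketch, not a tested recipe for this exact archive:

7za x test.zip -ounzip/         # p7zip (e.g. from EPEL) understands ZIP64 archives
jar xf test.zip                 # a reasonably recent JDK's jar tool can also unpack them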


Source: (StackOverflow)


How to update cURL CA bundle on RedHat?

I am running into issues where the CA bundle that ships with my version of cURL is outdated.

curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
More details here: http://curl.haxx.se/docs/sslcerts.html

Reading through the documentation didn't help me, because I couldn't work out what I needed to do or how to do it. I am running Red Hat and need to update the CA bundle. What do I need to do to update the CA bundle on Red Hat?
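For context, the approach that usually gets suggested is to fetch the Mozilla-derived bundle published by the curl project and point curl at it, either per request or system-wide. A sketch, assuming the usual RHEL bundle location of /etc/pki/tls/certs/ca-bundle.crt:

curl -o ~/cacert.pem http://curl.haxx.se/ca/cacert.pem    # current Mozilla CA bundle
curl --cacert ~/cacert.pem https://example.com/            # use it for a single request
export CURL_CA_BUNDLE=~/cacert.pem                          # or make it the default for this shell
# replacing the system bundle is the heavier option; back it up first
cp -p /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/certs/ca-bundle.crt.bak
cp ~/cacert.pem /etc/pki/tls/certs/ca-bundle.crt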


Source: (StackOverflow)

What is the value of JAVA_HOME for CentOS?

I have installed Java through yum on CentOS, but another Java program needs to know what the JAVA_HOME environment variable is. I know all about setting environment variables, but what do I set it to? The java binary is /usr/bin/java, but JAVA_HOME can't be that!
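For what it's worth, /usr/bin/java on a yum-installed system is normally just a symlink into the alternatives tree, so resolving it points at the real install directory. A sketch (the OpenJDK path below is only an example; yours will differ by version):

readlink -f /usr/bin/java
# e.g. /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/bin/java   (example output)
# JAVA_HOME is then the directory above bin/, so something like:
export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre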


Source: (StackOverflow)

SELinux: allow httpd to connect to a specific port

My system is running CentOS 6.4 with Apache 2.2.15. SELinux is enforcing, and I'm trying to connect to a local instance of redis through my Python/WSGI app. I get Error 13, Permission denied. I could fix this via the command:

setsebool -P httpd_can_network_connect 1

However, I don't exactly want httpd to be able to connect to all TCP ports. How can I specify which ports/networks httpd is allowed to connect to? If I could create a module that allows httpd to connect to port 6379 (redis), or to any TCP port on 127.0.0.1, that would be preferable. Not sure why my paranoia is so strong on this, but hey...

Anyone know?
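One commonly suggested route, sketched here on the assumption that the denials are already landing in /var/log/audit/audit.log, is to build a small local policy module from the logged AVCs instead of flipping the broad httpd_can_network_connect boolean (the module name httpd_redis is just a name chosen here; audit2allow ships in policycoreutils-python on EL6):

grep httpd /var/log/audit/audit.log | audit2allow -M httpd_redis   # generates httpd_redis.te and httpd_redis.pp
semodule -i httpd_redis.pp                                         # review the .te file, then load the module

Labelling port 6379 with semanage port is another angle sometimes mentioned, but which label actually permits the connection depends on the policy booleans in play; audit2allow at least shows exactly what is being denied.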


Source: (StackOverflow)

KVM guest installed from the console, but how do I get to the guest's console?

I'm trying to install a fully virtualized guest (Fedora 14 x86_64) on KVM (RHEL 6), using the command line only (on both the hypervisor and the guest). It completes without errors, but also without any tangible result. I'd like to know how to do a text-only installation.

So, here's what I've done:

# virt-install \
--name=FE --ram=756 --vcpus=1 \
--file=/var/lib/libvirt/images/FE.img  --network bridge:br0 \
--nographics --os-type=linux  \
--extra-args='console=tty0' -v \
--cdrom=/media/usb/Fedora-14-x86_64-Live-Desktop.iso 


Starting install...
Creating domain...                                                                                                                                                        |    0 B     00:00     
Connected to domain FE
Escape character is ^]
ÿ

Now what? As far as I understand after googling for a couple of days, I should be seeing the guest's output from the text installation, but nothing happens. virt-viewer cannot connect to it, and kindly suggests that I explore all the options by adding --help (which I did). If I reconnect with virsh, I see this:

Domain installation still in progress. You can reconnect to 
the console to complete the installation process.
[root@v ~]
# virsh console FE
Connected to domain FE
Escape character is ^]

This shows that the VM is running:

# virsh list
 Id Name                 State
----------------------------------
  8 FE                   running

Qemu log:

LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin /usr/libexec/qemu-kvm -S -M rhel6.0.0 -enable-kvm -m 756 -smp 1,sockets=1,cores=1,threads=1 -name FE -uuid 6989d008-7c89-424c-d2d3-f41235c57a18 -nographic -nodefconfig -nodefaults -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/FE.monitor,server,nowait -mon chardev=monitor,mode=control -rtc base=utc -no-reboot -boot d -drive file=/var/lib/libvirt/images/FE.img,if=none,id=drive-ide0-0-0,format=raw,cache=none -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -drive file=/media/usb/Fedora-14-x86_64-Live-Desktop.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=20,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:0a:65:8d,bus=pci.0,addr=0x2 -chardev pty,id=serial0 -device isa-serial,chardev=serial0 -usb -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 
char device redirected to /dev/pts/1

The contents of /etc/libvirt/qemu/FE.xml:

# cat /etc/libvirt/qemu/FE.xml 
<domain type='kvm'>
  <name>FE</name>
  <uuid>6989d008-7c89-424c-d2d3-f41235c57a18</uuid>
  <memory>774144</memory>
  <currentMemory>774144</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='rhel6.0.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/FE.img'/>
      <target dev='hda' bus='ide'/>
      <address type='drive' controller='0' bus='0' unit='0'/>
    </disk>
    <disk type='block' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='1' unit='0'/>
    </disk>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:0a:65:8d'/>
      <source bridge='br0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target port='0'/>
    </console>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </memballoon>
  </devices>
</domain>

I'm obviously missing something that many others aren't, but what is it? Thanks in advance!
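For comparison, a hedged sketch of how a text-mode install is usually wired up: --extra-args only reaches the installer kernel when virt-install boots from an install tree (--location) rather than a --cdrom image, and the console argument generally needs to point at the serial device (console=ttyS0) so the output lands on the pty that virsh console attaches to. The mirror URL below is only an example, and as far as I know the Live Desktop ISO carries no text-mode installer at all, so an install tree or the install DVD is needed either way:

virt-install \
  --name=FE --ram=756 --vcpus=1 \
  --file=/var/lib/libvirt/images/FE.img \
  --network bridge:br0 \
  --nographics --os-type=linux \
  --location=http://dl.fedoraproject.org/pub/fedora/linux/releases/14/Fedora/x86_64/os/ \
  --extra-args='console=ttyS0 text'

Once the installer boots with its console on ttyS0, virsh console FE should attach to it.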


Source: (StackOverflow)

When does `cron.daily` run?

When do entries in cron.daily (and .weekly and .hourly) run, and is it configurable?

I haven't found a definitive answer to this, and am hoping there is one.

I'm running RHEL 5 and CentOS 4, but answers for other distros/platforms would be great, too.
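For the RHEL 4/5 generation the short answer is that the schedule is just cron: /etc/crontab carries run-parts entries for each directory, so the times are both visible and editable there. The stock file looks roughly like this (RHEL 6 and later move the daily/weekly/monthly runs to anacron via /etc/anacrontab, so check that file on newer systems):

01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * * root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly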


Source: (StackOverflow)

How to handle security updates within Docker containers?

When deploying applications onto servers, there is typically a separation between what the application bundles with itself and what it expects from the platform (operating system and installed packages) to provide. One point of this is that the platform can be updated independently of the application. This is useful for example when security updates need to be applied urgently to packages provided by the platform without rebuilding the entire application.

Traditionally, security updates have been applied simply by running a package manager command to install updated versions of packages on the operating system (for example, "yum update" on RHEL). But with the advent of container technology such as Docker, where container images essentially bundle both the application and the platform, what is the canonical way of keeping a system with containers up to date? Both the host and the containers have their own, independent sets of packages that need updating, and updating on the host will not update any packages inside the containers. With the release of RHEL 7, where Docker containers are especially featured, it would be interesting to hear what Red Hat's recommended way of handling security updates for containers is.

Thoughts on a few of the options:

- Letting the package manager update packages on the host will not update packages inside the containers.
- Regenerating all container images to apply updates seems to break the separation between the application and the platform (updating the platform requires access to the application build process, which generates the Docker images).
- Running manual commands inside each of the running containers seems cumbersome, and the changes are at risk of being overwritten the next time the containers are updated from the application release artifacts.

None of these approaches seems satisfactory.
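For illustration only, the rebuild-centric option tends to look something like the sketch below; the base image and tag are placeholders, and this is not presented as Red Hat's recommendation:

cat > Dockerfile <<'EOF'
FROM rhel7
# pull current errata into the image as an early layer, before the application layers
RUN yum -y update && yum clean all
# ... application layers go here ...
EOF

# rebuild without the layer cache so the yum update actually re-runs, then redeploy
docker build --no-cache -t myapp:latest .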


Source: (StackOverflow)

How can you tell what a server actually does? [closed]

I've been handed 3 Linux boxes: 1 front-facing with Apache on it, and another 2 which, as far as I can tell, don't do an awful lot. All are running Red Hat.

The question is simple: How can I tell what the server is actually doing? Zero documentation is available from the creator.
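A few read-mostly checks usually give a first picture of what a box is doing (these assume a stock Red Hat userland; some need root to show full detail):

netstat -tlnp                                 # which daemons are listening, and on which ports
ps auxf                                       # process tree of what is actually running
chkconfig --list | grep ':on'                 # SysV services enabled at boot
ls /etc/cron.d /etc/cron.daily; crontab -l    # scheduled jobs
last | head -20                               # recent logins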


Source: (StackOverflow)

How do admins maintain user accounts across hundreds of linux servers?

Dealing with hundreds of RHEL servers, how can we maintain local root accounts and network user accounts? Is there an Active Directory-style solution that manages these from a central location?
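In the Red Hat world the usual central answer is an LDAP/Kerberos directory: 389 Directory Server, FreeIPA (the upstream of Red Hat Identity Management), or a bridge to Active Directory, with the clients wired up through SSSD. A rough sketch of the client side using authconfig, where the server and base DN are placeholders:

authconfig --enableldap --enableldapauth \
  --ldapserver=ldap://ipa.example.com \
  --ldapbasedn="dc=example,dc=com" \
  --enablemkhomedir --update

Local root passwords are typically handled separately, e.g. pushed out by a configuration management tool such as Puppet or cfengine.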


Source: (StackOverflow)

Reasons to disable / enable SELinux

In the same vein as this question on StackOverflow, and given the completely different crowd we have here, I wonder: what are your reasons to disable SELinux (assuming most people still do)? Would you like to keep it enabled? What anomalies have you experienced by leaving SELinux on? Apart from Oracle, what other vendors give trouble supporting systems with SELinux enabled?

Bonus question: Has anyone managed to get Oracle running on RHEL 5 with SELinux in enforcing, targeted mode? I mean, strict would be awesome, but I don't think that is even remotely possible yet, so let's stay with targeted first ;-)


Source: (StackOverflow)

Why are my XFS filesystems suddenly consuming more space and full of sparse files?

I've run XFS filesystems as data/growth partitions for nearly 10 years across various Linux servers.

I've noticed a strange phenomenon with recent CentOS/RHEL servers running version 6.2+.

Stable filesystem usage became highly variable following the move from EL6.0 and EL6.1 to the newer OS revision. Systems initially installed with EL6.2+ exhibit the same behavior, showing wild swings in disk utilization on the XFS partitions (see the blue line in the graph below).

Before and after: the upgrade from 6.1 to 6.2 occurred on Saturday. [graph of XFS disk usage]

The past quarter's disk usage graph of the same system, showing the fluctuations over the last week. [graph]

I started to check the filesystems for large files and runaway processes (log files, maybe?). I discovered that my largest files were reporting different values from du and ls. Running du with and without the --apparent-size switch illustrates the difference.

# du -skh SOD0005.TXT
29G     SOD0005.TXT

# du -skh --apparent-size SOD0005.TXT
21G     SOD0005.TXT

A quick check using the ncdu utility across the entire filesystem yielded:

Total disk usage: 436.8GiB  Apparent size: 365.2GiB  Items: 863258

The filesystem is full of sparse files, with nearly 70GB of lost space compared to the previous version of the OS/kernel!

I pored through the Red Hat Bugzilla and change logs to see if there were any reports of the same behavior or new announcements regarding XFS.

Nada.

I went from kernel version 2.6.32-131.17.1.el6 to 2.6.32-220.23.1.el6 during the upgrade; no change in minor version number.

I checked file fragmentation with the filefrag tool. Some of the biggest files on the XFS partition had thousands of extents. Running an online defrag with xfs_fsr -v during a slow period of activity helped reduce disk usage temporarily (see Wednesday in the first graph above). However, usage ballooned again as soon as heavy system activity resumed.

What is happening here?
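For anyone comparing notes: the pattern described (allocated size larger than apparent size, shrinking after xfs_fsr, ballooning again under write load) looks consistent with XFS speculative preallocation, which later EL6 kernels apply much more aggressively. A hedged way to inspect a suspect file and, if that is the cause, cap the behaviour with a fixed allocsize mount option (the 64m value, device, and mount point are illustrative, not recommendations):

du -sh SOD0005.TXT; du -sh --apparent-size SOD0005.TXT   # allocated vs. apparent size
xfs_bmap -v SOD0005.TXT | tail                           # look for large trailing extents
# /etc/fstab entry capping preallocation (illustrative):
#   /dev/sdb1  /data  xfs  defaults,allocsize=64m  0 0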


Source: (StackOverflow)

Hardware Firewall Vs. Software Firewall (IP Tables, RHEL)

TL;DR
My hosting company says iptables is useless and doesn't provide any protection. Is this BS?

I have two co-located servers. Yesterday my DC company contacted me to tell me that because I'm using a software firewall, my server is "Vulnerable to multiple, critical security threats" and my current solution offers "No protection from any form of attack".

They say I need to get a dedicated Cisco firewall ($1000 installation, then $200/month EACH) to protect my servers. I was always under the impression that, while hardware firewalls are more secure, something like iptables on Red Hat offered enough protection for your average server.

Both servers are just web servers; there's nothing critically important on them, but I've used iptables to lock down SSH to just my static IP address and to block everything except the basic ports (HTTP(S), FTP, and a few other standard services).

I'm not going to get the firewall. If either of the servers were hacked it would be an inconvenience, but all they run is a few WordPress and Joomla sites, so I definitely don't think it's worth the money.
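For reference, the kind of ruleset being described is only a handful of iptables lines; whether that is "enough" is the debatable part, not whether iptables can express it. A sketch with illustrative addresses and ports:

iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp -s 203.0.113.10 --dport 22 -j ACCEPT        # SSH from the static IP only
iptables -A INPUT -p tcp -m multiport --dports 21,80,443 -j ACCEPT   # FTP control, HTTP, HTTPS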


Source: (StackOverflow)

Is there still a use for irqbalance on modern hardware?

This question has been asked before, but I believe that the world has changed enough for it to be asked again.

Does irqbalance have any use on today’s systems where we have NUMA-capable CPUs with memory sharing between their cores?

Running irqbalance --oneshot --debug shows that a virtual guest on a modern VMware ESXi environment is sharing the NUMA nodes between cores.

# irqbalance --oneshot --debug 3
Package 0:  numa_node is 0 cpu mask is 0000000f (load 0)
    Cache domain 0:  numa_node is 0 cpu mask is 0000000f  (load 0)
            CPU number 0  numa_node is 0 (load 0)           
            CPU number 1  numa_node is 0 (load 0)
            CPU number 2  numa_node is 0 (load 0)
            CPU number 3  numa_node is 0 (load 0)

irqbalance will in this case detect that it is being run on a NUMA system, and exit. This messes with our process monitoring.

Should we look into running numad instead of irqbalance on such systems?

This is mostly interesting for VMware virtualised servers.
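One way to sanity-check what a given guest actually gets, with or without irqbalance running, is to look at how interrupts are distributed and where the affinity masks point (a quick sketch; device names and mask widths will differ):

cat /proc/interrupts                             # per-CPU interrupt counts for each IRQ
grep . /proc/irq/*/smp_affinity | sort -t: -k2   # current affinity mask of each IRQ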


Source: (StackOverflow)

What is needed for a linux service to be supported by chkconfig?

I am trying to add a Linux service to the boot-time autostart with

chkconfig --add <servicename>

and I get a message saying

service <servicename> does not support chkconfig

I am using Red Hat Enterprise Linux 4. The script I am trying to add to the boot-time autostart is the following:

#!/bin/sh

soffice_start() {
  if [ -x /opt/openoffice.org2.4/program/soffice ]; then
    echo "Starting Open Office as a Service"
    # echo "soffice -headless -accept=socket,port=8100;urp;StarOffice.ServiceManager -nofirststartwizard"
    /opt/openoffice.org2.4/program/soffice -headless \
      -accept="socket,host=0.0.0.0,port=8100;urp;StarOffice.ServiceManager" \
      -nofirststartwizard &
  else
    echo "Error: Could not find the soffice program. Cannot start soffice."
  fi
}

soffice_stop() {
  if [ -x /usr/bin/killall ]; then
    echo "Stopping OpenOffice"
    /usr/bin/killall soffice 2> /dev/null
  else
    echo "Error: Could not find killall. Cannot stop soffice."
  fi
}

case "$1" in
  'start')   soffice_start ;;
  'stop')    soffice_stop; sleep 2 ;;
  'restart') soffice_stop; sleep 5; soffice_start ;;
  *)
    if [ -x /usr/bin/basename ]; then
      echo "usage: `/usr/bin/basename $0` start|stop|restart"
    else
      echo "usage: $0 start|stop|restart"
    fi
    ;;
esac

Source: (StackOverflow)