systemd interview questions

Top systemd frequently asked interview questions

How can I use iptables on CentOS 7?

I installed CentOS 7 with a minimal configuration (OS + dev tools). I am trying to open port 80 for the httpd service, but something is wrong with my iptables service. What am I doing wrong?

# ifconfig/sbin/service iptables save
bash: ifconfig/sbin/service: No such file or directory


# /sbin/service iptables save
The service command supports only basic LSB actions (start, stop, restart, try-restart, reload, force-reload, status). For other actions, please try to use systemctl.

# sudo service iptables status
Redirecting to /bin/systemctl status  iptables.service
iptables.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)

# /sbin/service iptables save
The service command supports only basic LSB actions (start, stop, restart, try-restart, reload, force-reload, status). For other actions, please try to use systemctl.

# sudo service iptables start
Redirecting to /bin/systemctl start  iptables.service
Failed to issue method call: Unit iptables.service failed to load: No such file or directory.
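
For context, CentOS 7 ships firewalld instead of the classic iptables init service, so iptables.service simply does not exist on a minimal install. A sketch of the two usual routes (both are standard CentOS 7 commands):

# Route 1: use firewalld, the default on CentOS 7
sudo firewall-cmd --permanent --add-service=http   # or: --add-port=80/tcp
sudo firewall-cmd --reload

# Route 2: install the classic service so "service iptables save" works again
sudo yum install iptables-services
sudo systemctl enable iptables
sudo systemctl start iptables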

Source: (StackOverflow)

Connecting to systemd DBUS signals using gdbus-codegen

I am not able to receive systemd D-Bus signals when using a gdbus-codegen generated manager proxy. But I am able to successfully call methods provided by systemd over D-Bus.

I searched online and looked at these links without much success. There aren't many examples of how to do this when gdbus-codegen is used for the systemd API.

Here is what I did along with code snippets.

1) I generated systemd introspection and used that XML as input to gdbus-codegen.

...snip

<interface name="org.freedesktop.systemd1.Manager">
  <signal name="JobRemoved">
    <arg type="u"/> <arg type="o"/> <arg type="s"/> <arg type="s"/>
  </signal>

...snip

2) Wrote my client code to use the C APIs generated by gdbus-codegen and created a manager proxy. (Everything is on the system bus.)

SystemdManager *systemdProxy = systemd_manager_proxy_new_for_bus_sync(
    G_BUS_TYPE_SYSTEM, G_DBUS_PROXY_FLAGS_NONE,
    "org.freedesktop.systemd1", "/org/freedesktop/systemd1",
    NULL, error);

3) Defined a signal handler.

static void on_done(GDBusProxy *proxy,
        gchar *sender_name,
        gchar *signal_name,
        GVariant *parameters,
        gpointer user_data)
{
    LOG_ERROR("on_done");
}

4) Connected a signal handler to that proxy for JobRemoved signal.

if (g_signal_connect(systemdProxy, "job-removed",
                     G_CALLBACK(on_done), NULL) <= 0 )
{
    LOG_ERROR("Failed to connect to signal job-removed");
}

5) Used the proxy to start a systemd service. This returns success and I could see the unit start and run for a second or two and terminate.

ret = systemd_manager_call_start_unit_sync(
    systemdProxy, unit_name, unit_mode, &job_obj,
    NULL, &error);

6) systemd generates a JobRemoved signal. dbus-monitor shows it.

signal sender=:1.0 -> dest=(null destination) serial=11931
        path=/org/freedesktop/systemd1;
        interface=org.freedesktop.systemd1.Manager;
        member=JobRemoved
   uint32 7009
   object path "/org/freedesktop/systemd1/job/7009"
   string "mysample.service"
   string "done"

7) My signal handler never gets called. (Everything uses the system bus; there are no other buses.) I have tried various strings for the detailed_signal (second) parameter of g_signal_connect, such as JobRemoved, job_removed, and ::job-removed; some of them are not even accepted by g_signal_connect.

Any help is greatly appreciated!
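
One detail worth ruling out first: systemd emits most of its Manager signals, including JobRemoved, only while at least one bus client has called the Manager's Subscribe() method, and the generated proxy does not do that automatically. A minimal sketch, assuming gdbus-codegen produced a systemd_manager_call_subscribe_sync() wrapper (the name follows the same pattern as the StartUnit call above, but is an assumption):

GError *sub_error = NULL;

/* Ask systemd to start emitting JobNew/JobRemoved etc. to this connection. */
if (!systemd_manager_call_subscribe_sync(systemdProxy, NULL, &sub_error))
{
    LOG_ERROR("Subscribe failed");
    g_clear_error(&sub_error);
}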


Source: (StackOverflow)

systemd start service after specific service

General question: How does one start a systemd .service after a particular .service has started successfully?

Specific question: How do I start website.service only after mongodb.service has started? In other words, website.service should depend on mongodb.service.
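
For illustration, a minimal sketch of the dependent unit; the ExecStart command is made up, but the [Unit] directives are the standard way to express "start me after mongodb.service, and only together with it":

[Unit]
Description=Website
# Ordering: do not start until mongodb.service has finished starting.
After=mongodb.service
# Dependency: pull mongodb.service in; if it fails, website.service is not started.
Requires=mongodb.service

[Service]
# Hypothetical command, for illustration only.
ExecStart=/usr/bin/node /srv/website/app.js

[Install]
WantedBy=multi-user.target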


Source: (StackOverflow)

running a persistent python script from systemd?

I have a Python script that decodes input from a USB device and sends commands to a PHP script. The script works beautifully when run from the console, but I need it to run on startup.

I created a systemd service to start the script, which appears to work well, except that the systemctl start service-name process never returns me to the command prompt. While it is running, I can interact with the input device exactly as expected. However, if I exit the systemctl start process with Ctrl+Z, the script only keeps running for a few seconds.

Here is the .service file that I wrote:

[Unit]
After=default.target

[Service]
ExecStart=/usr/bin/python /root/pidora-keyboard.py

[Install]
WantedBy=default.target

and here is my python script:

#!/usr/bin/env python
import json, random
from evdev import InputDevice, categorize, ecodes
from urllib.request import urlopen

dev = InputDevice('/dev/input/event2')

def sendCommand(c):
    return json.loads(urlopen("http://127.0.0.1/api.php?command="+c).read().decode("utf-8"))
def getRandomStation():
    list = sendCommand('stationList')
    list = list['stations']
    index = random.randint(0, (len(list)-1))
    print(list[index]['id'] + " - " + list[index]['name'])
    sendCommand('s' + list[index]['id'])

print(dev)
for event in dev.read_loop():
    if event.type == ecodes.EV_KEY:
        key_pressed = str(categorize(event))
        if ', down' in key_pressed:
            print(key_pressed)
            if 'KEY_PLAYPAUSE' in key_pressed:
                print('play')
                sendCommand('p')
            if 'KEY_FASTFORWARD' in key_pressed:
                print('fastforward')
                sendCommand('n')
            if 'KEY_NEXTSONG' in key_pressed:
                print('skip')
                sendCommand('n')
            if 'KEY_POWER' in key_pressed:
                print('power')
                sendCommand('q')
            if 'KEY_VOLUMEUP' in key_pressed:
                print('volume up')
                sendCommand('v%2b')
            if 'KEY_VOLUMEDOWN' in key_pressed:
                print('volume down')
                sendCommand('v-')
            if 'KEY_CONFIG' in key_pressed:
                print('Random Station')
                getRandomStation()

How do I make the script run asynchronously from the service file, so that the start command can complete and the script can continue running in the background?
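
For comparison, a long-running background service is usually written along these lines; this is only a sketch (the After=default.target ordering above is unusual and is replaced here with a more common target), not a verified fix for the behaviour described:

[Unit]
Description=Pidora keyboard listener
After=network.target

[Service]
# Type=simple (the default) lets "systemctl start" return as soon as the process is spawned.
Type=simple
ExecStart=/usr/bin/python /root/pidora-keyboard.py
Restart=always

[Install]
WantedBy=multi-user.target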


Source: (StackOverflow)

Starting bottle web server through systemd?

I am trying to start a bottle web app I wrote using systemd. I made the file /etc/systemd/user/bottle.service with the following contents:

[Unit]
Description=Bottled fax service
After=syslog.target

[Service]
Type=simple
User=fax
Group=fax
WorkingDirectory=/home/fax/bottlefax/
ExecStart=/usr/bin/env python3 server.py
StandardOutput=syslog
StandardError=syslog
Restart=always
RestartSec=2

[Install]
WantedBy=bottle.target

However, when I try to start it, it fails and this is printed in journalctl:

Jun 10 17:33:31 nano systemd[1]: Started Bottled fax service.
Jun 10 17:33:31 nano systemd[1]: Starting Bottled fax service...
Jun 10 17:33:31 nano systemd[2380]: Failed at step GROUP spawning /usr/bin/env: No such process
Jun 10 17:33:31 nano systemd[1]: bottle.service: main process exited, code=exited, status=216/GROUP
Jun 10 17:33:31 nano systemd[1]: Unit bottle.service entered failed state.
Jun 10 17:33:31 nano systemd[1]: bottle.service failed.

How should I fix this?

Edit:

Changing to /usr/bin/python3, as others have suggested, results in the same error (with the changed file in the message, though):

Jun 10 18:43:48 nano systemd[1]: Started Bottled fax service.
Jun 10 18:43:48 nano systemd[1]: Starting Bottled fax service...
Jun 10 18:43:48 nano systemd[2579]: Failed at step GROUP spawning /usr/bin/python3: No such process
Jun 10 18:43:48 nano systemd[1]: bottle.service: main process exited, code=exited, status=216/GROUP
Jun 10 18:43:48 nano systemd[1]: Unit bottle.service entered failed state.
Jun 10 18:43:48 nano systemd[1]: bottle.service failed.
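
For what it's worth, status=216/GROUP is the exit code systemd uses when it cannot resolve the Group= (or User=) named in the unit, so one thing worth checking is whether the fax user and group actually exist, for example (a sketch):

getent group fax  || sudo groupadd fax
getent passwd fax || sudo useradd -m -g fax fax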

Source: (StackOverflow)

Can I control a user systemd using 'systemctl --user' after sudo su - myuser?

I have a service that I want to start with system startup. I have built an ap@.service definition for it as a template, because there could be many instances.

Defined in the root systemd, this works well, and starts and stops the service with the system. The service instance is installed with systemctl enable ap@inst1 as would be expected. Root is also able to start and stop the service without problems. The service runs in its own account (myuser), not root, controlled by User=myuser in the ap@.service template.

But I want user 'myuser' to be able to start and stop their own service, without compromising system security.

I switched to using a user systemd instance and enabled lingering with loginctl enable-linger myuser. I then enabled the service defined in the ~myuser/.config/systemd/user directory. The service now starts and stops cleanly with the system, as designed. If I log in to a terminal as 'myuser', both systemctl --user start ap@inst1 and systemctl --user stop ap@inst1 work perfectly.

However, if I log in as a different user (user2) and perform sudo su - myuser in a terminal, then systemctl --user commands now fail with error message "Failed to get D-Bus connection: no such file or directory".

How do I enable systemctl --user to work after a sudo su - myuser command to switch the user?
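
One ingredient that is usually involved here is the environment: systemctl --user locates the per-user manager through XDG_RUNTIME_DIR (and, on systems with a per-user D-Bus socket, DBUS_SESSION_BUS_ADDRESS), and sudo su - does not set these up the way a real login does. A sketch of what is typically exported after switching, assuming lingering keeps /run/user/<uid> around:

export XDG_RUNTIME_DIR=/run/user/$(id -u)
export DBUS_SESSION_BUS_ADDRESS=unix:path=$XDG_RUNTIME_DIR/bus
systemctl --user status ap@inst1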


Source: (StackOverflow)

CoreOS systemd journal remote logging

I run multiple CoreOS instances on Google Compute Engine (GCE). CoreOS uses systemd's journal logging feature. How can I push all logs to a remote destination? As I understand it, the systemd journal doesn't come with remote logging abilities. My current work-around looks like this:

journalctl -o short -f | ncat <addr> <ip>

With https://logentries.com using their Token-based input via TCP:

journalctl -o short -f | awk '{ print "<token>", $0; fflush(); }' | ncat data.logentries.com 10000

Are there better ways?

EDIT: https://medium.com/coreos-linux-for-massive-server-deployments/defb984185c5
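
If the pipeline above is kept, one way to make it survive reboots is to wrap it in a unit of its own. A sketch, with the collector address and port as placeholders:

[Unit]
Description=Ship the journal to a remote collector
After=network-online.target

[Service]
ExecStart=/bin/sh -c 'journalctl -o short -f | ncat logs.example.com 10000'
Restart=always

[Install]
WantedBy=multi-user.target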


Source: (StackOverflow)

Start systemd service from C/C++ application or call a D-Bus service

I have a .service file for a process that I don't want to start at boot time; instead, I want to start it somehow from another, already running application at a given time.

The other option would be to put a D-Bus service file (I'm using GLib's D-Bus support in my apps) in /usr/share/dbus-1/services and somehow call it from my application. So far I haven't managed to do that either.

Let's say that my D-Bus service file in /usr/share/dbus-1/services is com.callThis.service and my systemd service file in /lib/systemd/system is com.startThis.service.

If I run a simple introspect from the command line:

/home/root # dbus-send --session --type=method_call --print-reply \
--dest=com.callThis  /com/callThis org.freedesktop.DBus.Introspectable.Introspect

the D-Bus service file gets activated and it starts what is in its Exec line (com.startThis). The problem is that I want to achieve this from C/C++ code using GLib's D-Bus support.
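
For the systemd route, a minimal, untested sketch using GIO's GDBus API (not the older dbus-glib) to ask systemd over the system bus to start the unit directly; whether the call is allowed depends on polkit/permissions:

#include <gio/gio.h>

/* Build with: gcc start_unit.c $(pkg-config --cflags --libs gio-2.0) */
int main(void)
{
    GError *error = NULL;
    GDBusConnection *bus = g_bus_get_sync(G_BUS_TYPE_SYSTEM, NULL, &error);
    if (bus == NULL) {
        g_printerr("bus: %s\n", error->message);
        return 1;
    }

    /* StartUnit(in s name, in s mode, out o job) on the systemd manager object. */
    GVariant *result = g_dbus_connection_call_sync(
        bus,
        "org.freedesktop.systemd1",
        "/org/freedesktop/systemd1",
        "org.freedesktop.systemd1.Manager",
        "StartUnit",
        g_variant_new("(ss)", "com.startThis.service", "replace"),
        G_VARIANT_TYPE("(o)"),
        G_DBUS_CALL_FLAGS_NONE, -1, NULL, &error);

    if (result == NULL) {
        g_printerr("StartUnit: %s\n", error->message);
        return 1;
    }
    g_variant_unref(result);
    g_object_unref(bus);
    return 0;
}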


Source: (StackOverflow)

Nginx log to stderr

I want to redirect nginx access logs to stdout to be able to analyze them through journalctl (systemd).

There is an existing question with an accepted answer, Have nginx access_log and error_log log to STDOUT and STDERR of master process, but it does not work for me. With /dev/stderr I get open() "/dev/stderr" failed (6: No such device or address). With /dev/stdout I get no access logs in journalctl -u nginx.

nginx.conf

daemon off;

http {
    access_log /dev/stdout;
    error_log /dev/stdout;
    ...
}
...

sitename.conf

server {
    server_name sitename.com;
    root /home/username/sitename.com;

    location / {
        proxy_pass http://localhost:3000/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        access_log on;
    }
}

nginx.service

[Service]
Type=forking
PIDFile=/run/nginx.pid
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=nginx
ExecStartPre=/usr/sbin/nginx -t -q -g 'master_process on;'
ExecStart=/usr/sbin/nginx -g 'master_process on;'
ExecReload=/usr/sbin/nginx -g 'master_process on;' -s reload
ExecStop=-/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid
TimeoutStopSec=5

I've tried my best to work around this by changing every possible parameter in the configuration above, and with different nginx versions (1.2, 1.6), but without any success.

I'm really interested in how to make this work, so I'm raising the question again in a separate thread, as I consider the previous answer wrong, speculative, or environment-specific.

$ journalctl -u nginx

contains only lines like

 Feb 08 13:05:23 Username systemd[1]: Started A high performance web server and a reverse proxy server.

and no sign of access logs :(
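
One alternative worth noting, as a sketch only and assuming nginx 1.7.1 or newer (the release that added syslog support): logging straight to syslog lets journald pick the lines up and sidesteps the /dev/stdout question entirely:

access_log syslog:server=unix:/dev/log;
error_log  syslog:server=unix:/dev/log;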


Source: (StackOverflow)

Linux: Start daemon on connected USB-serial dongle

On my Linux box (the Angstrom distro on a BeagleBone Black) I have a USB dongle which presents itself as a serial port and by default is available as /dev/ttyUSB0.

I want to start a daemon, which will connect to the serial port and make it available as a socket. I have the code for this USB-to-socket bridge and it works when started by hand.

I want it to start automatically whenever the system boots, supposing the USB dongle is plugged in. How should I do this?

Attempts so far:

  1. systemd: I created a systemd service with After=remote-fs.target and After=syslog.target, but (it seems) the USB dongle is not ready at that point and the startup of the daemon fails.

    Are there other systemd targets or services to order against, so that the daemon is started only after udev has finished setting up devices and the network is ready?

  2. udev: I created a udev rule like

    KERNEL=="ttyUSB?", RUN+="/path/to/daemon.sh"

    which executes successfully. But the daemon (which is started as a background process with a "&" within that script) does not seem to keep running. Also, forking long-running processes from udev rules seems to be frowned upon.

What is the correct way to do it?
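
For comparison, a third pattern, sketched here with illustrative names (the rule file, unit name, and daemon path are assumptions): let udev hand the device over to systemd, and let systemd own the long-running process. The rule tags the device and requests a templated unit; the unit binds to the device, so it starts when the dongle appears and stops when it is removed.

# /etc/udev/rules.d/99-usb-bridge.rules
SUBSYSTEM=="tty", KERNEL=="ttyUSB?", TAG+="systemd", ENV{SYSTEMD_WANTS}="serial-bridge@%k.service"

# serial-bridge@.service
[Unit]
Description=USB-serial to socket bridge on /dev/%I
BindsTo=dev-%i.device
After=dev-%i.device

[Service]
# Placeholder path for the existing bridge program.
ExecStart=/path/to/usb-to-socket-bridge /dev/%I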


Source: (StackOverflow)

How to set Varnish to run on port 80. Malfunction of DAEMON_OPTS set in /etc/default/varnish

I have installed Varnish and followed the exact instructions for setting it up; however, it is not working as expected.

My /etc/default/varnish setup is:

DAEMON_OPTS="-a :80 \
             -T localhost:1234 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"

My /etc/varnish/default.vcl setup is:

backend default {
    .host = "localhost";
    .port = "8080";
}

My apache port.conf setup is:

NameVirtualHost 127.0.0.1:8080
Listen 127.0.0.1:8080

<IfModule ssl_module>
        Listen 443
</IfModule>

<IfModule mod_gnutls.c>
        Listen 443
</IfModule>

I am running Ubuntu 15.04 with Apache 2.4.10. When I start Varnish and check the process, I get the following:

0:00 /usr/sbin/varnishd -a :6081 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m

It seems that neither the listen address nor the management interface is being set as configured in /etc/default/varnish. None of my virtual hosts work as a result. How can I solve this?
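
One thing worth noting, hedged because it depends on the packaging: Ubuntu 15.04 boots with systemd, and the varnish.service unit ships its own ExecStart line with the default -a :6081 -T localhost:6082 options, so DAEMON_OPTS in /etc/default/varnish may simply never be read. A sketch of a drop-in override that reuses the options from above:

# /etc/systemd/system/varnish.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd -a :80 -T localhost:1234 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m

followed by sudo systemctl daemon-reload && sudo systemctl restart varnish.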


Source: (StackOverflow)

How can I turn either a Unix POSIX file descriptor or standard input Handle into a Socket?

In inetd- and systemd-style systems, it is possible for the system to bind a socket and launch the application with the socket already existing, for example to provide socket-based service activation. I would like to take advantage of this functionality in one of my Haskell daemons.

The daemon currently calls socket, bindSocket, and listen to create a Socket object that I can later call accept on. To change this to an inetd-style setup I would need to use standard input as a Socket, but all I can find so far is stdin :: Handle, or fdToHandle :: CInt -> Handle; neither is quite what I need.

I can't seem to find anything that has type Handle -> Socket, nor anything like stdin :: Socket. The nearest I can find is mkSocket, which is very low-level; most other languages (e.g. Ruby) provide a call to turn a file descriptor into a socket without having to specify various other parameters.


Source: (StackOverflow)

execute commands in a CoreOS cloud-config (e.g. to add swap)

I see that unlike the standard cloud-config file, there is no runcmd option in a CoreOS cloud-config file. Currently, I enable swap on a CoreOS machine by adding the following to my cloud-config:

units:
    - name: swap.service
      command: start
      content: |
        [Unit]
        Description=Turn on swap

        [Service]
        Type=oneshot
        Environment="SWAPFILE=/1GiB.swap"
        RemainAfterExit=true
        ExecStartPre=/usr/sbin/losetup -f ${SWAPFILE}
        ExecStart=/usr/bin/sh -c "/sbin/swapon $(/usr/sbin/losetup -j ${SWAPFILE} | /usr/bin/cut -d : -f 1)"
        ExecStop=/usr/bin/sh -c "/sbin/swapoff $(/usr/sbin/losetup -j ${SWAPFILE} | /usr/bin/cut -d : -f 1)"
        ExecStopPost=/usr/bin/sh -c "/usr/sbin/losetup -d $(/usr/sbin/losetup -j ${SWAPFILE} | /usr/bin/cut -d : -f 1)"

        [Install]
        WantedBy=local.target

Then, after initializing my CoreOS image, I have to SSH into the machine and run:

sudo fallocate -l 1024m /1GiB.swap && sudo chmod 600 /1GiB.swap \
&& sudo chattr +C /1GiB.swap && sudo mkswap /1GiB.swap

sudo reboot

before swap is enabled (as evidenced by top, for example).

It seems like I should be able to run the latter commands from the cloud-config file itself, but I'm not clear on how to run such commands without a runcmd field in cloud-config. Perhaps this can be done either by editing my swap.service unit or by adding another unit, but I haven't quite figured out how.

So, that leaves me with two questions: (1) Can this be done or will it always be necessary to run the last commands manually? (2) If the former, then how?
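
One option that stays within cloud-config, sketched here and untested, is to fold the file-creation commands into the same swap.service unit as an extra ExecStartPre= line placed above the existing losetup line, guarded so it only runs while the swap file does not exist yet:

ExecStartPre=/usr/bin/sh -c "test -f ${SWAPFILE} || (fallocate -l 1024m ${SWAPFILE} && chmod 600 ${SWAPFILE} && chattr +C ${SWAPFILE} && mkswap ${SWAPFILE})"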


Source: (StackOverflow)

How to Pipe Output to a File When Running as a Systemd Service?

I'm having trouble piping the STDOUT & STDERR to a file when running a program as a systemd service. I've tried adding the following to the .service file:

ExecStart=/apppath/appname > /filepath/filename 2>&1

But this doesn't work. The output ends up in /var/log/messages and is viewable using journalctl, but I'd like a separate file.

I've also tried setting StdOutput=tty but can't find a way of redirecting this to a file.

Any help would be appreciated.
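
For background: systemd does not run ExecStart= through a shell, so the > and 2>&1 above are passed to the program as literal arguments rather than performing redirection. A common workaround, sketched here with the paths from the attempt above, is to invoke a shell explicitly (newer systemd versions also accept StandardOutput=file:/path, but that depends on the version in use):

ExecStart=/bin/sh -c '/apppath/appname > /filepath/filename 2>&1'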


Source: (StackOverflow)

Use of CPUQuota in systemd

I am trying to put a hard limit on CPU usage for a dd command. I have created the following unit file:

[Unit]
Description=Virtual Distributed Ethernet

[Service]
ExecStart=/usr/bin/ddcommand
CPUQuota=10%

[Install]
WantedBy=multi-user.target

which calls the following simple script:

#!/bin/sh
dd if=/dev/zero of=/dev/null bs=1024k

As described in this guide, http://www.freedesktop.org/software/systemd/man/systemd.resource-control.html, the CPU usage of my dd service should not exceed 10%. But when I run systemd-cgtop, the usage is about 70-75%.

Any ideas of what am I doing wrong and how can I fix it?

P.S. When I execute systemctl show dd, I get the following results regarding CPU:

CPUShares=18446744073709551615
StartupCPUShares=18446744073709551615
CPUQuotaPerSecUSec=100ms
LimitCPU=18446744073709551615
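
One way to narrow this down, as a sketch (it assumes the unit is named dd.service and a cgroup-v1 layout, so the exact path may differ), is to check whether the quota actually reached the kernel's CFS bandwidth settings:

cat /sys/fs/cgroup/cpu/system.slice/dd.service/cpu.cfs_period_us
cat /sys/fs/cgroup/cpu/system.slice/dd.service/cpu.cfs_quota_us
# quota divided by period should come out to 0.10 if CPUQuota=10% took effect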

Source: (StackOverflow)