EzDevInfo.com

syslog-ng interview questions

Top syslog-ng frequently asked interview questions

Syslog-ng log template \011 character

I have a problem with syslog-ng. I want to make syslog-ng format the logs like this:

template("$YEAR-$MONTH-$DAY\011$HOUR:$MIN:$SEC\011$HOST\011$MSGHDR$MSGONLY\n")

But it logs without the backslash, just "011". Example:

Expected: 2012-11-28\01116:33:51\011host_name\011app_name[26250]: message

Happened: 2012-11-2801116:33:51011host_name011app_name[26250]: message

How can I achieve this? Any ideas? :) Thanks in advance ;)
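For context, octal 011 is the TAB character. One possible fix (a sketch, assuming the goal is tab-separated fields and that your syslog-ng version expands escapes in double-quoted template strings) is to use the \t escape instead:

```
template("$YEAR-$MONTH-$DAY\t$HOUR:$MIN:$SEC\t$HOST\t$MSGHDR$MSGONLY\n")
```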


Source: (StackOverflow)

Python logging.DEBUG level doesn't log

I have a problem with Python's logging library. With the code below I create a logger:

import logging
from logging.handlers import SysLogHandler

logger = logging.getLogger()

def logger_init(level):
    try:
        # LOG_DESTINATION is defined elsewhere in my code
        syslog = SysLogHandler(address=LOG_DESTINATION)
    except Exception:
        return
    formatter = logging.Formatter('%(module)s[%(process)d]: %(message)s')
    syslog.setFormatter(formatter)
    syslog.setLevel(level)
    logger.addHandler(syslog)

And I call it like:

logger.debug(SOME_STR_TO_BE_LOGGED)

OR like:

logger.error(SOME_STR_TO_BE_LOGGED)

And I initialize the logger with:

log_level = logging.ERROR
if options.DEBUG_LOG: ####  This comes from options parser and it is True.
    log_level = logging.DEBUG
logger_init(log_level)

The problem is that error and warn work very well, but neither info nor debug prints anything to syslog.

I'm using syslog-ng, and I designed my filter so that it accepts every level from debug to emerg.

What is the problem here? Any ideas?
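A likely culprit (an assumption based on the snippet, not something visible in the syslog-ng config): setting a level on the handler is not enough, because the logger itself defaults to the root's WARNING level and drops DEBUG/INFO records before any handler sees them. A minimal sketch:

```python
import logging

logger = logging.getLogger("demo")          # inherits the root's WARNING level
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)             # the handler would pass DEBUG through...
logger.addHandler(handler)

print(logger.isEnabledFor(logging.DEBUG))   # False: the logger filters first

logger.setLevel(logging.DEBUG)              # also lower the logger's own level
print(logger.isEnabledFor(logging.DEBUG))   # True
```

If this is the cause, adding logger.setLevel(level) inside logger_init() should make debug() and info() reach syslog-ng.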


Source: (StackOverflow)


Confused with syslog message format

I am a bit confused about the syslog message format. I have to write a program that parses syslog messages. When I read what arrives at my syslog-ng instance, I get messages like this:

Jan 12 06:30:00 1.2.3.4 apache_server: 1.2.3.4 - - [12/Jan/2011:06:29:59 +0100] "GET /foo/bar.html HTTP/1.1" 301 96 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; fr; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12 ( .NET CLR 3.5.30729)" PID 18904 Time Taken 0

I can clearly determine the real message (which in this case is an Apache access log message). The rest is metadata about the syslog message itself.

However, when I read RFC 5424, the message examples look like this:

without structured data

 <34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 - BOM'su root' failed for lonvick on /dev/pts/8

or with structured data

<165>1 2003-10-11T22:14:15.003Z mymachine.example.com evntslog - ID47 [exampleSDID@32473 iut="3" eventSource="Application" eventID="1011"] BOMAn application event log entry...

So now I am a bit confused. What is the correct syslog message format? Is it a matter of spec version, since RFC 5424 obsoletes RFC 3164?
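Both formats are in the wild: RFC 3164 describes the old BSD format, RFC 5424 the newer one, and many daemons still emit or relay the BSD style, so a parser may need to detect which one it is looking at. A rough sketch (the regex is a heuristic, not a full RFC 5424 grammar):

```python
import re

# RFC 5424 puts a version digit right after the <PRI> field ("<34>1 ...");
# RFC 3164 / BSD messages go straight to the "Mmm dd hh:mm:ss" timestamp.
RFC5424_RE = re.compile(r"^<\d{1,3}>\d+ ")

def looks_like_rfc5424(line: str) -> bool:
    return bool(RFC5424_RE.match(line))

print(looks_like_rfc5424("<34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 - msg"))  # True
print(looks_like_rfc5424("<34>Oct 11 22:14:15 mymachine su: msg"))                                 # False
```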


Source: (StackOverflow)

Bind9 logging to named pipe

The Goal

I want to configure bind9 on Ubuntu 12.04 to log to a named pipe, in order to redirect logging to the syslog-ng service.

The Problem

My problem is that when I point the logging channel at the named pipe, the bind service will not start. This is the logging clause, where query.log is the FIFO file:

logging {
  channel query.log {
      file "/var/log/named/query.log";
      severity info;
      print-time yes;
      print-category yes;
  };

  category queries  { query.log; };
  category ....
};

This is the output found in syslog:

Jun 12 12:37:53 hostname named[19400]: isc_file_isplainfile '/var/log/named/query.log' failed: invalid file
Jun 12 12:37:53 hostname named[19400]: configuring logging: invalid file
Jun 12 12:37:53 hostname named[19400]: loading configuration: invalid file

What I've Tried

I have validated that the permissions are correct and that logging to a regular file works without issue. I have also validated that I can send data through the pipe by running:

sudo -u bind bash -c 'echo "test" > /var/log/named/query.log'

I see the data appear in syslog-ng as expected. I've also set /usr/sbin/named to both complain and disabled modes in AppArmor, yet I'm still experiencing the issue.

Help?

Is what I'm proposing possible? If so, any pointers on what I might be doing wrong?
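One possible workaround (a sketch, untested on 12.04): the isc_file_isplainfile check in named rejects anything that is not a regular file, so instead of a FIFO you could point the channel at syslog with a spare facility and have syslog-ng match on that facility:

```
logging {
  channel query_syslog {
      syslog local6;        // any facility your syslog-ng can filter on
      severity info;
      print-category yes;
  };
  category queries { query_syslog; };
};
```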


Source: (StackOverflow)

Rails logger.error not showing up in SysLog

I have a question on how to configure things correctly so that Rails logger.error messages show up in syslog. We use the SyslogLogger gem. In our syslog config we have filters like this:

if $programname == 'rails' and ($syslogseverity-text == 'emerg') then  @somehost                   

if $programname == 'rails' and ($syslogseverity-text == 'alert') then  @somehost                      

if $programname == 'rails' and ($syslogseverity-text == 'crit') then  @somehost                      

if $programname == 'rails' and ($syslogseverity-text == 'err') then @somehost                        

if $programname == 'rails' and ($syslogseverity-text == 'warn') then @somehost

if $programname == 'rails' then                         ~

When there is an exception or fatal error, the stack trace shows up in the log. However, statements we log with logger.error do not show up.



Source: (StackOverflow)

Log4j2 SyslogAppender TCP to syslog-ng

I want to use log4j2 to send my log messages to a syslog server (syslog-ng in my case). I have two issues right now.

If I stop the syslog service and start my application, it says:

2015-04-22 09:56:14,582 ERROR TcpSocketManager (TCP:192.168.0.81:1000)   java.net.ConnectException:

And from then on my application hangs, until an exception is thrown and my application crashes.

2015-04-22 09:59:21,064 ERROR Unable to write to stream TCP:192.168.0.81:1000 for appender RFC5424

What I want is for log4j to try to send the log message and, if the server is not available, buffer it until the server is available again (is this possible at all? I thought immediateFail="false" would do it). Which brings me to my second problem.

If I start my application (with syslog-ng available), then stop syslog-ng after a while and start it again, the appender does not reconnect (EDIT: OK, it does after a while, but loses a lot of log messages).

Here is my log4j2.xml:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn" name="MyAppx" packages="">
<Appenders>
    <Console name="STDOUT" target="SYSTEM_OUT">
        <PatternLayout
            pattern="%d{HH:mm:ss.SSS} %-5level %class{36} %L %M - %msg%xEx%n" />
    </Console>
    <Syslog name="RFC5424" format="RFC5424" host="192.168.0.81" port="1000"
    protocol="TCP" appName="app" mdcId="mdc" includeMDC="true"
    facility="LOCAL0" enterpriseNumber="18060" newLine="true" immediateFail="false"
    messageId="Audit" id="App" ignoreExceptions="true"/>
</Appenders>
<Loggers>
    <Root level="trace">
        <AppenderRef ref="STDOUT" />
        <AppenderRef ref="RFC5424" />
    </Root>
</Loggers>
</Configuration>

I am open to other solutions, but what I want is that no log message is lost at all, and sooner or later every log message should be available on a central station (that's why I want to use a syslog server instead of multiple local files).
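One direction worth exploring (a sketch, not verified; LocalFile is an assumed file appender you would have to define yourself) is Log4j2's Failover appender, which routes events to a secondary appender while the primary TCP syslog appender is failing, so they at least land somewhere instead of being dropped:

```xml
<!-- Sketch: reference this appender from <Root> instead of RFC5424 -->
<Failover name="FailoverSyslog" primary="RFC5424" retryIntervalSeconds="60">
    <Failovers>
        <AppenderRef ref="LocalFile"/>
    </Failovers>
</Failover>
```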


Source: (StackOverflow)

syslog-ng keep source hostname and last relay

I have an SSB with syslog-ng clients writing to it. In the logs I need both the hostname of the log's source (as produced by keep_hostname(yes)) and the IP of the last relay (as produced by keep_hostname(no)). I need both of them in the same log message; how can I achieve that?
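A sketch of one possible approach (assuming SSB exposes the standard syslog-ng macros): keep keep_hostname(yes) so $HOST preserves the original sender, and additionally record $SOURCEIP, which holds the address of the peer the message was received from, i.e. the last relay:

```
template t_both {
    template("$ISODATE $HOST relay=$SOURCEIP $MSGHDR$MSG\n");
};
```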


Source: (StackOverflow)

Unable to get Rsyslog structured data in syslog messages

I am trying to log messages with structured data, but it shows a null value ("-") for the structured data. I am working with rsyslog 8.9.0. Can someone tell me whether I need to load a module or modify the source to get the structured-data SD-IDs into the logged message?

Template:

<%PRI%>%TIMESTAMP:::daterfc3339%%HOSTNAME%%syslogtag%%APPNAME%%PROCID%%MSGID% %msg% %STRUCTURED-DATA%\n

I am getting the message format below:

<142>  2015-01-29T06:43:53.081641-05:00 localhost login[2116]: login 2116 -  [2116 : 2116 INFO]SERIAL Login from IP:127.0.0.1 user:admin -

Source: (StackOverflow)

writing a custom template/parser/filter for use in syslog-ng

My application generates logs and sends them to syslog-ng. I want to write a custom template/parser/filter for use in syslog-ng to correctly store the fields in tables of an SQLite database (MyDatabase).

This is the legend of my log:

unique-record-id usename date Quantity BOQ possible,item,profiles Count Vendor applicable,vendor,categories known,request,types vendor_code credit

All 12 fields are tab-separated, and the parser must store them in 12 columns of table MyTable1 in MyDatabase. However, some of the fields (the 6th, 9th, and 10th) also contain "sub-fields" as comma-separated values. The number of values within each of these sub-fields is variable and can change on each line of the log.

I need these sub-fields to be stored in separate tables: MyItem_type, MyVendor_groups, and MyReqs.

These "secondary" tables have 3 columns, recording the Unique-Record-ID and Quantity against each occurrence in the log. So the schema of the MyItem_type table looks like:

Unique-Record-ID | item_profile | Quantity

Similarly the schema of MyVendor_groups looks like:

Unique-Record-ID | vendor_category | Quantity

and the schema of MyReqs looks like:

Unique-Record-ID | req_type | Quantity

Consider these sample lines from the log:

unique-record-id usename date Quantity BOQ possible,item,profiles Count Vendor applicable,vendor,categories known,request,types vendor_code credit

234.44.tfhj Sam 22-03-2016  22  prod1   cat1,cat22,cat36,cat44  66  ven1    t1,t33,t43,t49  req1,req2,req3,req4 blue    64.22

234.45.tfhj Alex    23-03-2016  100 prod2   cat10,cat36,cat42   104 ven1    t22,t45 req1,req2,req33,req5    red 66

234.44.tfhj Vikas   24-03-2016  88  prod1   cat101,cat316,cat43 22  ven2    t22,t43 req1,req23,req3,req6    red 77.12

234.47.tfhj Jane    25-03-2016  22  prod7   cat10,cat36,cat44   43  ven3    t77 req1,req24,req3,req7    green   45.89

234.48.tfhj John    26-03-2016  97  serv3   cat101,cat36,cat45  69  ven5    t1  req11,req2,req3,req8    orange  33.04

234.49.tfhj Ruby    27-03-2016  85  prod58  cat10,cat38,cat46   88  ven9    t33,t55,t99 req1,req24,req3,req9    white   46.04

234.50.tfhj Ahmed   28-03-2016  44  serv7   cat110,cat36,cat47  34  ven11   t22,t43,t77 req1,req20,req3,req10   red 43

My parser should store the above log in MyDatabase.MyTable1 as:

unique-record-id    |   usename |   date    |   Quantity    |   BOQ |   item_profile    |   Count   |   Vendor  |   vendor_category |   req_type    |   vendor_code |   credit
234.44.tfhj |   Sam |   22-03-2016  |   22  |   prod1   |   cat1,cat22,cat36,cat44  |   66  |   ven1    |   t1,t33,t43,t49  |   req1,req2,req3,req4 |   blue    |   64.22
234.45.tfhj |   Alex    |   23-03-2016  |   100 |   prod2   |   cat10,cat36,cat42   |   104 |   ven1    |   t22,t45 |   req1,req2,req33,req5    |   red |   66
234.44.tfhj |   Vikas   |   24-03-2016  |   88  |   prod1   |   cat101,cat316,cat43 |   22  |   ven2    |   t22,t43 |   req1,req23,req3,req6    |   red |   77.12
234.47.tfhj |   Jane    |   25-03-2016  |   22  |   prod7   |   cat10,cat36,cat44   |   43  |   ven3    |   t77 |   req1,req24,req3,req7    |   green   |   45.89
234.48.tfhj |   John    |   26-03-2016  |   97  |   serv3   |   cat101,cat36,cat45  |   69  |   ven5    |   t1  |   req11,req2,req3,req8    |   orange  |   33.04
234.49.tfhj |   Ruby    |   27-03-2016  |   85  |   prod58  |   cat10,cat38,cat46   |   88  |   ven9    |   t33,t55,t99 |   req1,req24,req3,req9    |   white   |   46.04
234.50.tfhj |   Ahmed   |   28-03-2016  |   44  |   serv7   |   cat110,cat36,cat47  |   34  |   ven11   |   t22,t43,t77 |   req1,req20,req3,req10   |   red |   43

And also parse "possible,item,profiles" into MyDatabase.MyItem_type as:

Unique-Record-ID | item_profile | Quantity
234.44.tfhj |   cat1    |   22
234.44.tfhj |   cat22   |   22
234.44.tfhj |   cat36   |   22
234.44.tfhj |   cat44   |   22
234.45.tfhj |   cat10   |   100
234.45.tfhj |   cat36   |   100
234.45.tfhj |   cat42   |   100
234.44.tfhj |   cat101  |   88
234.44.tfhj |   cat316  |   88
234.44.tfhj |   cat43   |   88
234.47.tfhj |   cat10   |   22
234.47.tfhj |   cat36   |   22
234.47.tfhj |   cat44   |   22
234.48.tfhj |   cat101  |   97
234.48.tfhj |   cat36   |   97
234.48.tfhj |   cat45   |   97
234.49.tfhj |   cat10   |   85
234.49.tfhj |   cat38   |   85
234.49.tfhj |   cat46   |   85
234.50.tfhj |   cat110  |   44
234.50.tfhj |   cat36   |   44
234.50.tfhj |   cat47   |   44

We also need to similarly parse "applicable,vendor,categories" into MyDatabase.MyVendor_groups, and "known,request,types" into MyDatabase.MyReqs. The first column of MyItem_type, MyVendor_groups, and MyReqs will always be the Unique-Record-ID that was seen in the log.

So yes, unlike the other columns, this column does not contain unique data in these three tables. The third column will always be the Quantity that was seen in the log.

I know a bit of PCRE, but it is the use of nested parsers in syslog-ng that completely confuses me.

The syslog-ng documentation suggests this is possible, but I simply failed to find a good example. If any kind hacker around here has a reference or sample to share, it would be very useful.
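If the nested syslog-ng parsers prove unwieldy, one fallback sketch (not syslog-ng-native; the table and column layout is taken from the question, everything else is an assumption) is to hand each line to a small script, e.g. via a program() destination, and do the splitting and SQLite inserts there:

```python
import sqlite3

def parse_line(line):
    """Split one tab-separated log line into its 12 fields, or None."""
    fields = line.rstrip("\n").split("\t")
    return fields if len(fields) == 12 else None

def store(conn, fields):
    """Insert the main row, then explode the comma-separated sub-fields."""
    uid, qty = fields[0], fields[3]
    conn.execute(
        "INSERT INTO MyTable1 VALUES (?,?,?,?,?,?,?,?,?,?,?,?)", fields)
    for profile in fields[5].split(","):       # possible,item,profiles
        conn.execute("INSERT INTO MyItem_type VALUES (?,?,?)",
                     (uid, profile, qty))
    for category in fields[8].split(","):      # applicable,vendor,categories
        conn.execute("INSERT INTO MyVendor_groups VALUES (?,?,?)",
                     (uid, category, qty))
    for req in fields[9].split(","):           # known,request,types
        conn.execute("INSERT INTO MyReqs VALUES (?,?,?)", (uid, req, qty))
```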

Thanks in advance.


Source: (StackOverflow)

How to increase the log message size beyond 8K in syslog-ng

Syslog-ng seems to allow only up to 8192 bytes for log_msg_size, after which it splits the log message into multiple messages. Setting this in the global options or on the source to more than 8192 does not seem to work. I was wondering whether there are other options I need to set so that very long logs aren't split up. I realize this might be a very rare requirement, but the application logging was designed poorly and we need this functionality while the logging is being fixed.

Looking at the source code, it seems that log_msg_size is stored as a gint type, which if I recall correctly lets me store up to +32767, right?

If the max I can set is 8192, then I guess I'll have to come up with something else to process the split logs; otherwise, any help would be appreciated.
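For reference, this is how the option is normally raised (a sketch; whether values above 8192 are honored may depend on your syslog-ng version):

```
options {
    log-msg-size(65536);   # in bytes; older configs spell it log_msg_size()
};
```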


Source: (StackOverflow)

Newlines get stripped in syslog-ng

We have implemented centralised logging using syslog-ng on our load-balanced servers. The history of that setup can be seen here: How do I set up PHP logging to go to a remote server?.
It's working fine, but the newlines are getting stripped at the destination. Is there any way to keep the newlines intact? Here's our config:

Source

destination php { tcp("server.com" port(xxxx)); };  
log { source(s_all); filter(f_php); destination(php); };  
filter f_php { facility(user); };  

Destination

destination d_php { file("$PROGRAM" owner(www-data) group(www-data) perm(0644)); };  
filter f_php { program("^\/var\/log\/"); };  
log { source(s_all); filter(f_php); destination(d_php); flags(final); };  

Source: (StackOverflow)

Using Apache Kafka for log aggregation

I am learning Apache Kafka from the quickstart tutorial: http://kafka.apache.org/documentation.html#quickstart. Up to now, I have done the setup as follows: a producer node, where a web server is running at port 8888, and a Kafka server (broker), consumer, and ZooKeeper instance on another node. I have tested the default console/file producer and consumer with 3 partitions. The setup works, and I am able to see the messages I sent in the order they were created (within each partition).

Now I want to send the logs generated by the web server to the Kafka broker; these messages will be processed by a consumer later. Currently I am using syslog-ng to capture server logs in a text file. I have come up with 3 rough ideas on how to implement a producer that uses Kafka for log aggregation:

Producer Implementations

First kind: Listen on the TCP port syslog-ng forwards to, fetch each message, and send it to the Kafka server. Here we have two middle processes: the producer and syslog-ng.
Second kind: Use syslog-ng itself as the producer; find a way to make it send messages to the Kafka server instead of writing them to a file. Syslog-ng, the producer, is the only middle process.
Third kind: Configure the web server itself as the producer.

Am I correct in my thinking? In the last case we don't have any middle process, but I suspect its implementation will affect server performance. Can anyone tell me the best way of using Apache Kafka (if the above 3 are not good) and guide me through the appropriate server configuration?
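A sketch of the "second kind" (all names are placeholders, and it assumes the stock kafka-console-producer.sh from the Kafka distribution is installed on the syslog-ng host): syslog-ng's program() destination keeps a single long-lived pipe to the command and writes one message per line, which the console producer turns into one Kafka record each:

```
destination d_kafka {
    program("/opt/kafka/bin/kafka-console-producer.sh --broker-list broker:9092 --topic weblogs"
            template("$MSGONLY\n"));
};
log { source(s_all); destination(d_kafka); };
```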

P.S.: I am using node.js for my web server

Thanks,
Sarath


Source: (StackOverflow)

How do you know if syslog-ng stops your listening daemon?

I wrote a PHP program that hooks into syslog-ng (via syslog-ng.conf) and it's basically this:

while (!feof(STDIN)) {
    $input = fgets(STDIN);
    process($input);
}
cleanup();

where process() and cleanup() are defined by me.

The problem I am facing is that cleanup() is never called, and I need it to be executed before the program exits.

I have tried to catch SIGTERM, SIGHUP, and SIGPIPE using pcntl_signal(), and that part of the application seems to work fine (if I use kill -1 on the PHP process, my signal handler gets called and runs cleanup()), but it appears that I am not getting those signals from syslog-ng.

I have tried setting STDIN to non-blocking, thinking that PHP wasn't calling the signal handlers because the stream was blocking. That didn't work either; my signal handlers weren't called.

How do I know when syslog-ng is about to stop my application, so I can do some cleanup?

Thanks, Tom

UPDATE: I've tried to catch all the signals from 1 to 31, and the process still doesn't receive anything when syslog-ng is restarted (or killed with SIGTERM).
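One PHP-side cause worth ruling out (an assumption, not a confirmed diagnosis): pcntl signal handlers only run at "tick" points, so without declare(ticks=1), or periodic pcntl_signal_dispatch() calls on PHP 5.3+, a pending SIGTERM never reaches the handler. A sketch, where process() and cleanup() are your own routines:

```php
<?php
declare(ticks = 1);           // let PHP check for pending signals

pcntl_signal(SIGTERM, function ($signo) {
    cleanup();                // your cleanup routine
    exit(0);
});

while (!feof(STDIN)) {
    $input = fgets(STDIN);
    if ($input === false) {   // pipe closed by syslog-ng
        break;
    }
    process($input);          // your processing routine
}
cleanup();                    // also reached on normal EOF
```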


Source: (StackOverflow)

Splunk: Apache Access & Apache Errors in One Project

I want to use rsyslog to send Apache access and Apache error logs to the same project in Splunk Storm. According to this answer, I can either create two separate projects (which to me seems like a waste of the maximum 3 allowed) or add a marker to my events. Therefore:

  1. Can the marker be any name/value pair, such as tag=\"access\" and tag=\"error\" (I'm using rsyslog's $template line syntax)?
  2. Which sourcetype do I use: syslog or generic single-line data?
  3. Do these two rsyslog templates look acceptable? access_log is NCSA combined and error_log is the Apache 2.2 standard.

$template access,"<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% tag=\"access\"] %msg%"

$template error,"<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% tag=\"error\"] %msg%"


Source: (StackOverflow)

syslog-ng to redis problems: able to write to file, but unable to write to redis

I need to use Redis as a key-value message store for Logstash to read from. The idea is to use the existing syslog-ng server to route the syslogs of all servers to the Redis server so Logstash can read from it. I have my Redis server set up and am able to connect and write to Redis from the syslog-ng server using:

telnet redis.somedomain.com 6379

So the port is open and can be written to; however, the key-value entries never arrive in Redis. I already have the majority of this system working, using UDP as well as appending to individual host files under /var/log/hosts. The change I have made to my existing syslog-ng.conf is as follows:

# In Redis Protocol Notation
# $5 = 5 characters(LPUSH), $4 = 4 characters(logs), $(length $MSG) = character length of $MSG,
# $MSG = Log Message per syslog-ng symbols

template t_redis_lpush { template("*3\r\n$5\r\nLPUSH\r\n$4\r\nlogs\r\n$(length $MSG)\r\n$MSG\r\n"); };
destination d_redis_tcp { tcp("redis.somedomain.com" port(6379) template(t_redis_lpush)); };
log { source(remote); source(noforward); filter(f_messages);  destination(d_redis_tcp); flags(final); };

I did not include the contents of the f_messages filter since it already works and is in use to send logs over UDP and to /var/log/hosts. If anyone would like me to extract the filter functions, I can post those as well. filter(f_messages) ends up producing a result along the lines of:

"Jan 21 14:27:23 www1/www1 10.252.4.152 - - [21/Jan/2014:14:27:23 -0700] "POST /service.php?session_name=6tiqbpfeu1uc31pg1eimjqpvt0&url=%2Fseo%2FinContentLinks%2Fblogs.somedomain.com%7Cmusic%7C2013%7C12%7Cinterview_fredo.php%2F HTTP/1.1" 200 536 www1.nyc.somedomain.com "66.156.238.1" "-" "Arch Quickcurl" "8126464" 0 92878"

Does anyone have any idea why my Redis template, destination, and log shipper for syslog-ng are not working?
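One thing worth checking (an assumption on my part, not a verified fix): in syslog-ng template strings, $4 and $5 are themselves macros (regexp match groups), so the literal dollar signs required by the Redis protocol would need escaping as $$. A sketch, assuming your syslog-ng version ships the $(length) template function:

```
template t_redis_lpush {
    template("*3\r\n$$5\r\nLPUSH\r\n$$4\r\nlogs\r\n$$$(length ${MSG})\r\n${MSG}\r\n");
};
```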

Thanks in advance! Cole


Source: (StackOverflow)