QoS interview questions
Top frequently asked QoS interview questions
I am going through Linux networking device driver code and wanted to know whether it is possible to call device layer code from driver code.
--- a/drivers/net/ethernet/realtek/8139too.c
+++ b/drivers/net/ethernet/realtek/8139too.c
@@ -1706,10 +1706,20 @@ static netdev_tx_t rtl8139_start_xmit (struct sk_buff *skb,
 unsigned int entry;
 unsigned int len = skb->len;
 unsigned long flags;
-
+int ret = 0;
 /* Calculate the next Tx descriptor entry. */
 entry = tp->cur_tx % NUM_TX_DESC;
+
+ret = dev_queue_xmit(skb);
+
+if (likely(ret == NET_XMIT_SUCCESS || ret == NET_XMIT_CN)) {
+} else {
+        dev->stats.tx_dropped++;
+}
+
In the above code, I tried to call dev_queue_xmit(skb), which is an interface to the device layer and is hooked into the Linux QoS code.
I made these changes in the hope that packet drops due to Linux traffic control would be captured by the ifconfig stats under the TX dropped field, but I am not sure whether these changes would work.
Is it possible to call the device layer from the driver layer in the way I tried?
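For comparison, here is a minimal sketch, not taken from 8139too.c (the limit check, constant and function name are illustrative), of how a driver conventionally accounts for a packet it drops inside its own ndo_start_xmit, without calling back up into dev_queue_xmit, which is the upper-layer entry point that handed the skb to the driver in the first place:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

#define EXAMPLE_HW_TX_LIMIT 1536   /* illustrative hardware limit, not a real 8139 value */

static netdev_tx_t example_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
        if (skb->len > EXAMPLE_HW_TX_LIMIT) {
                dev_kfree_skb_any(skb);       /* free the skb ... */
                dev->stats.tx_dropped++;      /* ... and record the drop shown by ifconfig */
                return NETDEV_TX_OK;          /* a deliberate drop is not an error to the stack */
        }
        /* ... otherwise queue the skb to the hardware ring as usual ... */
        return NETDEV_TX_OK;
}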
Source: (StackOverflow)
Is it a good idea to use sockets to send data between two servers, or should I use something like MQ for moving the data?
My question: are sockets reliable if I need once-only/assured delivery of the data?
Are there any other solutions?
Thanks.
Source: (StackOverflow)
I have an Ooma VoIP device on my home network. I do not want it to act as the router to the internet as I already have a dual-NIC Linux box that is working just fine. I do want to start using a priority qdisc to classify all traffic from the Ooma device as high priority, torrents as low priority, and everything else as normal priority. I've tried a variety of settings and I must not be doing it quite right, as everything I try dumps nearly all packets into the middle priority class. On my Linux box (CentOS 6), eth1 is the internal network and eth0 goes to the internet.
Thanks!
Source: (StackOverflow)
I wanted to simulate a situation where a ping/ICMP packet flowing through the egress path is dropped due to Linux QoS, and these dropped packets are captured by the VLAN stats under the ifconfig command.
I want to trace down the code where the packet drop counters are updated in the VLAN 802.1Q code. I have identified the vlan_dev.c file, but for confirmation I need to simulate the above scenario.
Source: (StackOverflow)
I am going through some documentation of a VoIP software package that uses Live555 as the underlying network layer. As per the RFC for RTSP, Live555 seems to have implemented it, but the output is not clear to me. From the question in the Live555 archives here, it seems that to get jitter in terms of micro- or milliseconds, I have to divide the jitter value by the sampling frequency. But what about the network bit-rate? Should I use it to divide the jitter value to derive jitter in micro/milliseconds?
Any help is appreciated
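If the value in question is the RFC 3550 interarrival jitter, it is expressed in RTP timestamp units, so it is converted to time by dividing by the RTP clock rate (the codec's sampling frequency for audio streams), not by the network bit-rate. A tiny sketch of the conversion; the 8000 Hz clock rate and the jitter value are assumed example numbers:

#include <stdio.h>

int main(void)
{
    /* Example RTCP jitter value, in RTP timestamp units (assumed number). */
    double jitter_ts_units = 160.0;
    /* RTP clock rate: 8000 Hz is typical for narrowband audio (e.g. G.711);
     * video streams commonly use 90000 Hz.  This is NOT the network bit-rate. */
    double clock_rate_hz = 8000.0;

    double jitter_ms = jitter_ts_units / clock_rate_hz * 1000.0;
    printf("jitter = %.2f ms\n", jitter_ms);   /* 160 / 8000 * 1000 = 20 ms */
    return 0;
}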
Source: (StackOverflow)
We have been using Apache 2.2 (MPM Worker) for years now and we intend to migrate to Apache 2.4.
Our architecture is heavily shared and we manage about 500 applications. We have chosen to split these applications by technology and to associate one HTTP instance per product (Tomcat 5/6/7, WebSphere).
In this configuration, our WebSphere HTTP instance, for example, handles something like 300 virtual hosts. With Apache 2.2 we use the mod_qos module in order to prevent one application from taking all the threads of this HTTP instance, by limiting the number of simultaneous connections per virtual host.
Unfortunately the mod_qos module is not compatible with Apache 2.4, and indeed my HTTP instances have not been stable since I tried this combination (Apache 2.4 in worker mode + mod_qos).
I'm actually surprised that Apache does not provide mod_qos functionality natively, as it addresses a recurring problem. Here are my questions:
Is there any alternative to mod_qos with Apache 2.4 (I haven't found one so far)?
Without such a module, how can you prevent an application from taking all the threads on a shared platform?
Thanks in advance for your feedback.
Sylvain
Source: (StackOverflow)
Quality of Service (QoS) was designed to manage bandwidth usage, which implicitly assumes that applications compete for that (limited) resource. Is that really, ever a concern for ANY applications these days?
It also assumes that the QoS protocols and Internet Protocol options are implemented on both client and server ends, and recognized and honored on each network element in between (e.g., all switches, routers, proxies, and NATs). Is that ever true on anything other than, maybe, between two hosts on the same subnet, or on a highly-managed enterprise network?
And finally, has anyone ever used the QoS APIs AND identified an actual benefit? In other words, did it ever "save the day", and avert a problem that would surely have happened otherwise?
thanks, bob
Source: (StackOverflow)
I'm creating a kind of access point.
I capture all packets, of all types, from my machine in order to prioritize them before forwarding them, according to the default Quality of Service (QoS) classes.
By calling socket() with the ETH_P_ALL parameter, I can get all incoming packets of any protocol type:
if ((sockfd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL))) < 0) {
    perror("socket");
    exit(1);
}
By using the ethhdr, iphdr, tcphdr and udphdr structs, I can't retrieve information on which application sent each packet.
However, both VoIP and SNMP use UDP, and I don't know which of the two sent me a given UDP packet.
I'd like to know which applications are sending the UDP packets, so I may follow the QoS classes and forward some packets (e.g. conversational voice) before others (e.g. e-mail).
In order to recognize the protocol, should I use the list of TCP and UDP port numbers?
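Port-based classification is indeed the usual first step (with the caveat that RTP media often uses dynamic ports, so VoIP media may need signalling inspection or heuristics on top). A rough sketch of classifying a frame read from the AF_PACKET socket above by well-known UDP destination port; the class names and the port-to-class mapping are illustrative choices, not a standard:

#include <stddef.h>
#include <stdint.h>
#include <arpa/inet.h>
#include <netinet/ip.h>
#include <netinet/udp.h>
#include <linux/if_ether.h>

/* Illustrative traffic classes. */
enum traffic_class { CLASS_VOICE_SIGNALLING, CLASS_MANAGEMENT, CLASS_DEFAULT };

/* Classify one Ethernet frame by looking at well-known UDP destination ports. */
static enum traffic_class classify_frame(const uint8_t *frame, size_t len)
{
    if (len < sizeof(struct ethhdr) + sizeof(struct iphdr))
        return CLASS_DEFAULT;

    const struct ethhdr *eth = (const struct ethhdr *)frame;
    if (ntohs(eth->h_proto) != ETH_P_IP)
        return CLASS_DEFAULT;

    const struct iphdr *ip = (const struct iphdr *)(frame + sizeof(struct ethhdr));
    size_t udp_off = sizeof(struct ethhdr) + ip->ihl * 4;
    if (ip->protocol != IPPROTO_UDP || len < udp_off + sizeof(struct udphdr))
        return CLASS_DEFAULT;

    const struct udphdr *udp = (const struct udphdr *)(frame + udp_off);
    switch (ntohs(udp->dest)) {
    case 5060:               /* SIP signalling (VoIP) */
        return CLASS_VOICE_SIGNALLING;
    case 161:                /* SNMP */
    case 162:                /* SNMP traps */
        return CLASS_MANAGEMENT;
    default:
        return CLASS_DEFAULT;
    }
}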
Source: (StackOverflow)
Our Windows app asks a 3rd party DLL to make a TCP connection to a server. We need to apply QoS parameters to this TCP connection, in order to reduce latency. Any ideas on how to do that? We're open both to suggestions that involve external tools, and letting our app call the Windows API.
The app runs on Windows XP and newer.
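One API-level option, if your code can get hold of the connected SOCKET (e.g. the DLL exposes it, or you can wrap the connection yourself), is the qWAVE/QoS2 API, which attaches a traffic type to a socket and lets Windows apply DSCP/802.1p marking. Note that qWAVE requires Vista or later; on XP you would need the older GQoS/Traffic Control APIs or an external policy (e.g. DSCP marking via Group Policy or router-side rules). A rough sketch, assuming you have the socket and its peer address (for example from getpeername):

#include <winsock2.h>
#include <ws2tcpip.h>
#include <qos2.h>
#include <stdio.h>

#pragma comment(lib, "qwave.lib")
#pragma comment(lib, "ws2_32.lib")

/* Attach an already-connected TCP socket to a qWAVE flow so Windows
 * marks its traffic as latency-sensitive (Vista and later). */
static BOOL mark_socket_low_latency(SOCKET s, struct sockaddr *peer)
{
    QOS_VERSION version = { 1, 0 };
    HANDLE qos_handle = NULL;
    QOS_FLOWID flow_id = 0;            /* 0 requests a new flow */

    if (!QOSCreateHandle(&version, &qos_handle)) {
        fprintf(stderr, "QOSCreateHandle failed: %lu\n", GetLastError());
        return FALSE;
    }

    /* QOSTrafficTypeAudioVideo (or QOSTrafficTypeVoice) gets low-latency
     * treatment; pick whichever matches the traffic. */
    if (!QOSAddSocketToFlow(qos_handle, s, peer,
                            QOSTrafficTypeAudioVideo,
                            QOS_NON_ADAPTIVE_FLOW, &flow_id)) {
        fprintf(stderr, "QOSAddSocketToFlow failed: %lu\n", GetLastError());
        QOSCloseHandle(qos_handle);
        return FALSE;
    }
    return TRUE;
}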
Source: (StackOverflow)
I have a RESTful-style RPC (remote procedure call) API running on a Tomcat server that processes data of N users with M tasks on K threads. Mostly one user has around 20 to 500 tasks (but M could be between 1 and 5000). One task needs around 10 to 20 seconds to complete, but can take between 1 second and 20 minutes. Currently the system mostly has one user, sometimes up to three, but this will increase to around 10 users at the same time in the near future. Our server has 10 cores, therefore I'd like to use 10 threads. At the moment every user gets 5 threads for processing, which works fine. But most of the time the machine is only 50% utilized (which results in needless waiting in the "30-minute" range), while at other times the server load is up to 150%.
Requirements for a solution:
- at all times the server is utilized to 100% (if there are tasks)
- all users are treated the same regarding thread execution (the same number of threads finished as every other user)
- a new user does not have to wait until all tasks of an earlier user are done (especially in the case where user1 has 5000 tasks and user2 has 1, this is important)
Solutions that come to mind:
- just use a FixedThreadPoolExecutor with 10 threads: violates condition 3
- use a PriorityBlockingQueue and implement the compareTo method in my task -> I cannot use the ThreadPoolExecutor's submit method (and therefore do not know when a submitted task is finished)
- implement a "round robin"-like blocking queue, where the K threads (in our case 10) take new tasks from the N internal queues in a round-robin way -> to be able to put a task into the right queue, I need a "submit" method that takes more than one parameter (and I need to implement a ThreadPoolExecutor, too)
I tried to make an illustration of what I mean by a round-robin-like blocking queue (if it is not helpful, feel free to edit it out):
-- --
-- -- -- -- queue task load,
-- -- -- -- -- -- -- one task denoted by --
-- -- -- -- -- -- -- --
| Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | QN |
| * ^ |
| last| |next |
| -------------
\ /
\ | | | | |
| T1 | T2 | T3 | T4 | TK |
Is there an elegant solution using mostly Java standard APIs (or any other widespread Java API) to achieve this kind of processing behavior (be it one of my proposed solutions or another one)? Or do you have any other hints on how to tackle this issue?
Source: (StackOverflow)
I came across a mobile application which performs voice and video quality tests to give a measure of the quality of the voice/video experience over an IP connection. The test calculates the values of jitter, packet loss etc. for the remote stream.
I am curious to know how this is done. What would it take to write such a mobile application?
Help in any form is appreciated.
Thanks.
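Such apps typically receive the RTP stream themselves (or read RTCP receiver reports) and apply the RFC 3550 arithmetic on the receive side: the interarrival jitter estimator J += (|D| - J)/16 and a loss count derived from sequence-number gaps. A minimal sketch of that bookkeeping; the struct and function names are made up for illustration, and arrival times must be expressed in the same clock units as the RTP timestamps:

#include <stdint.h>

/* Illustrative per-stream receiver state (names are assumptions,
 * not from any particular library). */
struct rtp_stats {
    double   jitter;        /* RFC 3550 interarrival jitter, in RTP timestamp units */
    uint32_t last_transit;  /* last (arrival - rtp_timestamp) difference */
    uint16_t last_seq;      /* last RTP sequence number seen */
    uint32_t received;
    uint32_t lost;
    int      started;
};

/* Update jitter and loss for one received RTP packet.  arrival_ts must be
 * the arrival time converted to the RTP clock (e.g. 8000 Hz for G.711). */
static void rtp_stats_update(struct rtp_stats *s, uint32_t rtp_ts,
                             uint32_t arrival_ts, uint16_t seq)
{
    uint32_t transit = arrival_ts - rtp_ts;

    if (s->started) {
        /* RFC 3550 jitter estimator: J += (|D| - J) / 16 */
        int32_t d = (int32_t)(transit - s->last_transit);
        if (d < 0)
            d = -d;
        s->jitter += ((double)d - s->jitter) / 16.0;

        /* Count sequence-number gaps as losses (ignores reordering). */
        uint16_t expected = (uint16_t)(s->last_seq + 1);
        if (seq != expected)
            s->lost += (uint16_t)(seq - expected);
    }

    s->last_transit = transit;
    s->last_seq = seq;
    s->received++;
    s->started = 1;
}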
Source: (StackOverflow)
I want to determine the following QoS attributes of my service:
- Response Time
- Reliability
- Availability
I will be creating an application that will select a service based on the mentioned attributes.
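There is no single standard formula for these, but commonly used working definitions are: response time = average of (response timestamp - request timestamp), reliability = successful invocations / total invocations, and availability = time the service was answering / total observation time. A small sketch of that bookkeeping; the struct, the function names and the invoke() callback are illustrative assumptions:

#include <time.h>

/* Illustrative per-service measurement record; the fields and the
 * definitions below are common conventions, not a formal standard. */
struct service_qos {
    double total_response_s;  /* sum of observed response times, seconds */
    unsigned long calls;      /* total invocations attempted */
    unsigned long successes;  /* invocations that returned a valid result */
    double up_s;              /* seconds the service answered health probes */
    double observed_s;        /* total observation window, seconds */
};

static double avg_response_time(const struct service_qos *q)
{
    return q->calls ? q->total_response_s / q->calls : 0.0;
}

static double reliability(const struct service_qos *q)   /* successes / calls */
{
    return q->calls ? (double)q->successes / q->calls : 0.0;
}

static double availability(const struct service_qos *q)  /* uptime / total time */
{
    return q->observed_s > 0.0 ? q->up_s / q->observed_s : 0.0;
}

/* Example: record one invocation, timing it with a monotonic clock. */
static void record_call(struct service_qos *q, int (*invoke)(void))
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    int ok = invoke();                    /* hypothetical service call */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    q->total_response_s += (t1.tv_sec - t0.tv_sec)
                         + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    q->calls++;
    if (ok)
        q->successes++;
}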
Source: (StackOverflow)
Just wondering why IPv5 was never used? It's based on ST-II, right? A QoS extension for IPv4, if I'm not getting it completely wrong.
Does it also have to do with the comparison with the RSVP protocol?
Source: (StackOverflow)
I have an embedded application that has this requirement: one outgoing TCP network stream needs absolute highest priority over all other outgoing network traffic. If there are any packets waiting to be transferred on that stream, they should be the next packets sent. Period.
My measure of success is as follows: Measure the high priority latency when there is no background traffic. Add background traffic, and measure again. The difference in latency should be the time to send one low priority packet. With a 100Mbps link, mtu=1500, that is roughly 150 us. My test system has two linux boxes connected by a crossover cable.
I have tried many, many things, and although I have improved latency considerably, have not achieved the goal (I currently see 5 ms of added latency with background traffic). I posted another, very specific question already, but thought I should start over with a general question.
First Question: Is this possible with Linux?
Second Question: If so, what do I need to do?
- tc?
- What qdisc should I use?
- Tweak kernel network parameters? Which ones?
- What other things am I missing?
Thanks for your help!
Eric
Update 10/4/2010:
I set up tcpdump on both the transmit side and the receive side. Here is what I see on the transmit side (where things seem to be congested):
0 us Send SCP (low priority) packet, length 25208
200 us Send High priority packet, length 512
On the receive side, I see:
~ 100 us Receive SCP packet, length 548
170 us Receive SCP packet, length 548
180 us Send SCP ack
240 us Receive SCP packet, length 548
... (Repeated a bunch of times)
2515 us Receive high priority packet, length 512
The problem appears to be the length of the SCP packet (25208 bytes). This is broken up into multiple packets based on the MTU (which I had set to 600 for this test). However, that happens in a lower network layer than the traffic control, and thus my latency is being determined by the maximum TCP transmit packet size, not the MTU! Arghhh..
Anyone know a good way to set the default maximum packet size for TCP on Linux?
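On the last point, one per-socket knob is TCP_MAXSEG, which caps the MSS if it is set before the connection is established; SO_PRIORITY is also worth setting on the high-priority socket so a prio/pfifo_fast qdisc can put it in the highest band. A rough sender-side sketch (whether a smaller segment size actually closes the remaining latency gap is something to measure; large local writes can still reach the qdisc as oversized GSO/TSO segments unless those offloads are disabled):

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* Create the high-priority TCP socket with a capped segment size and an
 * elevated queueing priority; both options are set before connect(). */
static int make_priority_socket(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return -1;
    }

    int mss = 536;        /* cap the MSS (payload bytes per TCP segment) */
    if (setsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &mss, sizeof(mss)) < 0)
        perror("setsockopt TCP_MAXSEG");

    int prio = 6;         /* TC_PRIO_INTERACTIVE: highest band of pfifo_fast/prio */
    if (setsockopt(fd, SOL_SOCKET, SO_PRIORITY, &prio, sizeof(prio)) < 0)
        perror("setsockopt SO_PRIORITY");

    int one = 1;          /* avoid Nagle delays on the latency-sensitive stream */
    if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0)
        perror("setsockopt TCP_NODELAY");

    return fd;
}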
Source: (StackOverflow)