
embedded interview questions

Top embedded frequently asked interview questions

Are Exceptions still undesirable in Realtime environment?

A couple of years ago I was taught that in real-time applications such as embedded systems or (non-Linux) kernel development, C++ exceptions are undesirable. (Maybe that lesson predates gcc 2.95.) But I also know that exception handling has become better.

So, are C++-Exceptions in the context of real-time applications in practice

  • totally unwanted?
  • even to be switched off via a compiler switch?
  • or very carefully usable?
  • or handled so well now that one can use them almost freely, with a couple of things in mind?
  • Does C++11 change anything w.r.t. this?

Update: Does exception handling really require RTTI to be enabled (as one answerer suggested)? Are there dynamic casts involved, or similar?
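
For reference, mainstream toolchains can compile both exceptions and RTTI out entirely; with GCC or Clang that looks something like the line below (the flags are standard, whether to use them is exactly what this question asks). Note that -fno-exceptions rejects code using try/throw, and -fno-rtti removes dynamic_cast and typeid support for polymorphic types.

g++ -fno-exceptions -fno-rtti -O2 -o app main.cpp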


Source: (StackOverflow)

Direct Memory Access in Linux

I'm trying to access physical memory directly for an embedded Linux project, but I'm not sure how I can best designate memory for my use.

If I boot my device regularly and access /dev/mem, I can easily read and write just about anywhere I want. However, in this case I'm accessing memory that can easily be allocated to any process, which I don't want to do.

My code for /dev/mem is (all error checking, etc. removed):

mem_fd = open("/dev/mem", O_RDWR);
mem_p = malloc(SIZE + (PAGE_SIZE - 1));   /* over-allocate so the buffer can be page-aligned */
if ((unsigned long) mem_p % PAGE_SIZE) {
    /* round up to the next page boundary */
    mem_p += PAGE_SIZE - ((unsigned long) mem_p % PAGE_SIZE);
}
mem_p = (unsigned char *) mmap(mem_p, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, mem_fd, BASE_ADDRESS);

And this works. However, I'd like to be using memory that no one else will touch. I've tried limiting the amount of memory that the kernel sees by booting with mem=XXXm, and then setting BASE_ADDRESS to something above that (but below the physical memory), but it doesn't seem to be accessing the same memory consistently.

Based on what I've seen online, I suspect I may need a kernel module (which is OK) that uses either ioremap() or remap_pfn_range() (or both?), but I have absolutely no idea how; can anyone help?

EDIT: What I want is a way to always access the same physical memory (say, 1.5MB worth), and set that memory aside so that the kernel will not allocate it to any other process.

I'm trying to reproduce a system we had in other OSes (with no memory management) whereby I could allocate a space in memory via the linker, and access it using something like

*(unsigned char *)0x12345678

EDIT2: I guess I should provide some more detail. This memory space will be used as a RAM buffer for a high-performance logging solution for an embedded application. In the systems we have, there's nothing that clears or scrambles physical memory during a soft reboot. Thus, if I write a bit to a physical address X and reboot the system, the same bit will still be set after the reboot. This has been tested on the exact same hardware running VxWorks (this logic also works nicely in Nucleus RTOS and OS20 on different platforms, FWIW). My idea was to try the same thing in Linux by addressing physical memory directly; therefore, it's essential that I get the same addresses on each boot.

I should probably clarify that this is for kernel 2.6.12 and newer.

EDIT3: Here's my code, first for the kernel module, then for the userspace application.

To use it, I boot with mem=95m, then insmod foo-module.ko, then mknod /dev/foo c 32 0, then run foo-user, where it dies. Running under gdb shows that it dies at the assignment, although within gdb I cannot dereference the address I get from mmap (although printf can).

foo-module.c

#include <linux/module.h>
#include <linux/config.h>
#include <linux/init.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <asm/io.h>

#define VERSION_STR "1.0.0"
#define FOO_BUFFER_SIZE (1u*1024u*1024u)
#define FOO_BUFFER_OFFSET (95u*1024u*1024u)
#define FOO_MAJOR 32
#define FOO_NAME "foo"

static const char *foo_version = "@(#) foo Support version " VERSION_STR " " __DATE__ " " __TIME__;

static void    *pt = NULL;

static int      foo_release(struct inode *inode, struct file *file);
static int      foo_open(struct inode *inode, struct file *file);
static int      foo_mmap(struct file *filp, struct vm_area_struct *vma);

struct file_operations foo_fops = {
    .owner = THIS_MODULE,
    .llseek = NULL,
    .read = NULL,
    .write = NULL,
    .readdir = NULL,
    .poll = NULL,
    .ioctl = NULL,
    .mmap = foo_mmap,
    .open = foo_open,
    .flush = NULL,
    .release = foo_release,
    .fsync = NULL,
    .fasync = NULL,
    .lock = NULL,
    .readv = NULL,
    .writev = NULL,
};

static int __init foo_init(void)
{
    int             i;
    printk(KERN_NOTICE "Loading foo support module\n");
    printk(KERN_INFO "Version %s\n", foo_version);
    printk(KERN_INFO "Preparing device /dev/foo\n");
    i = register_chrdev(FOO_MAJOR, FOO_NAME, &foo_fops);
    if (i != 0) {
        printk(KERN_ERR "Device couldn't be registered!\n");
        return -EIO;
    }
    printk(KERN_NOTICE "Device ready.\n");
    printk(KERN_NOTICE "Make sure to run mknod /dev/foo c %d 0\n", FOO_MAJOR);
    printk(KERN_INFO "Allocating memory\n");
    pt = ioremap(FOO_BUFFER_OFFSET, FOO_BUFFER_SIZE);
    if (pt == NULL) {
        printk(KERN_ERR "Unable to remap memory\n");
        return -ENOMEM;
    }
    printk(KERN_INFO "ioremap returned %p\n", pt);
    return 0;
}
static void __exit foo_exit(void)
{
    printk(KERN_NOTICE "Unloading foo support module\n");
    unregister_chrdev(FOO_MAJOR, FOO_NAME);
    if (pt != NULL) {
        printk(KERN_INFO "Unmapping memory at %p\n", pt);
        iounmap(pt);
    } else {
        printk(KERN_WARNING "No memory to unmap!\n");
    }
    return;
}
static int foo_open(struct inode *inode, struct file *file)
{
    printk("foo_open\n");
    return 0;
}
static int foo_release(struct inode *inode, struct file *file)
{
    printk("foo_release\n");
    return 0;
}
static int foo_mmap(struct file *filp, struct vm_area_struct *vma)
{
    int             ret;
    if (pt == NULL) {
        printk(KERN_ERR "Memory not mapped!\n");
        return -EAGAIN;
    }
    if ((vma->vm_end - vma->vm_start) != FOO_BUFFER_SIZE) {
        printk(KERN_ERR "Error: sizes don't match (buffer size = %d, requested size = %lu)\n", FOO_BUFFER_SIZE, vma->vm_end - vma->vm_start);
        return -EAGAIN;
    }
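    /*
     * Note: remap_pfn_range() expects a page frame number (physical
     * address >> PAGE_SHIFT) as its third argument, not the kernel
     * virtual address returned by ioremap(), so passing pt here is
     * suspect.
     */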
    ret = remap_pfn_range(vma, vma->vm_start, (unsigned long) pt, vma->vm_end - vma->vm_start, PAGE_SHARED);
    if (ret != 0) {
        printk(KERN_ERR "Error in calling remap_pfn_range: returned %d\n", ret);
        return -EAGAIN;
    }
    return 0;
}
module_init(foo_init);
module_exit(foo_exit);
MODULE_AUTHOR("Mike Miller");
MODULE_LICENSE("NONE");
MODULE_VERSION(VERSION_STR);
MODULE_DESCRIPTION("Provides support for foo to access direct memory");

foo-user.c

#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    int             fd;
    char           *mptr;
    fd = open("/dev/foo", O_RDWR | O_SYNC);
    if (fd == -1) {
        printf("open error...\n");
        return 1;
    }
    mptr = mmap(0, 1 * 1024 * 1024, PROT_READ | PROT_WRITE, MAP_FILE | MAP_SHARED, fd, 4096);
    printf("On start, mptr points to 0x%lX.\n",(unsigned long) mptr);
    printf("mptr points to 0x%lX. *mptr = 0x%X\n", (unsigned long) mptr, *mptr);
    mptr[0] = 'a';
    mptr[1] = 'b';
    printf("mptr points to 0x%lX. *mptr = 0x%X\n", (unsigned long) mptr, *mptr);
    close(fd);
    return 0;
}

Source: (StackOverflow)


Learning kernel hacking and embedded development at home? [closed]

I was always attracted to the world of kernel hacking and embedded systems.
Has anyone got good tutorials (and easily available hardware) for starting to mess with such stuff?
Something like kits for writing drivers, etc., which come with good documentation and are affordable?

Thanks!


Source: (StackOverflow)

When is CRC more appropriate to use than MD5/SHA1?

When is it appropriate to use CRC for error detection versus more modern hashing functions such as MD5 or SHA1? Is the former easier to implement on embedded hardware?
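
By way of illustration, a bitwise CRC is tiny and needs no tables, which is part of why it is attractive on small devices. A minimal sketch of CRC-16-CCITT (a common choice, not the only one):

#include <stdint.h>
#include <stddef.h>

/* Minimal bitwise CRC-16-CCITT (poly 0x1021, init 0xFFFF), no lookup table. */
uint16_t crc16_ccitt(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    while (len--) {
        crc ^= (uint16_t)(*data++) << 8;
        for (int bit = 0; bit < 8; ++bit)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

A CRC like this detects the accidental burst errors typical of noisy links, but it offers no protection against deliberate tampering; that is the job of cryptographic hashes such as MD5 or SHA1.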


Source: (StackOverflow)

Understanding Linux /proc/id/maps

I am trying to understand my embedded Linux application's memory use. The /proc/<pid>/maps file seems to be a good resource for seeing the details. Unfortunately I don't understand all the columns and entries.

Is there good resource/documentation for the /proc/<pid>/maps file?

What do the anonymous (inode 0) entries mean? These seem to be some of the larger memory segments.
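
For reference, each line decodes as address range, permissions, file offset, device, inode, and pathname; a typical entry (illustrative, not from the asker's system) looks like:

08048000-08056000 r-xp 00000000 03:0c 64593    /usr/sbin/gpm

Entries with inode 0 and no pathname are anonymous mappings: memory not backed by any file, such as the heap, the stack, and anonymous mmap() allocations, which is why they tend to be among the larger segments.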


Source: (StackOverflow)

Unit Testing C Code [closed]

I worked on an embedded system this summer written in straight C. It was an existing project that the company I work for had taken over. I have become quite accustomed to writing unit tests in Java using JUnit but was at a loss as to the best way to write unit tests for existing code (which needed refactoring) as well as new code added to the system.

Are there any projects out there that make unit testing plain C code as easy as unit testing Java code with JUnit? Any insight that applies specifically to embedded development (cross-compiling to an arm-linux platform) would be greatly appreciated.
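
There are several frameworks aimed at exactly this; Unity, CUnit, Check, and CppUTest are commonly mentioned for embedded work. The core idea is small enough to sketch in portable C (this harness is invented for illustration, not one of those frameworks):

#include <stdio.h>

static int tests_run = 0;
static int tests_failed = 0;

/* Record a failure with file/line context instead of aborting,
   so one broken test doesn't hide the rest. */
#define CHECK(cond) do { \
        ++tests_run; \
        if (!(cond)) { \
            ++tests_failed; \
            printf("FAIL %s:%d: %s\n", __FILE__, __LINE__, #cond); \
        } \
    } while (0)

/* code under test */
static int add(int a, int b) { return a + b; }

int main(void)
{
    CHECK(add(2, 2) == 4);
    CHECK(add(-1, 1) == 0);
    printf("%d tests, %d failures\n", tests_run, tests_failed);
    return tests_failed != 0;
}

Because a harness like this is a single self-contained translation unit with no dependencies beyond libc, it cross-compiles to an arm-linux target as easily as it runs on the host.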


Source: (StackOverflow)

Anyone using Python for embedded projects? [closed]

My company is using Python for a relatively simple embedded project. Is anyone else out there using Python on embedded platforms? Overall it's working well for us, quick to develop apps, quick to debug. I like the overall "conciseness" of the language.

The only real problem I have in day-to-day work is that the lack of static checking, versus a regular compiler, can cause problems to surface at run-time; e.g., accidentally concatenating a string and an int in a print statement can bring the whole application down.


Source: (StackOverflow)

Optimizing for space instead of speed in C++

When you say "optimization", people tend to think "speed". But what about embedded systems where speed isn't all that critical, but memory is a major constraint? What are some guidelines, techniques, and tricks that can be used for shaving off those extra kilobytes in ROM and RAM? How does one "profile" code to see where the memory bloat is?

P.S. One could argue that "prematurely" optimizing for space in embedded systems isn't all that evil, because you leave yourself more room for data storage and feature creep. It also allows you to cut hardware production costs because your code can run on smaller ROM/RAM.

P.P.S. References to articles and books are welcome too!

P.P.P.S. These questions are closely related: 404615, 1561629


Source: (StackOverflow)

Embedded systems worst practices?

What would you consider "worst practices" to follow when developing an embedded system?

Some of my ideas of what not to do are:

  • Avoiding abstraction of the hardware layer, instead spreading hardware accesses throughout the code.
  • Not having any type of emulation environment, having only the actual hardware to execute on.
  • Avoiding unit tests, perhaps due to the above two points.
  • Not developing the system in a layered structure, so that higher layers can depend on lower layers' functionality being debugged and working.
  • Selecting hardware without considering the software and tools that will use it.
  • Using hardware not designed for easy debugging: no test points, no debug LEDs, no JTAG, etc.

I'm sure there are plenty of good ideas out there on what not to do; let's hear them!


Source: (StackOverflow)

Simple serial point-to-point communication protocol

I need a simple communication protocol between two devices (a PC and a microcontroller). The PC must send some commands and parameters to the micro. The micro must transmit an array of bytes (data from a sensor).

The data must be noise protected (besides parity checking, I think I need some other data correction method).

Is there any standard solution to do this? (I need only an idea, not the complete solution.)

P.S. Any advice is appreciated. P.P.S. Sorry for any grammar mistakes, I hope you understand.

Edit 1. I have not decided whether it will be a master/slave protocol or whether both sides can initiate communication. The PC must know when the micro has finished a job and can send data. It can continuously poll the micro to see whether data is ready, or the micro can send data when a job is done. I don't know which is better and simpler.

Edit 2. Hardware and physical layer protocol. Since the RS-232C serial standard is used on the PC, I will use asynchronous communication. I will use only the RxD, TxD and GND signals. I can't use additional wires because the microcontroller AFAIK doesn't support them. BTW, I'm using the AVR ATmega128 chip.

So I will use a fixed baud rate, 8 bits of data, 2 stop bits, without parity checking (or with?).

Data link protocol. That's what my question is primarily concerned with. Thanks for suggesting the HDLC, PPP and Modbus protocols. I will research them.
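
By way of illustration (a sketch of a homegrown framing scheme, not one of the standards mentioned above; frame_build and the frame layout are invented names), a common minimal data-link frame is sync byte, length, command, payload, and a trailing CRC:

#include <stdint.h>
#include <stddef.h>

#define FRAME_SYNC 0x7E  /* start-of-frame marker */

/* Frame layout: SYNC | LEN | CMD | payload bytes | CRC_HI | CRC_LO.
   crc16() is assumed to be any CRC-16, e.g. CRC-16-CCITT. */
uint16_t crc16(const uint8_t *data, size_t len);

size_t frame_build(uint8_t *out, uint8_t cmd,
                   const uint8_t *payload, uint8_t len)
{
    size_t n = 0;
    out[n++] = FRAME_SYNC;
    out[n++] = len;
    out[n++] = cmd;
    for (uint8_t i = 0; i < len; ++i)
        out[n++] = payload[i];
    uint16_t crc = crc16(out + 1, (size_t)len + 2);  /* covers LEN, CMD, payload */
    out[n++] = (uint8_t)(crc >> 8);
    out[n++] = (uint8_t)(crc & 0xFF);
    return n;  /* total bytes to transmit */
}

A real implementation also needs byte stuffing so the sync value cannot appear inside the payload; that, plus acknowledgements and retransmission, is exactly what HDLC-style protocols standardize.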


Source: (StackOverflow)

C++: optimizing member variable order?

I was reading a blog post by a game coder for Introversion, and he is busily trying to squeeze every CPU tick he can out of the code. One trick he mentions off-hand is to

"re-order the member variables of a class into most used and least used."

I'm not familiar with C++, nor with how it compiles, but I was wondering:

1. Is this statement accurate?
2. How/why?
3. Does it apply to other (compiled/scripting) languages?

I'm aware that the amount of (CPU) time saved by this trick would be minimal; it's not a deal-breaker. But on the other hand, in most functions it would be fairly easy to identify which variables are going to be the most commonly used, and to just start coding this way by default.
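
For what the trick looks like in practice, here is a sketch (the type and its fields are invented for illustration): on common hardware a cache line is 64 bytes, so keeping frequently accessed members together near the front means a single line fetch covers them, while cold bookkeeping data is pushed to the end. Ordering also interacts with padding, since members are aligned to their own size.

#include <stdint.h>

/* Hot fields first so they share one cache line; cold fields last.
   Purely illustrative; measure before relying on this. */
struct entity {
    /* touched every frame */
    float    x, y, z;
    float    vx, vy, vz;
    uint32_t flags;
    /* rarely touched */
    char     name[32];
    uint64_t created_at;
};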


Source: (StackOverflow)

Alternatives to Lua as an embedded language?

I am working on an embedded system running Linux on a DSP. Now we want to make some parts of it scriptable, and we are looking for a nice embeddable scripting language. These scripts should integrate nicely with our existing C++ code base, and be small and fast.

I understand that Lua is the industry choice for problems like this. We will probably go with Lua because it is tried and true and proven to be stable, and so on. However, as a programming language it has some rather quirky corners.

So, what alternatives are out there for embeddable languages?

EDIT:

This is about a year later.

We actually used Lua on our embedded system and it performs marvelously well. Over time, we added more and more scripting support to more and more parts of the project, and that really helped to bring it along.

Performance is outstanding, really. Even rather complex operations that involve searching through long arrays or fancy string operations perform surprisingly well. We basically never ran into Lua-related performance problems at all.

Interfacing with C functions is very straightforward and works really well. This allowed us to grow the scripting system painlessly.
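
To give a feel for that interface, here is a minimal, self-contained example of registering a C function with a Lua interpreter (standard Lua C API calls; the function itself is invented for illustration):

#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

/* A C function exposed to Lua: takes a number, returns its square. */
static int l_square(lua_State *L)
{
    lua_Number x = luaL_checknumber(L, 1);
    lua_pushnumber(L, x * x);
    return 1;  /* number of results left on the stack */
}

int main(void)
{
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);
    lua_register(L, "square", l_square);   /* expose it as square() */
    luaL_dostring(L, "print(square(7))");  /* call it from Lua */
    lua_close(L);
    return 0;
}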

Finally, we were astounded at how flexible Lua proved to be. Our Lua interpreter has to run on a system with a nonstandard memory allocator and without support for the double data type. There are two well-documented places in one header file we had to modify to make Lua work on that system. It is really well suited for embedding!


Source: (StackOverflow)

Power Efficient Software Coding

In a typical handheld/portable embedded device, battery life is a major concern in the design of the hardware, the software, and the features the device can support. From the software programming perspective, one is aware of MIPS-optimized and memory-optimized (data and program) code. I am aware of the hardware deep-sleep and standby modes that are used to clock the hardware at lower rates, or to turn off the clock entirely to unused circuits, to save power. But I am looking for ideas from this point of view:

Given that my code is running and needs to keep executing, how can I write the code "power" efficiently so as to consume the minimum number of watts?

Are there any special programming constructs, data structures, or control structures I should look at to achieve minimum power consumption for a given functionality?

Are there any high-level software design considerations one should keep in mind at the time of code structure design, or during low-level design, to make the code as power-efficient (least power consuming) as possible?
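
One construct that comes up in almost every answer to this kind of question is replacing busy-wait polling with sleeping until an interrupt; a sketch (the device API names here are hypothetical, not a real HAL):

#include <stdbool.h>

bool data_ready(void);               /* hypothetical device query */
void cpu_wait_for_interrupt(void);   /* e.g. the ARM WFI instruction */
void process_data(void);

/* Busy-waiting burns power: */
void poll_loop(void)
{
    for (;;) {
        while (!data_ready())
            ;                        /* CPU spins at full power */
        process_data();
    }
}

/* Sleeping until an interrupt lets the core idle between events: */
void event_loop(void)
{
    for (;;) {
        while (!data_ready())
            cpu_wait_for_interrupt();  /* core sleeps until woken */
        process_data();
    }
}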


Source: (StackOverflow)

How much footprint does C++ exception handling add?

This issue is especially important for embedded development. Exception handling adds some footprint to the generated binary output. On the other hand, without exceptions the errors need to be handled some other way, which requires additional code, which eventually also increases binary size.

I'm interested in your experiences, especially:

1. What is the average footprint added by your compiler for exception handling (if you have such measurements)?
2. Is exception handling really more expensive (many say so), in terms of binary output size, than other error handling strategies?
3. What error handling strategy would you suggest for embedded development?

Please take my questions only as guidance. Any input is welcome.

Addendum: Does anyone have a concrete method/script/tool that, for a specific C++ object/executable, will show the percentage of the loaded memory footprint that is occupied by compiler-generated code and data structures dedicated to exception handling?
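
Absent a dedicated tool, one rough way to measure (standard GCC/binutils usage, not something from the question) is to build the same translation unit with and without exceptions and compare the section sizes; with GCC the unwinding tables typically show up as .eh_frame and .gcc_except_table:

g++ -c -O2 file.cpp -o with_eh.o
g++ -c -O2 -fno-exceptions file.cpp -o without_eh.o
size -A with_eh.o without_eh.o     # compare section by section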


Source: (StackOverflow)

Using Haskell for sizable real-time systems: how (if?)?

I've been curious to understand whether it is possible to apply the power of Haskell to the embedded real-time world, and in googling found the Atom package. I'd assume that in the complex case the code might have all the classical C bugs (crashes, memory corruption, etc.), which would then need to be traced back to the original Haskell code that caused them. So, this is the first part of the question: "If you have had experience with Atom, how did you deal with the task of debugging low-level bugs in the compiled C code and fixing them in the original Haskell code?"

I searched for some more examples of Atom; this blog post mentions the resulting C code coming to 22 KLOC (and obviously includes no code; the included example is a toy). This and this reference have a bit more practical code, but this is where it ends. And the reason I put "sizable" in the subject is that I'm most interested in whether you might share your experience of working with generated C code in the range of 300 KLOC+.

As I am a Haskell newbie, there may obviously be other ways that I did not find due to my unknown unknowns, so any other pointers for self-education in this area would be greatly appreciated. And this is the second part of the question: "What would be some other practical methods (if any) of doing real-time development in Haskell?" If multicore is also in the picture, that's an extra plus. :-)

(About the usage of Haskell itself for this purpose: from what I read in this blog post, garbage collection and laziness make Haskell rather nondeterministic scheduling-wise, but maybe in two years something has changed. The Real World Haskell programming question on SO was the closest I could find to this topic.)

Note: "real-time" above would be closer to "hard real-time". I'm curious whether it is possible to ensure that the pause time, when the main task is not executing, stays under 0.5 ms.


Source: (StackOverflow)