Scheme interview questions
Top frequently asked Scheme interview questions
What are the popular (ok, popular is relative) web frameworks for the various flavours of LISP?
Source: (StackOverflow)
I do most of my development in Common Lisp, but there are some moments when I want to switch to Scheme (while reading Lisp in Small Pieces, when I want to play with continuations, or when I want to do some scripting in Gauche, for example). In such situations, my main source of discomfort is that I don't have Slime (yes, you may call me an addict).
What is Scheme's closest counterpart to Slime? Specifically, I am most interested in:
- Emacs integration (this point is obvious ;))
- Decent tab completion (ideally, c-w-c-c TAB should expand to call-with-current-continuation). It may even be symbol-table based (i.e. it doesn't have to notice a function I defined in a let right away).
- Function argument hints in the minibuffer: if I have typed (map |) (cursor position indicated by |), I'd like to see (map predicate . lists) in the minibuffer.
- Sending forms to the interpreter
- Integration with a debugger.
I have ordered the features by descending importance.
My Scheme implementations of choice are:
- MzScheme
- Ikarus
- Gauche
- Bigloo
- Chicken
It would be great if it worked at least with them.
Source: (StackOverflow)
I have been working through The Little Schemer to learn Scheme, using PLT Scheme as my environment.
The Little Schemer has helped me tremendously with recursion (it is straightforward for me now), but I'm stuck on the portion of the book that introduces "collectors" and calls such a function, taken as a whole, a continuation.
Here is the example code they use. I understand the recursive elements, but I am stuck on the lambda functions in particular: my mind can't follow how the arguments to each lambda get set, since the only thing that ever calls them is another recursive call, and there is no concrete use within the function body.
If someone could more or less give me a breakdown of the path of computation through the recursion of the function into the lambda collectors, that would help me.
;; atom?, as defined earlier in The Little Schemer:
(define (atom? x)
  (and (not (pair? x)) (not (null? x))))

;; Build a nested list of even numbers by removing the odd ones from its
;; argument and simultaneously multiply the even numbers and sum the odd
;; numbers that occur in its argument.
(define (even-only-collector l col)
  (cond
    ((null? l)
     (col (quote ()) 1 0))
    ((atom? (car l))
     (cond
       ((even? (car l))
        (even-only-collector (cdr l)
                             (lambda (newl p s)
                               (col (cons (car l) newl)
                                    (* (car l) p) s))))
       (else
        (even-only-collector (cdr l)
                             (lambda (newl p s)
                               (col newl
                                    p (+ (car l) s)))))))
    (else
     (even-only-collector (car l)
                          (lambda (al ap as)
                            (even-only-collector (cdr l)
                                                 (lambda (dl dp ds)
                                                   (col (cons al dl)
                                                        (* ap dp)
                                                        (+ as ds)))))))))

;; The collector function
(define (collector newl product sum)
  (cons sum
        (cons product newl)))
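The collector pattern is easier to see in miniature. The following stripped-down example (names like sum&count are illustrative, not from the book) uses the same trick: each recursive call wraps the old collector in a new lambda, and the base case kicks off the whole chain of calls:

```scheme
;; A minimal collector (continuation-passing) example: walk a flat list,
;; accumulating its sum and length, and hand both to col at the end.
(define (sum&count l col)
  (cond ((null? l) (col 0 0))
        (else (sum&count (cdr l)
                         ;; Wrap the old collector: when the inner call
                         ;; reports (s n), add this element before passing up.
                         (lambda (s n)
                           (col (+ (car l) s) (+ n 1)))))))

;; The base collector just pairs up the two results.
(display (sum&count '(1 2 3) (lambda (s n) (cons s n)))) ; prints (6 . 3)
(newline)
```

Unwinding the call by hand: the base case calls the innermost lambda with (0 0), which calls the next one out with (3 1), then (5 2), and finally the original collector with (6 3). even-only-collector works the same way, just with three accumulators and a tree walk.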
Thank you in advance!!
Source: (StackOverflow)
I have experimented with Lisp (actually Scheme) and found it to be a very beautiful language that I am interested in learning more about. However, it appears that Lisp is never used in serious projects, and I haven't seen it listed as a desired skill on any job posting. I am interested in hearing from anyone who has used Lisp or seen it used in the "real world", or who knows whether it is considered a purely academic language.
Source: (StackOverflow)
I'm used to lazy evaluation from Haskell and find myself getting irritated with eager-by-default languages now that I've used lazy evaluation properly. This is actually quite damaging, as the other languages I use make lazy evaluation very awkward, normally involving rolling out custom iterators and so forth. So just by acquiring some knowledge, I've actually made myself less productive in my original languages. Sigh.
But I hear that AST macros offer another clean way of doing the same thing. I've often heard statements like 'Lazy evaluation makes macros redundant' and vice-versa, mostly from sparring Lisp and Haskell communities.
I've dabbled with macros in various Lisp variants. They just seemed like a really organized way of copying and pasting chunks of code around to be handled at compile time. They certainly weren't the holy grail that Lispers like to think they are. But that's almost certainly because I can't use them properly. Of course, having the macro system work on the same core data structure that the language itself is assembled from is really useful, and I acknowledge that basing a macro system on the language's own AST, with full compile-time alteration, is powerful; but to me it still seemed basically an organized way of copying and pasting code around.
What I want to know is: how can macros concisely and succinctly do what lazy evaluation does? If I want to process a file line by line without slurping up the whole thing, I just return a list with a line-reading routine mapped over it. It's the perfect example of DWIM (do what I mean); I don't even have to think about it.
I clearly don't get macros. I've used them and not been particularly impressed, given the hype. So there's something I'm missing that I'm not getting from reading the documentation online. Can someone explain all of this to me?
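To make the macros-vs-laziness comparison concrete, here is one standard illustration (a sketch using syntax-rules; my-delay and my-force are hypothetical names for what R5RS provides as delay and force): a macro can withhold evaluation of its argument by wrapping it in a thunk, something an ordinary function cannot do because its arguments are evaluated before the call.

```scheme
;; A sketch of building laziness with a macro: my-delay wraps its argument
;; in a memoized thunk, so the expression is not evaluated until forced.
(define-syntax my-delay
  (syntax-rules ()
    ((_ expr)
     (let ((run? #f) (val #f))
       (lambda ()
         (cond (run? val)
               (else (set! val expr)   ; expr evaluated here, on first force
                     (set! run? #t)
                     val)))))))

(define (my-force promise) (promise))

(define p (my-delay (begin (display "evaluating!") (newline) 42)))
;; Nothing has printed yet; the first force evaluates, the second reuses val.
(my-force p) ; prints evaluating! and returns 42
(my-force p) ; returns 42 without printing
```

This is the sense in which macros can subsume laziness: instead of the language deferring every expression by default, a macro defers exactly the expressions you ask it to.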
Source: (StackOverflow)
I wonder what the difference is between these operations. I have seen similar questions on Stack Overflow, but they are about Lisp, and none of them compares all three of these operators. So if this has been asked already, please let me know.
I am writing the different kinds of comparisons in Scheme, and I get the following outputs:
(eq? 5 5) --> #t
(eq? 2.5 2.5) --> #f
(equal? 2.5 2.5) --> #t
(= 2.5 2.5) --> #t
Can someone explain why this is the case?
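For context, a sketch of how the predicates differ (note that the standard leaves eq? on numbers unspecified, which is why (eq? 2.5 2.5) can legitimately return #f in one implementation and #t in another):

```scheme
;; eq?    : pointer-level identity; unspecified on numbers and characters,
;;          so (eq? 2.5 2.5) may be #t or #f depending on the implementation.
;; eqv?   : like eq?, but well-defined on numbers of the same exactness.
;; equal? : recursive structural comparison (also works on lists, strings).
;; =      : numeric comparison only; it is an error on non-numbers.
(eq? 'a 'a)              ; #t  (symbols are interned)
(eqv? 2.5 2.5)           ; #t
(equal? '(1 2) '(1 2))   ; #t  (same structure)
(eq? '(1 2) '(1 2))      ; typically #f (two distinct pairs in memory)
(= 2 2.0)                ; #t  (numeric equality ignores exactness)
(equal? 2 2.0)           ; #f  (equal? on numbers requires same exactness)
```

In short: small integers happen to be eq? in most implementations, flonums usually are not, so for numbers use = (or eqv?), and use equal? for structural comparison.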
Source: (StackOverflow)
A phrase that I've noticed recently is the concept of "point free" style...
First, there was this question, and also this one.
Then, I discovered here they mention "Another topic that may be worth discussing is the authors' dislike of point free style."
What is "point free" style? Can someone give a concise explanation? Does it have something to do with "automatic" currying?
To get an idea of my level - I've been teaching myself Scheme, and have written a simple Scheme interpreter... I understand what "implicit" currying is, but I don't know any Haskell or ML.
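In Scheme terms, point-free (or "pointless") style means defining a function purely by combining other functions, never naming its argument (the "point"). A small sketch, with compose defined inline since R5RS does not provide it:

```scheme
;; compose is defined here because it is not part of R5RS.
(define (compose f g) (lambda (x) (f (g x))))

;; Pointed version: the argument lst is named explicitly.
(define (second-pointed lst) (car (cdr lst)))

;; Point-free version: the same function, with no argument variable in sight.
(define second-pointfree (compose car cdr))

(second-pointfree '(a b c)) ; => b
```

The connection to currying: in languages with automatic currying (Haskell, ML), partially applying a function is free, so dropping argument names is especially easy; in Scheme you get the same effect explicitly with combinators like compose.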
Source: (StackOverflow)
I like to study languages outside my comfort zone, but I've had a hard time finding a place to start for functional languages. I heard a lot of good things about Structure and Interpretation of Computer Programs, but when I tried to read through it a couple of years ago it just seemed to whiz over my head. I do way better with books than web sites, but when I visit the local book store the books on LISP look kind of scary.
So what's a good starting point? My goal is to be able to use a functional programming language to solve simple problems in 6 months or so, and the ability to move to more advanced topics, recognize when a functional language is the right tool for the job, and use the language to solve more problems over the course of 2-3 years. I like books that are heavy on examples but also include challenges to work through. Does such a thing exist for functional languages?
Source: (StackOverflow)
On this site they say there are 10 LISP primitives.
The primitives are: atom, quote, eq, car, cdr, cons, cond, lambda, label, apply.
http://hyperpolyglot.wikidot.com/lisp#ten-primitives
Stevey reckons there are seven (or five):
It's part of the purity of the idea of LISP: you only need the seven (or is it five?) primitives to build the full machine.
http://steve-yegge.blogspot.com/2006/04/lisp-is-not-acceptable-lisp.html
What is the minimum number of primitives needed to build a LISP machine (i.e. something that can run an eval/value function on LISP code)? And which ones are they?
(I can understand that you could live without atom, label and apply.)
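One way to make the question concrete is a toy evaluator. The sketch below (m-eval, m-apply and the environment layout are illustrative choices, not a canonical answer) gets by with quote, cond and lambda as special forms plus host-supplied car, cdr, cons, eq? and atom?; label can be simulated by extending the environment, and apply falls out of the application case:

```scheme
;; A toy eval/apply over the classic primitives. Environments are
;; association lists of (name value); primitives are host procedures.
(define (atom? x) (not (pair? x)))

(define (m-eval e env)
  (cond ((symbol? e) (cadr (assq e env)))          ; variable lookup
        ((atom? e) e)                              ; self-evaluating
        ((eq? (car e) 'quote) (cadr e))
        ((eq? (car e) 'cond) (eval-cond (cdr e) env))
        ((eq? (car e) 'lambda) (list 'closure e env))
        (else (m-apply (m-eval (car e) env)        ; application
                       (map (lambda (x) (m-eval x env)) (cdr e))))))

(define (eval-cond clauses env)
  (cond ((null? clauses) '())
        ((m-eval (caar clauses) env) (m-eval (cadar clauses) env))
        (else (eval-cond (cdr clauses) env))))

(define (m-apply f args)
  (cond ((procedure? f) (apply f args))            ; host-provided primitives
        ((eq? (car f) 'closure)
         (let ((params (cadr (cadr f)))
               (body   (caddr (cadr f)))
               (env    (caddr f)))
           (m-eval body (append (map list params args) env))))))

;; The primitive "machine": pairs, identity, and the atom test.
(define global
  (list (list 'car car) (list 'cdr cdr) (list 'cons cons)
        (list 'eq? eq?) (list 'atom? atom?)))

(m-eval '((lambda (x) (cons x (quote ()))) (quote a)) global) ; => (a)
```

Counting special forms and primitives in a sketch like this is one way to arrive at the "seven (or five)" figures people quote; the exact number depends on what you push into the host environment versus the evaluator itself.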
Source: (StackOverflow)
How useful is the feature of having an atom data type in a programming language?
A few programming languages have the concept of atom or symbol to represent a constant of sorts. There are a few differences among the languages I have come across (Lisp, Ruby and Erlang), but it seems to me that the general concept is the same. I am interested in programming language design, and I was wondering what value does having an atom type provide in real life. Other languages such as Python, Java, C# seem to be doing quite well without it.
I have no real experience of Lisp or Ruby (I know the syntaxes, but haven't used either in a real project). I have used Erlang enough to be used to the concept there.
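In Scheme the atom role is played by symbols, and the practical value is concrete: symbols are interned, so comparing two of them is a constant-time identity check rather than a character-by-character string comparison, and they read back as themselves. A small sketch:

```scheme
;; Symbols act as interned constants: comparison is a fast identity
;; check (eq?), unlike strings, which need character-wise comparison.
(define state 'connected)

(eq? state 'connected)              ; #t, constant-time
(equal? "connected" "connected")    ; #t, but compares every character

;; Typical use: tagged data without declaring an enum type first.
(define (describe s)
  (cond ((eq? s 'connected) "link up")
        ((eq? s 'closed)    "link down")
        (else               "unknown")))

(describe 'closed) ; => "link down"
```

Languages without a symbol type typically reach for string constants, integer enums, or interned strings to get the same effect; the atom type just makes that idiom first-class.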
Source: (StackOverflow)
To begin, not only are there two main dialects of the language (Common Lisp and Scheme), but each of the dialects has many individual implementations. For example, Chicken Scheme, Bigloo, etc... each with slight differences.
From a modern point of view this is strange, as languages these days tend to have definitive implementations/specs. Think Java, C#, Python, Ruby, etc, where each has a single definitive site you can go to for API docs, downloads, and such. Of course Lisp predates all of these languages. But then again, even C/C++ are standardized (more or less).
Is the fragmentation of this community due to the age of Lisp? Or perhaps different implementations/dialects are intended to solve different problems? I understand there are good reasons why Lisp will never be as united as languages that have grown up around a single definitive implementation, but at this point is there any good reason why the Lisp community should not move in this direction?
Source: (StackOverflow)
I am very interested in macros and am just beginning to understand their true power. Please help me collect some great uses of macro systems.
So far I have these constructs:
Pattern Matching:
- Andrew Wright and Bruce Duba. Pattern matching for Scheme, 1995.
Relations in the spirit of Prolog:
- Dorai Sitaram. Programming in Schelog. http://www.ccs.neu.edu/home/dorai/schelog/schelog.html
- Daniel P. Friedman, William E. Byrd, and Oleg Kiselyov. The Reasoned Schemer. The MIT Press, July 2005.
- Matthias Felleisen. Transliterating Prolog into Scheme. Technical Report 182, Indiana University, 1985.
Extensible Looping Constructs:
- Sebastian Egner. Eager comprehensions in Scheme: The design of SRFI-42. In Workshop on Scheme and Functional Programming, pages 13–26, September 2005.
- Olin Shivers. The anatomy of a loop: a story of scope and control. In International Conference on Functional Programming, pages 2–14, 2005.
Class Systems:
- PLT. PLT MzLib: Libraries manual. Technical Report PLT-TR2006-4-v352, PLT Scheme Inc., 2006. http://www.plt-scheme.org/techreports/
- Eli Barzilay. Swindle. http://www.barzilay.org/Swindle
Component Systems:
- Ryan Culpepper, Scott Owens, and Matthew Flatt. Syntactic abstraction in component interfaces. In International Conference on Generative Programming and Component Engineering, pages 373–388, 2005.
Software Contract Checking:
- Matthew Flatt and Matthias Felleisen. Units: Cool modules for HOT languages. In ACM SIGPLAN Conference on Programming Language Design and Implementation, pages 236–248, 1998.
- Oscar Waddell and R. Kent Dybvig. Extending the scope of syntactic abstraction. In Symposium on Principles of Programming Languages, pages 203–215, 1999.
Parser Generators:
- Scott Owens, Matthew Flatt, Olin Shivers, and Benjamin McMullan. Lexer and parser generators in Scheme. In Workshop on Scheme and Functional Programming, pages 41–52, September 2004.
Tools for Engineering Semantics:
- Matthias Felleisen, Robert Bruce Findler, and Matthew Flatt. Semantics Engineering with PLT Redex. MIT Press, August 2009.
Specifications of Compiler Transformations:
- Dipanwita Sarkar, Oscar Waddell, and R. Kent Dybvig. A nanopass framework for compiler education. Journal of Functional Programming, 15(5):653–667, September 2005. Educational Pearl.
Novel Forms of Execution (servlets with serializable continuations):
- Greg Pettyjohn, John Clements, Joe Marshall, Shriram Krishnamurthi, and Matthias Felleisen. Continuations from generalized stack inspection. In International Conference on Functional Programming, pages 216–227, 2005.
Theorem-Proving System:
- Sebastian Egner. Eager comprehensions in Scheme: The design of SRFI-42. In Workshop on Scheme and Functional Programming, pages 13–26, September 2005.
Extensions of the Base Language with Types:
- Sam Tobin-Hochstadt and Matthias Felleisen. The design and implementation of Typed Scheme. In Symposium on Principles of Programming Languages, pages 395–406, 2008.
Laziness:
- Eli Barzilay and John Clements. Laziness without all the hard work: combining lazy and strict languages for teaching. In Functional and Declarative Programming in Education, pages 9–13, 2005.
Functional Reactivity:
- Gregory H. Cooper and Shriram Krishnamurthi. Embedding dynamic dataflow in a call-by-value language. In European Symposium on Programming, 2006.
Reference:
Collected from Ryan Culpepper's dissertation.
Source: (StackOverflow)
I know they are dialects of the same family of languages called Lisp, but what exactly are the differences? Could you give an overview, if possible, covering topics such as syntax, characteristics, features, and resources?
Source: (StackOverflow)
The intent of my question is not to start a flame war, but rather to determine in what circumstances each language is "the best tool for the job."
I have read several books on Clojure (Programming Clojure, Practical Clojure, The Joy of Clojure, and the Manning Early Access edition of Clojure in Action), and I think it is a fantastic language. I am currently reading Let Over Lambda, which mostly deals with Common Lisp macros, and Common Lisp, too, is a very interesting language.
I am not a Lisp expert (more of a newbie), but this family of languages fascinates me, as does functional programming, in general.
Advantages of Clojure (and disadvantages of "others"):
Runs on the JVM.
The JVM is a very stable, high-performance language environment that pretty well meets Sun's dream of "Write once, run [almost] anywhere". I can write code on my Macbook Pro, compile it into an executable JAR file, and then run it on Linux and Microsoft Windows with little additional testing.
The JVM (HotSpot and others) supports high-quality garbage collection and very fast just-in-time compilation and optimization. Where just a few years ago I wrote everything that had to run fast in C, now I do not hesitate to do so in Java.
Standard, simple, multithreading model. Does Common Lisp have a standard multithreading package?
Breaks up the monotony of all those parentheses with [], {}, and #{}, although Common Lisp experts will probably tell me that with reader macros you can add those to CL.
Disadvantages of Clojure:
- Runs on the JVM.
- No tail-call optimization or first-class continuations. Does Common Lisp support continuations? Scheme requires support for both, I believe.
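For reference, the continuations mentioned above are first-class and required by the Scheme standard; standard Common Lisp has neither them nor guaranteed tail-call elimination, though it does offer the more limited block/return-from and catch/throw escapes. A small escape-continuation sketch:

```scheme
;; call/cc captures "the rest of the computation" as a procedure k;
;; calling k aborts the current work and returns its argument directly.
(define (first-even lst)
  (call-with-current-continuation
    (lambda (k)
      (for-each (lambda (x)
                  (if (even? x) (k x)))   ; escape with the first even number
                lst)
      #f)))                               ; fell through: none found

(first-even '(1 3 4 7)) ; => 4
(first-even '(1 3 5))   ; => #f
```

Escape uses like this are what CL's block/return-from can also express; full re-entrant continuations (resuming a captured k more than once) are what Scheme adds on top.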
Advantages of Others (Common Lisp, in particular) (and disadvantages of Clojure):
Thoughts? Other differences?
Source: (StackOverflow)