RPC interview questions
Top RPC frequently asked interview questions
I cloned a Git repo that I have hosted on GitHub to my laptop. I was able to successfully push a couple of commits to GitHub without problems. However, now I get the following error:
Compressing objects: 100% (792/792), done.
error: RPC failed; result=22, HTTP code = 411
Writing objects: 100% (1148/1148), 18.79 MiB | 13.81 MiB/s, done.
Total 1148 (delta 356), reused 944 (delta 214)
From here it just hangs and I finally have to ^C back to the terminal.
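One workaround I have seen suggested, since HTTP 411 is "Length Required" and the server may be rejecting Git's chunked transfer encoding, is to raise Git's HTTP post buffer so pushes below that size are sent with a Content-Length header (the 500 MB value is arbitrary):
git config http.postBuffer 524288000
Is that the right fix here, or is something else going on?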
Source: (StackOverflow)
Is there any clear definition of RPC and of Web Service? A quick Wikipedia search shows:
RPC: Remote procedure call (RPC) is an inter-process communication technology that allows a computer program to cause a subroutine or procedure to execute in another address space (commonly on another computer on a shared network) without the programmer explicitly coding the details for this remote interaction.
Web Service: Web services are typically application programming interfaces (API) or web APIs that are accessed via Hypertext Transfer Protocol and executed on a remote system hosting the requested services. Web services tend to fall into one of two camps: Big Web Services[1] and RESTful Web Services.
I am not quite clear on the real difference between the two. It seems that one thing could belong to RPC and be a kind of web service at the same time.
Is a web service a higher-level representation of RPC?
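To make the overlap concrete, here is a toy sketch using the Python 3 standard library; the service below is RPC by the first definition and, since the call travels over HTTP, a web service by the second (the function name and port are arbitrary):

from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):
    return a + b

# the "remote procedure": clients call add() as if it were local
server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(add, "add")
server.serve_forever()

# client side:
#   from xmlrpc.client import ServerProxy
#   print(ServerProxy("http://localhost:8000/").add(2, 3))   # -> 5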
Source: (StackOverflow)
I have a large existing codebase written in 100% Java, but I would like to use Python for some new sections of it. I need to do some text and language processing, and I'd much rather use Python and a library like NLTK to do this.
I'm aware of the Jython project, but it looks like this represents a way to use Java and its libraries from within Python, rather than the other way round - am I wrong about this?
If not, what would be the best method to interface between Java and Python, such that (ideally) I can call a method in Python and have the result returned to Java?
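For concreteness, the kind of Python side I picture is a small long-running process that the Java code spawns and talks to over stdin/stdout, one JSON message per line; everything here (the line protocol, the tokenize stand-in for an NLTK call) is just a placeholder sketch:

import sys
import json

def tokenize(text):
    # stand-in for a real NLTK call such as nltk.word_tokenize(text)
    return text.split()

for line in sys.stdin:
    request = json.loads(line)
    reply = {"id": request["id"], "tokens": tokenize(request["text"])}
    sys.stdout.write(json.dumps(reply) + "\n")
    sys.stdout.flush()   # so the Java side sees the reply immediately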
Thank you.
Source: (StackOverflow)
I'm building native mobile applications for both iOS and Android. These apps require "realtime" updates from and to the server, just as any other network-based application does (Facebook, Twitter, social games like Words with Friends, etc.).
I think using HTTP long polling for this is overkill in the sense that long polling can be detrimental to battery life, especially with a lot of TCP setup/teardown. It might make sense to have the mobile applications use persistent TCP sockets to establish a connection to the server, and send RPC-style commands to the server for all web service communication. This, of course, would require a server to handle the long-lived TCP connection and be able to speak to a web service once it makes sense of the data passed down the TCP pipe. I'm thinking of passing data in plain text using JSON or XML.
Perhaps an Erlang-based RPC server would do well for a network-based application like this. It would allow the mobile apps to send and receive data from the server all over one connection, without the repeated setup/teardown that individual HTTP requests would incur using something like NSURLConnection on iOS. Since no web browser is involved, we don't need to deal with the nuances of HTTP at the mobile client level. A lot of these "COMET" and long-polling/streaming servers are built with HTTP in mind. I'm thinking just using a plain-text protocol over TCP is good enough, will make the client more responsive, allow for receiving updates from the server, and preserve battery life better than the traditional long polling and streaming models.
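For example, I picture framing each message with a length prefix so the stream can be parsed unambiguously. A simplified Python sketch of what I mean (the message shape and names are arbitrary):

import json
import struct

def send_msg(sock, obj):
    payload = json.dumps(obj).encode("utf-8")
    # 4-byte big-endian length prefix, then the JSON body
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock, n):
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        data += chunk
    return data

def recv_msg(sock):
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return json.loads(recv_exact(sock, length).decode("utf-8"))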
Does anyone currently do this with their native iOS or Android app? Did you write your own server or is there something open sourced out there that I can begin working with today instead of reinventing the wheel? Is there any reason why using just a TCP based RPC service is a worse decision than using HTTP?
I also looked into HTTP pipelining, but it doesn't look to be worth the trouble when it comes to implementing it on the clients. Also, I'm not sure it would allow for bi-directional communication on the client<->server channel.
Any insight would be greatly appreciated.
Source: (StackOverflow)
I started using ZeroMQ this week, and when using the Request-Response pattern I am not sure how to have a worker safely "hang up" and close its socket without possibly dropping a message and causing the customer who sent that message to never get a response. Imagine a worker written in Python who looks something like this:
import zmq
c = zmq.Context()
s = c.socket(zmq.REP)
s.connect('tcp://127.0.0.1:9999')
for i in range(8):   # handle eight requests, then quit
    s.recv()
    s.send('reply')
s.close()
I have been doing experiments and have found that a customer at 127.0.0.1:9999 of socket type zmq.REQ who makes a fair-queued request just might have the misfortune of having the fair-queuing algorithm choose the above worker right after the worker has done its last send() but before it runs the following close() method. In that case, it seems that the request is received and buffered by the ØMQ stack in the worker process, and that the request is then lost when close() throws out everything associated with the socket.
How can a worker detach "safely"? Is there any way to signal "I don't want messages anymore", then (a) loop over any final messages that have arrived during transmission of the signal, (b) generate their replies, and then (c) execute close() with the guarantee that no messages are being thrown away?
Edit: I suppose the raw state that I would want to enter is a "half-closed" state, in which no further requests could be received (and the sender would know that), but in which the return path is still open, so that I can check my incoming buffer for one last arrived message and respond to it if there is one sitting in the buffer.
Edit: In response to a good question, corrected the description to make the number of waiting messages plural, as there could be many connections waiting on replies.
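The closest I have come so far is to drain the socket with a poller before closing, continuing the sketch above; but this is only a timeout heuristic, not the guarantee I am after:

import zmq

# ... same REP socket `s` as above, after the main loop finishes:
poller = zmq.Poller()
poller.register(s, zmq.POLLIN)
# reply to anything already queued by the 0MQ stack in this process;
# poll() returns an empty (falsy) list once nothing arrives in time
while poller.poll(timeout=100):
    s.recv()
    s.send('reply')
s.close()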
Source: (StackOverflow)
I'm creating a Java application that requires master-slave communication between JVMs, possibly residing on the same physical machine. There will be a "master" server running inside a Java EE application server (e.g. JBoss) that will have "slave" clients connect to it and dynamically register themselves for communication (that is, the master will not know the IP addresses/ports of the slaves, so they cannot be configured in advance). The master server acts as a controller that doles work out to the slaves, and the slaves periodically respond with notifications, so there would be bi-directional communication.
I was originally thinking of RPC-based systems where each side would be a server, but it could get complicated, so I'd prefer a mechanism where there's an open socket and they talk back and forth.
I'm looking for a communication mechanism that would be low-latency where the messages would be mostly primitive types, so no serious serialization is necessary. Here's what I've looked at:
- RMI
- JMS: Built-in to Java, the "slave" clients would connect to the existing ConnectionFactory in the application server.
- JAX-WS/RS: Both master and slave would be servers exposing an RPC interface for bi-directional communication.
- JGroups/Hazelcast: Use shared distributed data structures to facilitate communication.
- Memcached/MongoDB: Use these as "queues" to facilitate communication, though the clients would have to poll so there would be some latency.
- Thrift: This does seem to keep a persistent connection, but I'm not sure how to integrate/embed a Thrift server into JBoss.
- WebSocket/Raw Socket: This would work, but require a lot more custom code than I'd like.
Is there any technology I'm missing?
Edit: Also looked at:
- JMX: Have the client connect to JBoss' JMX server and receive JMX notifications for bidirectional comms.
Source: (StackOverflow)
I need a job queue manager that I can control over the Internet. It should be able to execute and stop processes, check on their status (ideally notice and execute some code when a process exits), respond to commands and also be able to report back to a server.
Background: I have a GWT application that allows users to create jobs to execute on a cloud instance (currently EC2). I want to push a "job packet" (data for a process to operate on, etc.) to S3, start a Linux EC2 instance (or use one that's already running), and tell a job manager on the instance to execute that job (possibly in parallel to other jobs). It should then pull the "job packet" from S3, run a process that operates on that data, and report back to the server that is running the server part of my GWT application with some information (e.g. exit code, stdout, stderr). If I have to write e.g. stdout/err to a file from the process and read that file, that's OK too.
I would really like the manager to be "close" to the processes it runs, meaning I want to avoid using something like Runtime.exec from the JDK. It seems like I would have to do that if I used Quartz for example.
I'm fine with the calls in both directions being asynchronous. I'm fine with any reasonable technology for the calls as long as I can easily build an interface for that in my GWT server side (e.g. HTTP requests to a servlet over SSL would be nice and trivial).
The job manager does not need to have a very sophisticated queueing system. Running several processes either sequentially or in parallel should be fine. Determining how much compute time a process received during its lifetime would be nice (AFAIK, this might be challenging).
I have not yet found any existing software that does this, including http://java-source.net/open-source/job-schedulers. I suspect I might have to build an RPC interface (with authentication etc., of course) around a job manager; maybe use something like Apache Commons Exec. In that case, I would prefer Java or Python for the job manager part.
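In Python, I imagine the core of the job manager being little more than the following; the reporting step and the dict shape are placeholders:

import subprocess

def run_job(command):
    # command is an argv list, e.g. ['./process_job', 'packet.dat']
    proc = subprocess.Popen(command,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    stdout, stderr = proc.communicate()
    # this is what I would POST back to my GWT server side over SSL
    return {"exit_code": proc.returncode,
            "stdout": stdout.decode(),
            "stderr": stderr.decode()}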
I would be happy to hear suggestions for either the former or latter scenario!
Source: (StackOverflow)
I have had the opportunity to spend a great number of hours trying to use WCF in Mono. It is simply too poorly implemented at this point to be put into a production environment for anything beyond toy applications. It does not survive a 24/7 load.
I do currently have WCF on Mono running in a production environment, but I need to move away from it, at least in the near term, to bring stability to my software. Currently I'm surviving by restarting processes every few hours, and often that is not enough.
I'm looking for potential alternatives. All of my communicating entities are .net based, with some being Mono on Linux and others being ms.net on Windows Server. I'm very tempted to roll my own RPC layer with protobuf-net as the serialization layer, but I'd prefer not to do this. The big plus with protobuf-net is that it has good C++ support, which is something that I value.
Has anyone out there achieved stability with RPC on Mono? If so, what did you do?
Update: I did not mention that I'm looking for stateful duplex messaging. This is an important piece of information. I'm not wedded to it, but I very much want it. WCF provides this with net-tcp duplex channels.
Source: (StackOverflow)
I would like to develop a web-app requiring data persistence using GWT and GAE. As I understand it, my only (or at least by far the most convenient) option for data persistence is GAE's Datastore, using JDO or JPA annotated objects. I would also like to be able to send my objects back and forth client-server using GWT Remote Procedure Calls (RPC), therefore my objects must be able to "detach". However, GWT RPC serialization cannot handle detached JDO/JPA objects and it doesn't appear as though it will in the near future.
My question: what is the simplest and most direct solution to this? Being able to share the same objects client/server with server-side persistence would be extremely convenient.
EDIT
I should clarify that I still wish to use GWT RPC with GAE's Datastore. I am just looking for the best solution that would allow all these technologies to work together.
Source: (StackOverflow)
I'm sure there's some ancient legacy reason for it, but what is it? It seems like a service that's geared towards reliable data delivery.
Source: (StackOverflow)
While working on a software protection library for smart card based dongle I realized I need to transfer some tree-like data structures back and forth between client application and code inside the dongle.
Well, when working with web services, technologies like XML-RPC or JSON-RPC are reasonable options to consider. However, that is not the case with embedded devices like smart cards: you need to use a binary format to optimize memory usage and achieve good performance.
I guess what I need is to implement some binary data marshalling algorithm. I don't like the idea of reinventing the wheel, and I'm pretty sure there are great books, articles, and examples on marshalling issues like these.
What would you recommend?
UPD. I am using C and C++ on Linux, but the question is about info on marshalling algorithms in general.
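To make it concrete, here is the kind of scheme I mean, sketched in Python for brevity even though my target is C/C++; the node shape (a string value plus a list of children) is just an assumption for illustration:

import struct

def marshal(node):
    # per node: 2-byte value length, 1-byte child count, value bytes,
    # then the children, depth-first
    value = node["value"].encode("utf-8")
    out = struct.pack(">HB", len(value), len(node["children"])) + value
    for child in node["children"]:
        out += marshal(child)
    return out

def unmarshal(buf, offset=0):
    length, nchildren = struct.unpack_from(">HB", buf, offset)
    offset += 3
    value = buf[offset:offset + length].decode("utf-8")
    offset += length
    children = []
    for _ in range(nchildren):
        child, offset = unmarshal(buf, offset)
        children.append(child)
    return {"value": value, "children": children}, offset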
Source: (StackOverflow)
Can anyone recommend some simple code to set up a simple JSON-RPC client and server using Twisted?
I found txJSON-RPC, but I was wondering if someone had experience using some of these and could recommend something.
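From the txJSON-RPC examples, the server side appears to be roughly this; I have not run it, so treat it as a sketch:

from twisted.web import server
from twisted.internet import reactor
from txjsonrpc.web import jsonrpc

class Math(jsonrpc.JSONRPC):
    def jsonrpc_add(self, a, b):
        # methods prefixed with jsonrpc_ are exposed to clients
        return a + b

reactor.listenTCP(7080, server.Site(Math()))
reactor.run()

# client side, per the same examples:
#   from txjsonrpc.web.jsonrpc import Proxy
#   proxy = Proxy('http://127.0.0.1:7080/')
#   proxy.callRemote('add', 3, 5).addCallback(print)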
Source: (StackOverflow)
I have a web application that uses AJAX to grab JSON data from the server. It requires that the user first log in with their browser so that a cookie can be set. Only the GET and POST verbs are used, where GET is for retrieving data and POST is for any operation that modifies data.
From what I understand, REST differs from the above method in that the user authentication information is sent with every request and the PUT and DELETE verbs are used as well.
My question is, what benefits does a REST web service have over the RPC-like method, if the end point is only meant to be a user's browser? I can understand how REST is beneficial when the client is unknown, but when I'm only using jQuery ajax calls, are the benefits still worth it over an RPC-like method?
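For contrast, here is how I understand the same operation would look in the two styles (paths are hypothetical):

POST /api/deleteUser   (RPC-like: the verb lives in the method name, body {"id": 42})
DELETE /users/42       (REST: the verb is the HTTP method, the URL names the resource)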
Source: (StackOverflow)
I'm reading on the Google App Engine groups that many users (Fig1, Fig2, Fig3) can't figure out where the high number of Datastore reads in their billing reports comes from.
As you might know, Datastore reads are capped at 50K operations/day; above this budget you have to pay.
50K operations sounds like a lot of resources, but unfortunately, it seems that each operation (Query, Entity fetch, Count...) hides several Datastore reads.
Is it possible to know, via an API or some other approach, how many Datastore reads are hidden behind the common RPC.get and RPC.runquery calls?
Appstats seems useless in this case because it gives just the RPC details and not the cost of the hidden reads.
Having a simple Model like this:
class Example(db.Model):
    foo = db.StringProperty()
    bars = db.ListProperty(str)
and 1000 entities in the datastore, I'm interested in the cost of these kinds of operations:
items_count = Example.all(keys_only=True).filter('bars =', 'spam').count()
items_count = Example.all().count(10000)
items = Example.all().fetch(10000)
items = Example.all().filter('bars =', 'spam').filter('bars =', 'fu').fetch(10000)
items = Example.all().fetch(10000, offset=500)
items = Example.all().filter('foo >=', filtr).filter('foo <', filtr + u'\ufffd')
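For reference, my working assumption (from the billing documentation, though I am not certain of it) is that a regular query costs 1 read plus 1 read per entity returned, while keys-only queries and count() cost 1 read plus 1 small operation per key. On the 1000 entities above, that would make Example.all().fetch(10000) roughly 1 + 1000 reads, and the keys-only count() roughly 1 read plus 1000 small operations. I would like to confirm whether that is right, and how offset changes it.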
Source: (StackOverflow)