
http-request: Java HTTP Request Library

How to set the allowed URL length for an nginx request (error code: 414, URI too large)

I am using Nginx in front of 10 mongrels.

When I make a request whose size exceeds 2,900 characters, I get back an "error code 414: URI too large".

Does anyone know the setting in the nginx config file that determines the allowed URI length?
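
For reference, the nginx directive that governs this limit is large_client_header_buffers: the request line (which contains the URI) has to fit into a single one of these buffers, or nginx answers with a 414. A minimal sketch for nginx.conf, with illustrative values:

http {
    # 4 buffers of 16k each; the whole request line must fit in one buffer
    large_client_header_buffers 4 16k;
}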


Source: (StackOverflow)

Possible to specify async files in headers?

My page-load timeline always looks the same: index.html loads first, and only then does the browser request the other files. Is there a way for the response headers to tell the browser which files it should fetch next? Maybe something like:

<?php
// hypothetical headers telling the client which files to request next
header('fileGetRequest: /js/common.js');
header('fileGetRequest: /css/common.css');
?>
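
For what it's worth, a standardized mechanism along these lines does exist: a Link response header with rel=preload asks the browser to start fetching resources before it has parsed the body. A hedged sketch in PHP, reusing the paths from the question:

<?php
// Standard preload hints; passing false as the second argument appends
// rather than replaces, so both Link headers are sent.
header('Link: </js/common.js>; rel=preload; as=script');
header('Link: </css/common.css>; rel=preload; as=style', false);
?>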

Source: (StackOverflow)


How can you make redirect_to use a different HTTP method?

At the end of one of my controller actions I need to redirect to a page that only accepts PUT requests. I have been trying to figure out how to get redirect_to to issue a PUT request, but without success.

Is this possible? Or is there another way to accomplish this?
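
For context: a redirect response only tells the browser where to go next, and browsers follow a standard 302/303 redirect with a GET, so redirect_to cannot force a PUT. A hedged sketch of one common workaround is to render a page with a button that submits the PUT itself (target_path is a placeholder):

<%# ERB sketch: button_to generates a one-button form that submits via PUT %>
<%= button_to "Continue", target_path, :method => :put %>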


Source: (StackOverflow)

How to get form parameters in request filter

I'm trying to get the form parameters of a request in a request filter:

@Override
public ContainerRequest filter(final ContainerRequest request) {

    final Form formParameters = request.getFormParameters();

    //logic

    return request;
}

However, the form always seems to be empty. The HttpRequestContext.getFormParameters() documentation says:

Get the form parameters of the request entity.

This method will ensure that the request entity is buffered such that it may be consumed by the application.

Returns: the form parameters, if there is a request entity and the content type is "application/x-www-form-urlencoded", otherwise an instance containing no parameters will be returned.

My resource is annotated with @Consumes("application/x-www-form-urlencoded"), although it won't have been matched until after the request filter - is that why this isn't working?

I tried doing some research but couldn't find any conclusive evidence of whether this is possible. There was this 4-year-old discussion, in which Paul Sandoz says:

If you are working in Jersey filters or with the HttpRequestContext you can get the form parameters as follows: [broken link to Jersey 1.1.1 HttpRequestContext.getFormParameters]

I also found this 3-year-old discussion about how to get multipart/form-data form fields in a request filter. In it, Paul Sandoz uses the following code:

// Buffer
InputStream in = request.getEntityInputStream();
if (in.getClass() != ByteArrayInputStream.class) {
    // Buffer input
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    try {
        ReaderWriter.writeTo(in, baos);
    } catch (IOException ex) {
        throw new ContainerException(ex);
    }
    in = new ByteArrayInputStream(baos.toByteArray());
    request.setEntityInputStream(in);
}

// Read entity
FormDataMultiPart multiPart = request.getEntity(FormDataMultiPart.class);

I tried emulating that approach for Form instead, but the result of request.getEntityInputStream() is always an empty stream. And looking at the source of getFormParameters, that method is in fact doing the same thing already:

@Override
public Form getFormParameters() {
    if (MediaTypes.typeEquals(MediaType.APPLICATION_FORM_URLENCODED_TYPE, getMediaType())) {
        InputStream in = getEntityInputStream();
        if (in.getClass() != ByteArrayInputStream.class) {
            // Buffer input
            ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
            try {
                ReaderWriter.writeTo(in, byteArrayOutputStream);
            } catch (IOException e) {
                throw new IllegalArgumentException(e);
            }

            in = new ByteArrayInputStream(byteArrayOutputStream.toByteArray());
            setEntityInputStream(in);
        }

        ByteArrayInputStream byteArrayInputStream = (ByteArrayInputStream) in;
        Form f = getEntity(Form.class);
        byteArrayInputStream.reset();
        return f;
    } else {
        return new Form();
    }
}

I can't figure out what's slurping up the entity input stream before I get to it. Something in Jersey must be consuming it because the form params are later passed into the resource method. What am I doing wrong here, or is this impossible (and why)?

EDIT: Here's an example of a request being sent:

POST /test/post-stuff HTTP/1.1
Host: local.my.application.com:8443
Cache-Control: no-cache
Content-Type: application/x-www-form-urlencoded

form_param_1=foo&form_param_2=bar

Here's the (somewhat redundant) request logging:

INFO: 1 * Server in-bound request
1 > POST https://local.my.application.com:8443/test/post-stuff
1 > host: local.my.application.com:8443
1 > connection: keep-alive
1 > content-length: 33
1 > cache-control: no-cache
1 > origin: chrome-extension://fdmmgilgnpjigdojojpjoooidkmcomcm
1 > user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.95 Safari/537.36
1 > content-type: application/x-www-form-urlencoded
1 > accept: */*
1 > accept-encoding: gzip,deflate,sdch
1 > accept-language: en-US,en;q=0.8
1 > cookie: [omitted]
1 > 

Here are the response headers of that request, including the Jersey Trace:

Content-Type: application/json;charset=UTF-8
Date: Fri, 09 Aug 2013 18:00:17 GMT
Location: https://local.my.application.com:8443/test/post-stuff/
Server: Apache-Coyote/1.1
Transfer-Encoding: chunked
X-Jersey-Trace-000: accept root resource classes: "/post-stuff"
X-Jersey-Trace-001: match path "/post-stuff" -> "/post\-stuff(/.*)?", [...], "(/.*)?"
X-Jersey-Trace-002: accept right hand path java.util.regex.Matcher[pattern=/post\-stuff(/.*)? region=0,11 lastmatch=/post-stuff]: "/post-stuff" -> "/post-stuff" : ""
X-Jersey-Trace-003: accept resource: "post-stuff" -> @Path("/post-stuff") com.application.my.jersey.resource.TestResource@7612e9d2
X-Jersey-Trace-004: match path "" -> ""
X-Jersey-Trace-005: accept resource methods: "post-stuff", POST -> com.application.my.jersey.resource.TestResource@7612e9d2
X-Jersey-Trace-006: matched resource method: public javax.ws.rs.core.Response com.application.my.jersey.resource.TestResource.execute(java.lang.String,java.lang.String)
X-Jersey-Trace-007: matched message body reader: class com.sun.jersey.api.representation.Form, "application/x-www-form-urlencoded" -> com.sun.jersey.core.impl.provider.entity.FormProvider@b98df1f
X-Jersey-Trace-008: matched message body writer: java.lang.String@f62, "application/json" -> com.sun.jersey.core.impl.provider.entity.StringProvider@1c5ddffa

Here is the (unremarkable) servlet config:

<servlet>
    <servlet-name>jersey</servlet-name>
    <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
    <init-param>
        <param-name>com.sun.jersey.config.property.packages</param-name>
        <param-value>com.application.my.jersey</param-value>
    </init-param>
    <init-param>
        <param-name>com.sun.jersey.spi.container.ResourceFilters</param-name>
        <param-value>com.application.my.jersey.MyFilterFactory</param-value>
    </init-param>
    <init-param>
        <param-name>com.sun.jersey.config.feature.Trace</param-name>
        <param-value>true</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>

Here's the example resource:

@Path("/post-stuff")
@Produces(MediaType.APPLICATION_JSON)
public final class TestResource {

    @POST
    @Consumes(MediaType.APPLICATION_FORM_URLENCODED)
    public Response execute(
            @FormParam("form_param_1") final String formParam1,
            @FormParam("form_param_2") final String formParam2
    ) {
        return Response.created(URI.create("/")).entity("{}").build();
    }
}

I'm using Jersey 1.17.


For those interested, I'm trying to roll my own required parameter validation, as described in JERSEY-351. My solution here worked for query, cookie, and header params - form params are holding out on me.


Source: (StackOverflow)

Proxies with Python 'Requests' module

Just a short, simple one about the excellent Requests module for Python.

I can't seem to find in the documentation what the 'proxies' variable should contain. When I sent it a dict with a standard "IP:PORT" value, it was rejected, asking for 2 values. So I guess (because this doesn't seem to be covered in the docs) that the first value is the IP and the second the port?

The docs mention this only:

proxies – (optional) Dictionary mapping protocol to the URL of the proxy.

So I tried this... what should I be doing?

proxy = { ip: port}

and should I convert these to some type before putting them in the dict?

r = requests.get(url,headers=headers,proxies=proxy)
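
For reference, the format the documentation describes is a dict mapping the protocol scheme to the full URL of the proxy, for example (addresses are placeholders):

proxies = {
    'http': 'http://10.10.1.10:3128',   # scheme -> full proxy URL
    'https': 'http://10.10.1.10:1080',
}
r = requests.get(url, headers=headers, proxies=proxies)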

Source: (StackOverflow)

Sending multipart/mixed content with Postman Chrome extension

I'm struggling to create a POST multipart/mixed request with the Postman Chrome extension.

Here is my curl request, which works fine:

curl -H "Content-Type: multipart/mixed" \
  -F "metadata=@simple_json.json; type=application/json" \
  -F "content=@1.jpg; type=image/jpg" -X POST http://my/api/item -i -v

The interesting part of the response:

Content-Length: 41557

Expect: 100-continue

Content-Type: multipart/mixed; boundary=----------------------------8aaca457e117

* additional stuff not fine transfer.c:1037: 0 0
* HTTP 1.1 or later with persistent connection, pipelining supported

But when I use Postman (screenshot omitted), I get this response:

{"message":"Could not parse multipart servlet request;
 nested exception is org.apache.commons.fileupload.FileUploadException: 
 the request was rejected because no multipart boundary was     
 found","type":"error","status":500,"requestId":"1861eloo6fpio"}

That's it: I want to get rid of that error. If more information is needed, please ask. :)
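
For reference, the server's complaint is about the boundary parameter: a multipart Content-Type must declare the string that separates the individual parts, and curl generates one automatically (visible in the response excerpt above). A minimal sketch of the wire format, with an illustrative boundary:

POST /my/api/item HTTP/1.1
Content-Type: multipart/mixed; boundary=XYZ

--XYZ
Content-Type: application/json

{"some": "metadata"}
--XYZ
Content-Type: image/jpg

...binary image data...
--XYZ--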


Source: (StackOverflow)

How to send cookies in a post request with the Python Requests library?

I'm trying to use the Requests library to send cookies with a POST request, but I'm not sure how to actually set up the cookies based on its documentation. The script is for use on Wikipedia, and the cookie(s) that need to be sent are of this form:

enwiki_session=17ab96bd8ffbe8ca58a78657a918558e; path=/; domain=.wikipedia.com; HttpOnly

However, the requests documentation quickstart gives this as the only example:

cookies = dict(cookies_are='working')

How can I encode a cookie like the above using this library? Do I need to create it with Python's standard cookie library and then send it along with the POST request?
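
For a session cookie like the one above, only the name=value pair travels back to the server (path, domain, and HttpOnly are attributes the server sets, not things the client sends), so a plain dict works; url and payload below are placeholders:

cookies = {'enwiki_session': '17ab96bd8ffbe8ca58a78657a918558e'}
r = requests.post(url, data=payload, cookies=cookies)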


Source: (StackOverflow)

Using headers with the Python requests library's get method

So I recently stumbled upon this great library for handling HTTP requests in Python, found here: http://docs.python-requests.org/en/latest/index.html.

I love working with it, but I can't figure out how to add headers to my GET requests. Help?
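
For reference, get() takes a headers keyword argument, a plain dict mapping header name to value (the values below are placeholders):

import requests

r = requests.get('http://docs.python-requests.org',
                 headers={'User-Agent': 'my-app/0.0.1'})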


Source: (StackOverflow)

When should one use the CONNECT and GET HTTP methods with an HTTP proxy server?

I'm building a WebClient library. I'm now implementing a proxy feature, and while doing some research I saw some code use the CONNECT method to request a URL.

But when I sniff my web browser's traffic, it doesn't use the CONNECT verb; it calls the GET method instead.

So I'm confused. When should I use each of these methods?
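
In short: for plain HTTP through a proxy, the client sends an ordinary request (with an absolute URI) and the proxy reads and forwards it; for HTTPS, the client first sends CONNECT, and the proxy opens a blind TCP tunnel through which the encrypted bytes flow. A sketch of the two request lines:

GET http://example.com/index.html HTTP/1.1     (plain HTTP: proxy parses and forwards)
Host: example.com

CONNECT example.com:443 HTTP/1.1               (HTTPS: proxy only tunnels bytes)
Host: example.com:443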


Source: (StackOverflow)

urllib2 - post request

I'm trying to perform a simple POST request with urllib2. However, the server's response indicates that it receives a plain GET. I checked the type of the outgoing request, and it is set to POST.
To check whether the server behaves as I expect, I tried a GET request with the (former POST) data concatenated to the URL. This got me the answer I expected.
Does anybody have a clue what I misunderstood?

import urllib
import urllib2

def connect(self):
    url = 'http://www.mitfahrgelegenheit.de/mitfahrzentrale/Dresden/Potsdam.html/'
    user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
    header = {'User-Agent': user_agent}

    values = {
        'city_from': 69,
        'radius_from': 0,
        'city_to': 263,
        'radius_to': 0,
        'date': 'date',
        'day': 5,
        'month': 3,
        'year': 2012,
        'tolerance': 0
    }

    data = urllib.urlencode(values)
    # req = urllib2.Request(url + data, None, header)  # GET works fine
    req = urllib2.Request(url, data, header)  # POST request doesn't work

    self.response = urllib2.urlopen(req)

This seems to be a problem like the one discussed here: Python URLLib / URLLib2 POST but I'm quite sure that in my case the trailing slash is not missing. ;)

I fear this might be a stupid misconception, but I'm already wondering for hours!
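
As a sanity check, urllib2 chooses the verb based on whether data is present, and Request.get_method() reports which verb will actually be used:

req = urllib2.Request(url, data, header)
print req.get_method()  # "POST" when data is given, "GET" otherwise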



EDIT: A convenience function for printing:

def response_to_str(response):
    return response.read()

def dump_response_to_file(response):
    # write the body out for inspection; the with-block closes the file
    with open('dump.html', 'w') as f:
        f.write(response_to_str(response))



EDIT 2: Resolution:

I found a tool to capture the real interaction with the site, http://fiddler2.com/fiddler2/. Apparently the server takes the data from the input form, redirects a few times, and then makes a GET request with this data simply appended to the URL.
Everything is fine with urllib2, and I apologize for taking up your time!


Source: (StackOverflow)

HTTP status code for unaccepted Content-Type in request

For certain resources, my RESTful server only accepts PUT and POST requests with JSON objects as the content body, thus requiring a Content-Type of application/json instead of application/x-www-form-urlencoded or multipart/form-data or anything else.

Malformed JSON (or lack thereof) returns a 400 with the error message taken directly from the exception raised by the JSON parser, for debugging purposes.

Which HTTP error code means that the client sent a request with an unacceptable Content-Type, even if the server could technically parse the request content?
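
For reference, HTTP defines a status code for exactly this situation: 415 Unsupported Media Type, meant for requests whose payload is in a format the resource does not support. A minimal sketch of such a response:

HTTP/1.1 415 Unsupported Media Type
Content-Type: application/json

{"error": "this resource only accepts application/json"}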


Source: (StackOverflow)

HTTP requests and Apache modules: Creative attack vectors

Slightly unorthodox question here:

I'm currently trying to break an Apache with a handful of custom modules.

What spawned the testing is that Apache internally forwards requests that it considers too large (e.g. 1 MB trash) to modules hooked in appropriately, forcing them to deal with the garbage data - and lack of handling in the custom modules caused Apache in its entirety to go up in flames. Ouch, ouch, ouch.

That particular issue was fortunately fixed, but the question's arisen whether or not there may be other similar vulnerabilities.

Right now I have a tool at my disposal that lets me send a raw HTTP request to the server (or rather, raw data through an established TCP connection that could be interpreted as an HTTP request if it followed the form of one, e.g. "GET ...") and I'm trying to come up with other ideas. (TCP-level attacks like Slowloris and Nkiller2 are not my focus at the moment.)
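
For anyone reproducing this kind of probing, here is a hedged sketch of pushing deliberately malformed bytes over a bare TCP connection, in Python (host and payload are placeholders):

import socket

# A request line that is far too long, followed by a NUL byte and junk;
# the kind of input the custom modules are being probed with.
s = socket.create_connection(('target.example', 80))
s.sendall('GET /' + 'A' * (1024 * 1024) + '\x00junk HTTP/1.1\r\n'
          'Host: target.example\r\n\r\n')
print s.recv(4096)
s.close()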

Does anyone have a few nice ideas how to confuse the server's custom modules to the point of server-self-immolation?

  • Broken UTF-8? (Though I doubt Apache cares about encoding - I imagine it just juggles raw bytes.)
  • Stuff that is only barely too long, followed by a 0-byte, followed by junk?
  • et cetera

I don't consider myself a very good tester (I'm doing this by necessity and lack of manpower; I unfortunately don't even have a more than basic grasp of Apache internals that would help me along), which is why I'm hoping for an insightful response or two or three. Maybe some of you have done some similar testing for your own projects?

(If stackoverflow is not the right place for this question, I apologise. Not sure where else to put it.)


Source: (StackOverflow)

HTTP requests via the Python Requests module don't work through a proxy where curl does. Why?

Using this curl command from Bash, I am able to get the response I am looking for:

curl -v -u z:secret_key --proxy http://proxy.net:80  \
-H "Content-Type: application/json" https://service.com/data.json

I have already seen this other post on proxies with the Requests module, and it helped me formulate my code in Python, but I need to make the request via a proxy. However, even while supplying the proper proxies it isn't working. Perhaps I'm just not seeing something?

>>> requests.request('GET', 'https://service.com/data.json',
...     headers={'Content-Type': 'application/json'},
...     proxies={'http': 'http://proxy.net:80', 'https': 'http://proxy.net:80'},
...     auth=('z', 'secret_key'))

Furthermore, at the same Python console I can use urllib to make a request and have it succeed:

>>> import urllib
>>> urllib.urlopen("http://www.httpbin.org").read()
---results---

Even trying requests against a plain non-HTTPS address fails to work:

>>> requests.get('http://www.httpbin.org')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Python/2.6/site-packages/requests/api.py", line 79, in get
   return request('get', url, **kwargs)
File "/Library/Python/2.6/site-packages/requests/api.py", line 66, in request
    prefetch=prefetch
File "/Library/Python/2.6/site-packages/requests/sessions.py", line 191, in request
    r.send(prefetch=prefetch)
File "/Library/Python/2.6/site-packages/requests/models.py", line 454, in send
    raise ConnectionError(e)
requests.exceptions.ConnectionError: Max retries exceeded for url:

Requests is so elegant and awesome but how could it be failing in this instance?


Source: (StackOverflow)

python-requests: ordering GET parameters

I am implementing a client library for a private HTTP API using python-requests. The API (which I don't control) expects the parameters to be in a certain order, but python-requests doesn't honor a sorted dict as a parameter.

This is what I tried:

import requests
from django.utils.datastructures import SortedDict

params = SortedDict()
params['s'] = 'value1'
params['f'] = 'value2'

requests.get('https://example.org/private_api', params=params)
#performs request as https://example.org/private_api?f=value1&s=value2 

This is what I am trying to avoid:

requests.get('https://example.org?{0}'.format(urlencode(params)))
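
One possibility, assuming the installed requests version supports it: params also accepts a sequence of key/value pairs, and the query string then follows the order of the sequence:

import requests

params = [('s', 'value1'), ('f', 'value2')]  # a list of tuples keeps order
requests.get('https://example.org/private_api', params=params)
# performs request as https://example.org/private_api?s=value1&f=value2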

Source: (StackOverflow)

YSlow recommendations. How necessary are they?

So I've just downloaded YSlow for Firebug and have taken a look at the results for a site I am building.

I'm seeing recommendations, for example, to use ETags, cookie-free domain for my static components, and add expires headers.

I'm thinking, well I could go off and fix these but there's more likely a bunch of other optimizations I could do first, e.g caching results from database calls or something similar.

I don't think this site will get enough usage to warrant acting on YSlow's recommendations.

I know that you should never optimize before you know you need to, but I'm thinking things like ETags and expires headers surely only come into play on sites with really heavy traffic.

If for example, I've written a poor implementation that makes 5 (relatively small) calls to the database per request, and YSlow is telling me that my 14 images are not on a cookie-free domain, then which of those two optimisations should be tackled first?


Source: (StackOverflow)