I am experimenting with the RFC Expect header in Java's HttpURLConnection, which works perfectly except for one detail.
There is a 5 second wait before the body is sent in fixed-length streaming mode, and between each chunk in chunked streaming mode.
Here is the client class
public static void main(String[] args) throws Exception
{
    HttpURLConnection con=(HttpURLConnection)new URL("http://192.168.1.2:2000/ActionG").openConnection();
    //for 100-Continue logic
    con.setRequestMethod("POST");
    con.setRequestProperty("Expect", "100-Continue");
    //responds to 100-continue logic
    con.setDoOutput(true);
    con.setChunkedStreamingMode(5);
    con.getOutputStream().write("Hello".getBytes());
    con.getOutputStream().flush();
    con.getOutputStream().write("World".getBytes());
    con.getOutputStream().flush();
    con.getOutputStream().write("123".getBytes());
    con.getOutputStream().flush();
    //decode response and response body/error if any
    System.out.println(con.getResponseCode()+"/"+con.getResponseMessage());
    con.getHeaderFields().forEach((key,values)->
    {
        System.out.println(key+"="+values);
        System.out.println("====================");
    });
    try(InputStream is=con.getInputStream()){System.out.println(new String(is.readAllBytes()));}
    catch(Exception ex)
    {
        ex.printStackTrace(System.err);
        InputStream err=con.getErrorStream();
        if(err!=null)
        {
            try(err){System.err.println(new String(err.readAllBytes()));}
            catch(Exception ex2){throw ex2;}
        }
    }
    con.disconnect();
}
I am uploading 3 chunks. On the server side, 5 packets of data are received:
All headers: respond with 100 Continue
3 chunks: for each chunk, respond with 100 Continue
Last chunk [length 0]: respond with 200 OK
Here is the Test Server
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

final class TestServer
{
    public static void main(String[] args) throws Exception
    {
        try(ServerSocket socket=new ServerSocket(2000,0,InetAddress.getLocalHost()))
        {
            int count=0;
            try(Socket client=socket.accept())
            {
                int length;
                byte[] buffer=new byte[5000];
                InputStream is=client.getInputStream();
                OutputStream os=client.getOutputStream();
                while((length=is.read(buffer))!=-1)
                {
                    System.out.println(++count);
                    System.out.println(new String(buffer,0,length));
                    System.out.println("==========");
                    if(count<5)
                    {
                        os.write("HTTP/1.1 100 Continue\r\n\r\n".getBytes());
                        os.flush();
                    }
                    else
                    {
                        os.write("HTTP/1.1 200 Done\r\nContent-Length:0\r\n\r\n".getBytes());
                        os.flush();
                        break;
                    }
                }
            }
        }
    }
}
Output:
1
POST /ActionG HTTP/1.1
Expect: 100-Continue
User-Agent: Java/17.0.2
Host: 192.168.1.2:2000
Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
Connection: keep-alive
Content-type: application/x-www-form-urlencoded
Transfer-Encoding: chunked
========== //5 seconds later
2
5
Hello
========== //5 seconds later
3
5
World
========== //5 seconds later
//this and the last chunk arrive separately but with no delay
4
3
123
==========
5
0
==========
I have checked every timeout method on my connection object:
System.out.println(con.getConnectTimeout());
System.out.println(con.getReadTimeout());
Both return 0.
So where is this 5 second delay coming from?
I am using JDK 17.0.2 on Windows 10.
CodePudding user response:
3 Chunks. For each chunk respond with 100 Continue
That is not how Expect: 100-Continue works. Your server code is completely wrong for what you are attempting to do. In fact, your server code is completely wrong for an HTTP server in general. It is not even attempting to parse the HTTP protocol at all. Not the HTTP headers, not the HTTP chunks, nothing. Is there a reason why you are not using an actual HTTP server implementation, such as Java's own HttpServer?
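For comparison, a minimal sketch of the same endpoint on top of the JDK's built-in HttpServer could look like the following (port and path taken from your question; the handler body is purely illustrative). The library should parse the request line, the headers, and the chunked framing for you, and should answer the Expect: 100-Continue handshake on its own:
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

final class SimpleHttpServer
{
    public static void main(String[] args) throws Exception
    {
        HttpServer server = HttpServer.create(new InetSocketAddress(2000), 0);
        server.createContext("/ActionG", exchange ->
        {
            // Request line, headers and chunk framing are already decoded at this point
            byte[] body = exchange.getRequestBody().readAllBytes();
            System.out.println("Received body: " + new String(body));

            byte[] reply = "Done".getBytes();
            exchange.sendResponseHeaders(200, reply.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(reply); }
        });
        server.start();
    }
}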
When using Expect: 100-Continue, the client is required to send ONLY the request headers, and then STOP AND WAIT a few seconds to see if the server sends a 100 response or not:
If the server responds with 100, the client can then finish the request by sending the request body, and then receive a final response.
If the server responds with anything other than 100, the client can fail its operation immediately without sending the request body at all.
If no response is received, the client can finish the request by sending the request body and receive the final response.
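For illustration only, a rough raw-socket sketch of that handshake on the client side might look like this (host, port, path, body, and the wait duration are assumptions based on your setup; a real client has to parse the interim response much more carefully):
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;
import java.nio.charset.StandardCharsets;

final class ExpectContinueClient
{
    public static void main(String[] args) throws Exception
    {
        byte[] body = "HelloWorld123".getBytes(StandardCharsets.US_ASCII);
        try (Socket socket = new Socket("192.168.1.2", 2000))
        {
            OutputStream os = socket.getOutputStream();
            InputStream is = socket.getInputStream();

            // 1. Send ONLY the headers, announcing the body via Content-Length
            os.write(("POST /ActionG HTTP/1.1\r\n"
                    + "Host: 192.168.1.2:2000\r\n"
                    + "Content-Length: " + body.length + "\r\n"
                    + "Expect: 100-continue\r\n\r\n").getBytes(StandardCharsets.US_ASCII));
            os.flush();

            // 2. Wait a few seconds for an interim response
            socket.setSoTimeout(3000);
            try
            {
                String status = readLine(is);                // e.g. "HTTP/1.1 100 Continue"
                while (!readLine(is).isEmpty()) { }          // drain the interim response's headers
                if (!status.contains(" 100 "))
                {
                    System.out.println("Server refused: " + status);
                    return;                                  // fail without sending the body
                }
            }
            catch (SocketTimeoutException noInterimResponse)
            {
                // No answer within the wait window: proceed and send the body anyway
            }

            // 3. Send the body, then read the final response until the server closes
            os.write(body);
            os.flush();
            socket.setSoTimeout(0);
            System.out.println(new String(is.readAllBytes(), StandardCharsets.US_ASCII));
        }
    }

    // Naive CRLF line reader, good enough for a demonstration
    private static String readLine(InputStream is) throws IOException
    {
        StringBuilder sb = new StringBuilder();
        int b;
        while ((b = is.read()) != -1 && b != '\n') { if (b != '\r') sb.append((char) b); }
        return sb.toString();
    }
}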
The whole point of Expect: 100-Continue is for a client to ask for permission before sending a large request body. If the server doesn't want the body (i.e., the headers describe unsatisfactory conditions, etc.), the client doesn't have to waste effort and bandwidth sending a request body that will just be rejected.
HttpURLConnection has built-in support for handling 100 responses, but see How to wait for Expect 100-continue response in Java using HttpURLConnection for caveats. Also see JDK-8012625: Incorrect handling of HTTP/1.1 "Expect: 100-continue" in HttpURLConnection.
But, your server code as shown needs a major rewrite to handle HTTP properly, let alone handle chunked requests properly.
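To give a sense of what that would involve, here is a minimal sketch of a decoder for the chunked transfer coding (my own illustration, assuming the request headers have already been consumed and ignoring trailers and chunk extensions):
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

final class ChunkedDecoder
{
    // Reads one chunked-encoded body: <hex size>CRLF <data>CRLF ... repeated until a 0-sized chunk
    static byte[] readChunkedBody(InputStream is) throws IOException
    {
        ByteArrayOutputStream body = new ByteArrayOutputStream();
        while (true)
        {
            int size = Integer.parseInt(readLine(is), 16);   // chunk-size line, hexadecimal
            if (size == 0)
            {
                readLine(is);                                // CRLF that ends the body (no trailers assumed)
                return body.toByteArray();
            }
            body.write(is.readNBytes(size));                 // chunk data
            readLine(is);                                    // CRLF terminating the chunk data
        }
    }

    // Naive CRLF line reader, good enough for a demonstration
    private static String readLine(InputStream is) throws IOException
    {
        StringBuilder sb = new StringBuilder();
        int b;
        while ((b = is.read()) != -1 && b != '\n') { if (b != '\r') sb.append((char) b); }
        return sb.toString();
    }
}
A real server also has to handle Content-Length bodies, connection reuse, and error cases, which is exactly why reusing an existing HTTP server implementation is the better route.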
CodePudding user response:
So thank you @Remy Lebeau for the insights on how to properly handle this special header. After writing a basic parser and responding properly to both chunked (Transfer-Encoding: chunked) and fixed-length (Content-Length) streaming, I noticed that my client would still occasionally get stuck and wait for 5 seconds, while at other times it worked with no problem.
After hours of debugging the sun.net.www.protocol.http.HttpURLConnection class, I realized another flaw, this time not on the server side but in the client-side code. Specifically this bit:
con.getOutputStream().write("Hello".getBytes());
con.getOutputStream().flush();
con.getOutputStream().write("World".getBytes());
con.getOutputStream().flush();
con.getOutputStream().write("123".getBytes());
con.getOutputStream().flush();
I had mistakenly assumed that getOutputStream() in this class would cache the output stream it returns, but in fact it returns a new output stream every single time.
So I had to change the code to this:
OutputStream os=con.getOutputStream();
os.write("Hello".getBytes());
os.flush();
os.write("World".getBytes());
os.flush();
os.write("123".getBytes());
os.flush();
This finally solved all my problems. It works for both chunked and fixed-length streaming.