
HTTP/2 Java Server using Embedded Jetty - Part 2

In the last post we saw how to spin up an HTTP/2 server using embedded Jetty. In this post we will see how to use asynchronous servlets (as well as AIO, a.k.a. non-blocking IO) and why that is important.

Asynchronous Servlets ≠ Asynchronous IO

The concept of Asynchronous Servlets is often confused with Asynchronous IO or the use of NIO. However, Asynchronous Servlets are not primarily motivated by asynchronous IO, since:
  • HTTP Requests are mostly small and arrive in a single packet. Servlets rarely block on requests.
  • Many responses are small and fit within the server buffers, so servlets often do not block writing responses.

Asynchronous Servlets Use-case

The main use-case for asynchronous servlets is waiting for non-IO events or resources. Many web applications need to wait at some stage during the processing of an HTTP request, for example:
  • waiting for a resource to be available before processing the request (e.g., thread, JDBC Connection)
  • waiting for an application event in an AJAX Comet application (e.g., chat message, price change)
  • waiting for a response from a remote service (e.g., RESTful or SOAP call to a web service).
The servlet API (<=2.5) supports only a synchronous call style, so any waiting that a servlet needs to do must be done by blocking. Unfortunately this means that the thread allocated to the request must be held during that wait along with all its resources: kernel thread, stack memory and often pooled buffers, character converters, EE authentication context, etc. It is wasteful of system resources to hold all of this while waiting.

Significantly better scalability and quality of service can be achieved if waiting is done asynchronously. But, how much benefit will we get by off-loading container threads and switching to asynchronous servlets?

Benefits of Asynchronous servlet (RESTful Web Service)

Consider a web application that accesses a remote web service (e.g., a SOAP service or a RESTful service). Typically a remote web service can take hundreds of milliseconds to produce a response -- eBay's RESTful web service frequently takes 350ms to respond with a list of auctions matching a given keyword -- while only a few tens of milliseconds of CPU time are needed to locally process a request and generate a response.

To handle 1000 requests per second, each performing a 200ms web service call, a webapp would need 1000*(200+20)/1000 = 220 threads and 110MB of stack memory (at roughly 512KB of stack per thread). It would also be vulnerable to thread starvation if bursts occurred or the web service became slower.

If handled asynchronously, the web application would not need to hold a thread while waiting for the web service response. Even if the asynchronous mechanism cost 10ms (which it doesn't), this webapp would need 1000*(20+10)/1000 = 30 threads and 15MB of stack memory. That is an 86% reduction in the resources required, and 95MB more memory would be available for the application.
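As a sanity check, the back-of-envelope arithmetic above can be sketched in a few lines (assuming, as above, roughly 512KB of stack per thread):

```java
public class ThreadEstimate {

    // Threads needed = requests/sec * time each request holds a thread (in seconds).
    static int threadsNeeded(int requestsPerSecond, int heldMs) {
        return requestsPerSecond * heldMs / 1000;
    }

    public static void main(String[] args) {
        int blocking = threadsNeeded(1000, 200 + 20); // thread held for the 200ms wait + 20ms CPU
        int async    = threadsNeeded(1000, 20 + 10);  // thread held only for CPU work + 10ms dispatch cost
        System.out.println(blocking + " threads, ~" + blocking / 2 + "MB of stack"); // 220 threads, ~110MB
        System.out.println(async + " threads, ~" + async / 2 + "MB of stack");       // 30 threads, ~15MB
        System.out.println(100 * (blocking - async) / blocking + "% fewer threads"); // 86% fewer
    }
}
```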

Furthermore, if multiple web service requests are required, the asynchronous approach allows them to be made in parallel rather than serially, without allocating additional threads.
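As a rough illustration of that parallelism (the two service calls here are hypothetical stand-ins, not code from this post), `CompletableFuture` lets both remote calls proceed at the same time, with the combining step running only once both responses have arrived:

```java
import java.util.concurrent.CompletableFuture;

public class ParallelCalls {

    // Hypothetical stand-ins for two remote web service calls; in a real webapp
    // these would return futures completed by a non-blocking HTTP client.
    static CompletableFuture<String> auctionService() {
        return CompletableFuture.supplyAsync(() -> "auctions");
    }

    static CompletableFuture<String> priceService() {
        return CompletableFuture.supplyAsync(() -> "prices");
    }

    // Both calls are in flight at the same time; no thread is held
    // waiting serially for one and then the other.
    static String fetchBoth() {
        return auctionService()
                .thenCombine(priceService(), (a, p) -> a + "+" + p)
                .join();
    }

    public static void main(String[] args) {
        System.out.println(fetchBoth()); // auctions+prices
    }
}
```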

Hello world

Let's dive into our Hello World example and see how we can benefit from async servlets. The default size of the Jetty thread pool (the threads serving user requests) is 250. Without limiting this pool, we would have to fire more than 250 concurrent requests to measure the impact of async operation in a test environment. So, for this experiment, let's limit the server thread pool to 5 threads in the server creation code, as follows.
    QueuedThreadPool pool = new QueuedThreadPool(5); // cap the pool at 5 threads
    pool.setName("server-pool");
    Server server = new Server(pool);
Now let's introduce an artificial delay of 5 seconds in the servlet handler, as follows, to mimic real processing in production.
@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
    try {
        Thread.sleep(5_000); // mimic slow processing; the thread is held for the entire wait
        response.setStatus(HttpServletResponse.SC_OK);
        response.setContentType("text/html");
        response.setCharacterEncoding("utf-8");
        response.getWriter().println("<h1>Hello from HelloServlet</h1>");
    } catch (Exception ex) {
        log.error(null, ex);
    }
}
Let's fire 10 requests and measure the response times using the following client code.
    HttpClient httpClient = new HttpClient();
    httpClient.setConnectTimeout(60_000);
    httpClient.setIdleTimeout(60_000);
    httpClient.start();
    ExecutorService executor = Executors.newCachedThreadPool();
    for (int i = 0; i < 10; i++) {
        executor.submit(() -> {
            long t1 = System.currentTimeMillis();
            Request req = httpClient.newRequest("http://localhost:8080/hello-servlet");
            req.idleTimeout(1, TimeUnit.MINUTES);
            ContentResponse response = req.send();
            long t2 = System.currentTimeMillis();
            log.info("response time:{} ms", t2 - t1);
            return true;
        });
    }
    executor.shutdown();
    executor.awaitTermination(2, TimeUnit.MINUTES);
    httpClient.stop();
Here is the screenshot of the thread usage (using NetBeans profiler).

In this image, you can see that out of 5 threads, 3 are used for internal purposes and 2 (threads 19 and 23) are serving user requests. The purple color indicates the time the threads spent in the sleeping state. Since we don't have much processing, we can take the purple color as an indication of the duration for which a thread is tied to its corresponding request. Let's look at the output.
[pool-1-thread-8 ] INFO com.pb.jetty.http2.client.Client - response time:5254 ms
[pool-1-thread-7 ] INFO com.pb.jetty.http2.client.Client - response time:5254 ms
[pool-1-thread-6 ] INFO com.pb.jetty.http2.client.Client - response time:10238 ms
[pool-1-thread-10] INFO com.pb.jetty.http2.client.Client - response time:10242 ms
[pool-1-thread-1 ] INFO com.pb.jetty.http2.client.Client - response time:15247 ms
[pool-1-thread-5 ] INFO com.pb.jetty.http2.client.Client - response time:15243 ms
[pool-1-thread-2 ] INFO com.pb.jetty.http2.client.Client - response time:20252 ms
[pool-1-thread-9 ] INFO com.pb.jetty.http2.client.Client - response time:20247 ms
[pool-1-thread-4 ] INFO com.pb.jetty.http2.client.Client - response time:25245 ms
[pool-1-thread-3 ] INFO com.pb.jetty.http2.client.Client - response time:25251 ms
This output clearly shows that the third and later requests are queued even though the two threads (19 & 23) are doing nothing but sleeping. In other words, threads are tied to requests, and when the thread pool is full, requests are queued regardless of whether a thread is waiting/sleeping or doing useful work.

In most real-world scenarios, the thread will be waiting for data from a database or from a RESTful service, wasting memory and thread-pool capacity and increasing the mean response time.
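The queuing effect is easy to reproduce with plain JDK classes -- this is a sketch, not the post's Jetty setup: a fixed pool of 2 threads (like the 2 Jetty threads left for requests) works through 10 sleeping tasks in 5 waves:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class QueueDemo {

    // Run `tasks` jobs that each just sleep for `sleepMs` on a pool of
    // `threads` workers, and return the total elapsed time.
    static long runAll(int threads, int tasks, long sleepMs) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CountDownLatch done = new CountDownLatch(tasks);
        long start = System.currentTimeMillis();
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> {
                try { Thread.sleep(sleepMs); } catch (InterruptedException ignored) { }
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) throws InterruptedException {
        // 10 tasks on 2 threads => 5 waves of 200ms each, roughly 1000ms total,
        // even though the threads do nothing but sleep.
        System.out.println("elapsed: " + runAll(2, 10, 200) + " ms");
    }
}
```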

Let's tweak the servlet code a little to release the waiting/sleeping thread.
First, change the servlet registration code as follows to enable async support in the servlet.
    ServletHolder asyncHolder = context.addServlet(HelloServlet.class, "/hello-servlet");
    asyncHolder.setAsyncSupported(true);
Change the doGet method in HelloServlet as follows.
    private final ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
        // Put the request into asynchronous mode; the container thread is released immediately.
        final AsyncContext asyncContext = request.startAsync();
        executor.schedule(() -> {
            try {
                response.setStatus(HttpServletResponse.SC_OK);
                response.setContentType("text/html");
                response.setCharacterEncoding("utf-8");
                response.getWriter().println("<h1>Hello from HelloServlet</h1>");
            } catch (Exception ex) {
                log.error(null, ex);
            } finally {
                asyncContext.complete(); // always complete, even on error
            }
        }, 5, TimeUnit.SECONDS);
    }

Let's look at the thread profiling result.
This image shows that the server-pool threads are idle (i.e., in a parked state) most of the time, and hence it is evident that they are not tied to the lifetime of a request.

Let's look at the output.
[pool-1-thread-9 ] INFO com.pb.jetty.http2.client.Client - response time:5236 ms
[pool-1-thread-10] INFO com.pb.jetty.http2.client.Client - response time:5236 ms
[pool-1-thread-2 ] INFO com.pb.jetty.http2.client.Client - response time:5239 ms
[pool-1-thread-8 ] INFO com.pb.jetty.http2.client.Client - response time:5238 ms
[pool-1-thread-6 ] INFO com.pb.jetty.http2.client.Client - response time:5239 ms
[pool-1-thread-4 ] INFO com.pb.jetty.http2.client.Client - response time:5240 ms
[pool-1-thread-5 ] INFO com.pb.jetty.http2.client.Client - response time:5240 ms
[pool-1-thread-3 ] INFO com.pb.jetty.http2.client.Client - response time:5240 ms
[pool-1-thread-7 ] INFO com.pb.jetty.http2.client.Client - response time:5240 ms
[pool-1-thread-1 ] INFO com.pb.jetty.http2.client.Client - response time:5244 ms 
The response time is ~5 seconds for all the requests, so it is clear that none of them were queued.

Of course, this example is far from a real-world scenario, but I believe it gives you an idea of what async servlets are.

You may think that in the real world the net effort is the same whether we execute in the servlet thread or offload to another thread. You are right. The answer lies in what you do in the offloaded thread. If you are calling a REST service and your client uses non-blocking IO (a.k.a. AIO), then instead of one thread per request you can handle thousands of requests using a very small number of threads -- as few as 5.
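The client code in this post uses Jetty's HttpClient, but the idea can be sketched with the JDK's own non-blocking `java.net.http.HttpClient` (Java 11+); the tiny embedded server here is only a stand-in for the remote REST service:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class AsyncRestCall {

    // Call a (stand-in) REST service asynchronously and return the status code.
    static int callOnce() throws Exception {
        // Tiny stand-in for the remote service, using the JDK's built-in server.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/auctions", exchange -> {
            byte[] body = "[]".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        try {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create(
                    "http://localhost:" + server.getAddress().getPort() + "/auctions")).build();
            // sendAsync() returns immediately; the response is completed later on the
            // client's internal selector threads, so no caller thread blocks waiting.
            CompletableFuture<HttpResponse<String>> future =
                    client.sendAsync(request, HttpResponse.BodyHandlers.ofString());
            return future.join().statusCode();
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("status: " + callOnce()); // status: 200
    }
}
```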

For more information on how to use AIO REST client please refer to this post.

You can get the source for this project from here. You can open the project directly in NetBeans and run it, or run it from the command line using Maven.

AIO (non-blocking) Servlets

But what about AIO servlets? It is a little complicated to picture the use of AIO in servlets; it is better explained in the AIO REST client post. But for the record, Jetty already uses AIO in servlets. Here is a little scenario that can help you understand where AIO is used in Jetty.

Thread per connection

The traditional IO model of Java associates a thread with every TCP/IP connection. If you have a few very active connections, this model can scale to a very high number of requests per second.
However, the traffic profile typical of many web applications is many persistent HTTP connections that are mostly idle while users read pages or search for the next link to click. With such profiles, the thread-per-connection model can have problems scaling to the thousands of threads required to support thousands of users on large scale deployments.

Thread per request

The Java NIO libraries support asynchronous IO, so that threads no longer need to be allocated to every connection. When the connection is idle (between requests), then the connection is added to an NIO select set, which allows one thread to scan many connections for activity. Only when IO is detected on a connection is a thread allocated to it. 
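The select-set idea can be sketched directly with the JDK's NIO classes -- one `Selector` watches any number of registered channels, so an idle connection costs only a registration, not a dedicated thread:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

public class SelectorSketch {

    // Register a listening channel with a selector and do one non-blocking
    // scan; returns how many channels had pending IO.
    static int scanOnce() throws IOException {
        try (Selector selector = Selector.open();
             ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress(0)); // ephemeral port
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);
            // selectNow() lets one thread check many connections for activity
            // without blocking; a thread is dispatched only when IO is ready.
            return selector.selectNow();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("channels ready: " + scanOnce()); // 0 -- nothing connected yet
    }
}
```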
