
Project 2 - Parallel HTTP Server

Project Objectives

In this project, you will be upgrading the simple web server created in Project 1 to support concurrent (parallel) downloads. In doing so, you will gain: 

  • Hands-on experience with parallel programming (threads or processes) in the Python 3 programming language, or serial techniques (via select()) that can provide an approximation of parallel service.
  • Hands-on experience with a broader set of HTTP/1.1 features



All of the requirements specified in Project 1 are still valid.  If part of your original Project 1 implementation was incomplete or buggy, you should fix it, either before starting this project, or during this project.

The high-level goal of Project 2 is the following:

Your web server must be upgraded to support concurrent requests from multiple web browsers. Simply put, if your website hosted a single 100GB file, and 1, 10, 100, ... 1000 clients tried to download the same file at the same time, their downloads should all make forward progress. In contrast, your original solution in Project 1 (assuming you did not provide any parallel code) would result in the first client downloading the entire file, followed by the second client downloading the entire file, and so on.


There are a variety of methods to support concurrent requests in your server. Any of the following methods is acceptable:

  1. Use multiple processes, each process handling a single active connection. Because processes are "heavy" and take a non-trivial amount of time to launch, your solution should launch a "pool" of processes during initialization, and re-use processes in the pool for each active socket.
  2. Use multiple threads (in a single process), where each thread handles a single active connection. Because threads are "lightweight", you can launch and kill threads for each active socket.
    1. Python note: In this interpreted language, all threads compete for the single global interpreter lock (GIL). The effect is that only 1 thread can be running Python code at a time, and thus the performance benefits of threads in Python are often minimal or non-existent. A straight C implementation using threads would not have this bottleneck.  (See:  Python Threads and the GIL, Understanding the GIL)
  3. Use select() to service many active sockets in a single web server process.  Your solution should attempt to ensure that all sockets are serviced "fairly", instead of always favoring some sockets over others.
  4. Use an event-based architecture to service many active sockets in a single web server process
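
As a rough illustration of option 2, a thread-per-connection accept loop might look like the sketch below. This is not a required design: `handle_client` here is a stub that always returns a fixed 5-byte body, and in your server it would be replaced by your Project 1 request-handling logic.

```python
import socket
import threading

def handle_client(conn, addr):
    """Serve one connection, then close it.
    (Stub: real code would parse the request and serve files.)"""
    try:
        request = conn.recv(4096)       # read one HTTP request
        if request:
            conn.sendall(b"HTTP/1.1 200 OK\r\n"
                         b"Content-Length: 5\r\n"
                         b"\r\n"
                         b"hello")
    finally:
        conn.close()

def serve(host="localhost", port=0):
    """Create the listening socket (port=0 picks a free port)."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind((host, port))
    listener.listen(5)
    return listener

def accept_loop(listener):
    """Accept connections forever, spawning one thread per connection."""
    while True:
        conn, addr = listener.accept()
        # Threads are "lightweight": create one per socket, let it exit
        # when the connection closes.
        threading.Thread(target=handle_client, args=(conn, addr),
                         daemon=True).start()
```

Because each connection gets its own thread, a slow client blocks only its own thread, not the accept loop, so other downloads keep making progress (subject to the GIL caveat above).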


In addition, the following features are also required of your web server:

  • Persistent connections:  In HTTP/1.1, a web server should leave a client connection open after serving a request. The client has the option to send additional HTTP requests over the already-open socket. This reduces the latency of each request, because a new TCP connection does not have to be established. (See: Wikipedia entry)  The client socket should stay open for 30 seconds, or a similar, finite, length of time.
    • Note: The Content-Length header (see below) is critical to the correct operation of persistent connections!
    • Note: The Connection: Close header should not be sent when using persistent connections! (If set, the browser might close the socket and never send another request on this connection)
  • Pipelined connections: In HTTP/1.1, a web server must accept multiple requests from a client even before it has responded to earlier ones. For example, a client could send 5 back-to-back requests for images on the same connection, and the server responds to each as it is ready. (See: Wikipedia entry)
    • Note: In Firefox, pipelining is off by default. To enable, go to "about:config" (as a URL), then ignore the warning, and search for network.http.pipelining or network.http.pipelining.aggressive, and change the value from false to true.
    • Note: In Chrome, pipelining is off by default.  To enable, launch Chrome on the command line with the --enable-http-pipelining option.
  • Graceful shutdown:  Rather than abruptly terminating when the server administrator does a CTRL-C, your web server should capture the CTRL-C signal and do a graceful shutdown. All active sockets should be allowed to finish their current request before being closed by the server.
    • Tip: Get stuck in a loop while testing your signal handler?  Use CTRL-Z to "background" your web server, then do a ps to get the process ID (PID) of the server, and then do a kill -9 <PID> to kill the server abruptly.
  • HEAD requests:  The HTTP HEAD method produces a response identical to HTTP GET, but without the response body (i.e. file data). This is frequently used by browsers to check on file creation/modification dates, and thereby determine if their local cached item is current, or if a newer version should be fetched from the server. 
  • Headers: The following response headers must be produced by your web server:
    • Date
    • Server
    • Content-Length
    • Content-Type  (the python mimetypes module should be able to guess a reasonable MIME type based on file extension)
    • Last-Modified
    • Expires (set an expiration time of 12 hours in the future)
  • Verbose / Silent Modes:  The default behavior of your web server should be silent under normal operation.  Add a --verbose command-line option to enable debugging output to be printed to the console. It is up to you how much debugging output to produce beyond a minimum of 1 line per URL request. (Real servers often allow you to vary the amount of debugging output produced, from minimal to extreme).
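
The required response headers can be assembled entirely from the Python standard library. Below is a sketch, assuming the requested file lives at `path`; the function name `make_headers` and the `Server` string are illustrative, not required.

```python
import mimetypes
import os
import time
from email.utils import formatdate

def make_headers(path, body_length):
    """Build the required response headers for the file at `path`.
    (make_headers is a hypothetical helper name.)"""
    ctype, _ = mimetypes.guess_type(path)          # guess from file extension
    return {
        "Date": formatdate(usegmt=True),           # current time, HTTP date format
        "Server": "ecpe177-server/2.0",            # any identifying string works
        "Content-Length": str(body_length),
        "Content-Type": ctype or "application/octet-stream",
        "Last-Modified": formatdate(os.path.getmtime(path), usegmt=True),
        "Expires": formatdate(time.time() + 12 * 3600, usegmt=True),  # 12 hours ahead
    }
```

Note that `formatdate(usegmt=True)` produces the RFC 1123 date format that HTTP expects, and `mimetypes.guess_type` returns `None` for unknown extensions, so a fallback type is needed.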



The same restrictions as specified in Project 1 apply here.


Functionality Testing

The same test strategy as specified in Project 1 applies here. 

In addition to the test website, you should also employ additional testing methods:

  • Place a large file in your web server.  While the file is downloading via a web browser, press CTRL-C in the server.  Does the download complete before the server gracefully terminates?
  • Place a large file in your web server. Download the file in parallel using several different web browsers. Do all concurrent downloads appear to make forward progress concurrently?


WARNING: I will definitely test your web server with a large, multi-GB file in order to easily view concurrent downloads.  If your web server does something sloppy, like calling read() once to load the entire contents of the file into memory, points will be deducted if Python crashes on my machine.  (Maybe I'll even lower the amount of memory in my virtual machine to something small, like 512MB, so there's no way the file could fit into memory at once...)
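
To avoid that failure mode, stream the file in bounded chunks instead of loading it whole. A minimal sketch (`send_file` and the `send` callback are illustrative names; in a real server, `send` would be something like `conn.sendall`):

```python
def send_file(path, send, chunk_size=64 * 1024):
    """Stream the file at `path` through `send` in bounded chunks,
    so memory use stays O(chunk_size) regardless of file size."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:               # empty read means end of file
                break
            send(chunk)                 # e.g. conn.sendall(chunk)
```

With this approach, a 100GB file never occupies more than `chunk_size` bytes of memory at a time.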


Performance Testing

After writing a web server that accepts parallel requests, you should benchmark its performance, and compare against the original web server in Project 1.  Ideally, we want to produce a graph that shows the number of pages served by your web server per second as the number of concurrent clients varies from 1 to infinity, or at least until the web server begins to suffer under heavy load.

There are many web server benchmarking tools.  How do we choose a suitable tool?  They fall into two main categories:

  • "Classic" designs: These tools (including FunkLoad, ab aka "Apache Bench", JMeter, and httperf) are flexible and generate a wide range of measurements. However, they may be slower than the web server they are attempting to profile!
  • "Modern" designs: These tools (including weighttp and wrk) use the same parallel, event-driven code style employed by the most sophisticated web servers. Although they produce only simple results (simple == streamlined), they can create a blizzard of requests sufficient to saturate most web servers.

For this project, I don't expect our Python-based web server to be particularly fast. Thus, a classic tool should be sufficient.  For this project, we will be using FunkLoad, because it impersonates a web browser by downloading HTML files and all referenced CSS/Javascript/images used within, and then creates fancy output reports with a variety of performance graphs. 

This blog post has helpful advice on how to setup your experiments in order to avoid gathering useless, misleading results. All the points are good advice, but the following rules are particularly important for this project: 

  1. Do not run the performance tester on the same machine as your web server, as they will compete for resources!
    (You can do so for initial proof of concept, but your final benchmark results submitted should not show a connection to localhost!) 
  2. Verify that the web server machine and performance tester machine are otherwise idle before running the test
  3. Verify that the network connecting the web server machine and performance tester machine is not congested before running the test.
    (Translation: Do NOT run this test over Wi-Fi!   Instead, run it over two adjacent computers plugged into the same, preferably gigabit, Ethernet switch). 


To get started, install FunkLoad on the performance measurement computer.  (The following are instructions tested on Ubuntu 12.04 LTS. Alternate installation instructions are also available if you are using a different environment)

sudo aptitude install python-dev python-setuptools python-webunit python-docutils gnuplot
sudo aptitude install tcpwatch-httpproxy --without-recommends
sudo easy_install -f -U funkload

To verify that your installation of FunkLoad works, install their demo and test it:

cd ~
fl-install-demo
cd funkload-demo/xmlrpc/

# Run demo
make test
make bench

# Open HTML file of results in Firefox
firefox test_credential-XXXXXXXXXX/index.html & 

Assuming you see a nice results page with graphs in Firefox, you should be good to proceed with benchmarking your own web server.

Create the following test files. Save the test script below as, e.g., test_ecpe177.py.  (This configuration is inspired by the basic FunkLoad tutorial) : 

import unittest
from funkload.FunkLoadTestCase import FunkLoadTestCase

class ecpe177(FunkLoadTestCase):
    """This test uses the configuration file ecpe177.conf."""

    def setUp(self):
        """Set up the test."""
        self.server_url = self.conf_get('main', 'url')

    def test_ecpe177(self):
        # The description should be set in the configuration file
        server_url = self.server_url
        # begin of test ---------------------------------------------
        nb_time = self.conf_getInt('test_ecpe177', 'nb_time')
        for i in range(nb_time):
            self.get(server_url, description='Get url')
        # end of test -----------------------------------------------

if __name__ in ('main', '__main__'):
    unittest.main()


ecpe177.conf : (Note, you should change the URL hostname from localhost to whatever the actual server location is)

# main section for the test case
[main]
title=Project 2 Web Server Test
description=COMP / ECPE 177
url=http://localhost:8000

# a section for each test
[test_ecpe177]
description=Access %(nb_time)s times the main url
nb_time=20

# a section to configure the test mode
[ftest]
log_to = console file
log_path = ecpe177-test.log
result_path = ecpe177-test.xml
sleep_time_min = 0
sleep_time_max = 0

# a section to configure the bench mode
[bench]
cycles = 1:10:20:30:40:50:75:100:125:150
duration = 40
startup_delay = 0.01
sleep_time = 0.01
cycle_time = 1
log_to =
log_path = ecpe177-bench.log
result_path = ecpe177-bench.xml
sleep_time_min = 0
sleep_time_max = 0.5


Ensure that your web server is running, and then verify that your benchmark is functional by running a quick test. (You should see OK printed in green at various points in the test)

fl-run-test -dv test_ecpe177.py

Note:  FunkLoad sends requests using the legacy HTTP/1.0 standard, unlike modern browsers, which use HTTP/1.1.  For our project, the only significant difference is that, in HTTP/1.0, a connection is closed after a single request, rather than remaining open for subsequent requests from the same client. You may need to extend your web server to support HTTP/1.0.
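
One way to handle both protocol versions is to decide connection persistence from the request line and the Connection header. A sketch (the function name is illustrative, and `headers` is assumed to map lowercased header names to values):

```python
def keep_alive(request_line, headers):
    """Return True if the connection should stay open after this request.
    `headers` maps lowercased header names to their values."""
    version = request_line.split()[-1]            # "HTTP/1.0" or "HTTP/1.1"
    connection = headers.get("connection", "").lower()
    if version == "HTTP/1.1":
        return connection != "close"              # persistent unless told otherwise
    return connection == "keep-alive"             # HTTP/1.0 closes by default
```

This way, browser requests stay persistent, while FunkLoad's HTTP/1.0 requests get a close-per-request connection as they expect.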

Run the actual benchmark test suite:

fl-run-bench test_ecpe177.py ecpe177.test_ecpe177

Produce the HTML report file, and open in Firefox: 

fl-build-report --html ecpe177-bench.xml
firefox test_ecpe177-XXXXXXXX/index.html & 



See the main resource page for links that helped me when developing my solution.



Your submitted project must include:

  • Web server code
  • FunkLoad test script and config file
  • FunkLoad benchmark report for Project 1 web server (including graphs)
  • FunkLoad benchmark report for Project 2 web server (including graphs)
  • A readme.txt file explaining what parallelism method you chose to implement

In standard Linux style, submit your final project as a .tar.gz compressed archive.  To create the archive, assuming your files are in the folder "project2", run:

$ tar -cvzf project2.tar.gz project2

Once created, upload this file to the corresponding Sakai assignment and submit.

To extract your archive, I will run:

$ tar -xvf project2.tar.gz