Project 4 - Parallel HTTP Server
In this project, you will be upgrading the simple web server created in Project 1 to support concurrent (parallel) downloads. In doing so, you will gain:
- Hands-on experience with parallel programming (threads or processes) in the Python 3 programming language
- Hands-on experience with a broader set of HTTP/1.1 features
All of the requirements specified in Project 1 are still valid. If part of your original Project 1 implementation was incomplete or buggy, you should fix it, either before starting this project, or during this project.
The high-level goal of Project 4 is the following:
Your web server must be upgraded to support concurrent requests from multiple web browsers. Simply put, if your website hosted a single 100GB file, and 1, 10, 100, ... 1000 clients tried to download the same file at the same time, their downloads should all make forward progress. In contrast, your original solution in Project 1 (assuming you did not provide any parallel code) would result in the first client downloading the entire file, followed by the second client downloading the entire file, etc...
There are a variety of methods to support concurrent requests in your server. Any of the following methods is acceptable:
- Use multiple processes, each process handling a single active connection. Because processes are "heavy" and take a non-trivial amount of time to launch, your solution should launch a "pool" of processes during initialization, and re-use processes in the pool for each active socket.
- Use multiple threads (in a single process), where each thread handles a single active connection. Because threads are "lightweight", you can launch and kill threads for each active socket.
- Python note: In this interpreted language, all threads compete for the single global interpreter lock (GIL). The effect is that only 1 thread can be running Python code at a time, and thus the performance benefits of threads in Python are often minimal or non-existent. A straight C implementation using threads would not have this bottleneck. (See: Python Threads and the GIL, Understanding the GIL)
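To make the thread-per-connection option concrete, here is a minimal sketch. It is an illustration only, not a required design: the fixed `RESPONSE`, `handle_client`, and `start_server` names are placeholders (a real server would parse the request and serve a file), and binding to port 0 just lets the OS pick a free port for demonstration.

```python
import socket
import threading

# Placeholder response; a real server would parse the request and serve a file
RESPONSE = b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\nConnection: close\r\n\r\nok"

def handle_client(conn):
    """Serve a single client connection in its own thread."""
    try:
        if conn.recv(4096):          # simplified request read
            conn.sendall(RESPONSE)
    finally:
        conn.close()

def start_server(host="127.0.0.1"):
    """Listen on an OS-chosen port; one daemon thread per accepted connection."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind((host, 0))         # port 0: the OS picks a free port
    listener.listen(16)

    def accept_loop():
        while True:
            conn, _addr = listener.accept()
            threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return listener.getsockname()[1]
```

Because each connection gets its own thread, a slow download on one socket no longer blocks the accept loop or the other clients (subject to the GIL caveat above).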
In addition, the following features are also required of your web server:
- Persistent connections: In HTTP/1.1, a web server should leave a client connection open after serving a request. The client has the option to send additional HTTP requests over the already-open socket. This reduces the latency of each request, because a new TCP connection does not have to be established. (See: Wikipedia entry) The client socket should stay open for 30 seconds, or a similar, finite, length of time.
- Note: The Content-Length header (see below) is critical to the correct operation of persistent connections!
- Note: The Connection: Close header should not be sent when using persistent connections! (If set, the browser might close the socket and never send another request on this connection)
- Pipelined connections: In HTTP/1.1, a web server must accept multiple requests from a client before a response has been provided. For example, a client could send 5 back-to-back requests for images in the same connection, and the server responds whenever it is ready. (See: Wikipedia entry)
- Note: In Firefox, pipelining is off by default. To enable, go to "about:config" (as a URL), then ignore the warning, and search for network.http.pipelining or network.http.pipelining.aggressive, and change the value from false to true.
- Note: In Chrome, pipelining is off by default and cannot be enabled except by compiling your own custom version.
- Graceful shutdown: Rather than abruptly terminating when the server administrator does a CTRL-C, your web server should capture the CTRL-C signal and do a graceful shutdown. All active sockets should be allowed to finish their current request before being closed by the server.
- Tip: Get stuck in a loop while testing your signal handler? Use CTRL-Z to "background" your web server, then do a ps to get the process ID (PID) of the server, and then do a kill -9 <PID> to kill the server abruptly.
- HEAD requests: The HTTP HEAD method produces a response identical to HTTP GET, but without the response body (i.e. file data). This is frequently used by browsers to check on file creation/modification dates, and thereby determine if their local cached item is current, or if a newer version should be fetched from the server.
- Headers: The following response headers must be produced by your web server:
- Content-Type (the python mimetypes module should be able to guess a reasonable MIME type based on file extension)
- Expires (set an expiration time of 12 hours in the future)
- Verbose / Silent Modes: The default behavior of your web server should be silent under normal operation. Add a --verbose command-line option to enable debugging output to be printed to the console. It is up to you how much debugging output to produce beyond a minimum of 1 line per URL request. (Real servers often allow you to vary the amount of debugging output produced, from minimal to extreme).
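As a hedged sketch of how persistent connections and graceful shutdown can interact, consider a per-connection handler plus a module-level shutdown flag. The names (`serve_connection`, `request_shutdown`), the placeholder `b"ok"` body, and the simplified request read are assumptions for illustration, not required structure.

```python
import signal
import socket

shutting_down = False

def request_shutdown(signum, frame):
    """CTRL-C handler: only sets a flag, so requests in flight finish first."""
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGINT, request_shutdown)

def serve_connection(conn, timeout=30):
    """Serve one client over a persistent connection.

    The socket stays open between requests for up to `timeout` seconds;
    an idle timeout, a client-side close, or a requested shutdown ends
    the loop. Content-Length tells the client where each body ends, so
    multiple responses can share the connection.
    """
    conn.settimeout(timeout)
    try:
        while not shutting_down:
            try:
                request = conn.recv(4096)        # simplified request read
            except socket.timeout:
                break                            # idle too long: close
            if not request:
                break                            # client closed its end
            body = b"ok"                         # placeholder response body
            header = "HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n" % len(body)
            conn.sendall(header.encode("ascii") + body)
    finally:
        conn.close()
```

Note that no Connection: close header is sent, and the handler never interrupts a response mid-send; the current request always completes before the connection is closed.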
The same restrictions as specified in Project 1 apply here.
The same test strategy as specified in Project 1 applies here.
In addition to the test website, you should also employ additional testing methods:
- Place a large file in your web server. While the file is downloading via a web browser, press CTRL-C in the server. Does the download complete before the server gracefully terminates?
- Place a large file in your web server. Download the file in parallel using several different web browsers. Do all downloads appear to make forward progress concurrently?
WARNING: I will definitely test your web server with a large, multi-GB file in order to easily view concurrent downloads. If your web server does something sloppy, like calling f.read() once to read the entire contents of the file into memory, points will be deducted if Python crashes on my machine. (Maybe I'll even lower the amount of memory in my virtual machine to something small, like 512MB, so there's no way the file could fit in memory at once...)
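To avoid the read-everything-at-once failure mode described above, the file can be streamed in fixed-size blocks. This is a sketch under stated assumptions: `send_file`, the 64 KB block size, and the omission of the Expires header are illustrative choices, not the required implementation.

```python
import mimetypes
import os

CHUNK = 64 * 1024    # 64 KB per read: memory stays bounded even for multi-GB files

def send_file(conn, path, method="GET"):
    """Stream a file to `conn` without holding the whole file in memory.

    Content-Length comes from the on-disk size, so the client knows where
    the body ends; a HEAD request receives the same headers but no body.
    """
    size = os.path.getsize(path)
    ctype = mimetypes.guess_type(path)[0] or "application/octet-stream"
    header = ("HTTP/1.1 200 OK\r\n"
              "Content-Type: %s\r\n"
              "Content-Length: %d\r\n"
              "\r\n" % (ctype, size))
    conn.sendall(header.encode("ascii"))
    if method == "HEAD":
        return                       # headers only, no body
    with open(path, "rb") as f:
        while True:
            block = f.read(CHUNK)    # one bounded block at a time
            if not block:
                break
            conn.sendall(block)
```

With this loop, a 100GB file costs the server roughly 64 KB of buffer per active download, regardless of how many clients are connected.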
After writing a web server that accepts parallel requests, you should benchmark its performance, and compare against the original web server in Project 1. Ideally, we want to produce a graph that shows the number of pages served by your web server per second as the number of concurrent clients varies from 1 to infinity, or at least until the web server begins to suffer under heavy load.
There are many web server benchmarking tools; they fall into two main categories:
- "Classic" designs: These tools (including FunkLoad, ab aka "Apache Bench", JMeter, and httperf) are flexible and generate a wide range of measurements. However, they may be slower than the web server they are attempting to profile!
- "Modern" designs: These tools (including weighttp and wrk) use the same parallel, event-driven code style employed by the most sophisticated web servers. Although they produce only simple results (simple == streamlined), they can create a blizzard of requests sufficient to saturate most web servers.
This blog post has helpful advice on how to set up your experiments in order to avoid gathering useless, misleading results. All the points are good advice, but the following rules are particularly important for this project:
- Do not run the performance tester on the same machine as your web server, as they will compete for resources!
(You can do so for an initial proof of concept, but the final benchmark results you submit should not show a connection to localhost!)
- Verify that the web server machine and performance tester machine are otherwise idle before running the test
- Verify that the network connecting the web server machine and performance tester machine is not congested before running the test.
(Translation: Do NOT run this test over Wi-Fi! Instead, run it over two adjacent computers plugged into the same, preferably gigabit, Ethernet switch).
To get started, install FunkLoad on the performance measurement computer. (The following instructions were tested on Ubuntu 12.04 LTS. Alternate installation instructions are also available if you are using a different environment.)
sudo aptitude install python-dev python-setuptools python-webunit python-docutils gnuplot
sudo aptitude install tcpwatch-httpproxy --without-recommends
sudo easy_install -f http://funkload.nuxeo.org/snapshots/ -U funkload
To verify that your installation of FunkLoad works, install their demo and test it:
# Run demo
# Open HTML file of results in Firefox
firefox test_credential-XXXXXXXXXX/index.html &
Assuming you see a nice results page with graphs in Firefox, you should be good to proceed with benchmarking your own web server.
Create the following test files. (This configuration is inspired by the basic FunkLoad tutorial)
test_ecpe177.py :
import unittest
from random import random
from funkload.FunkLoadTestCase import FunkLoadTestCase

class ecpe177(FunkLoadTestCase):
    """This test use a configuration file ecpe177.conf."""

    def setUp(self):
        """Setting up test."""
        self.server_url = self.conf_get('main', 'url')

    def test_ecpe177(self):
        # The description should be set in the configuration file
        server_url = self.server_url
        # begin of test ---------------------------------------------
        nb_time = self.conf_getInt('test_ecpe177', 'nb_time')
        for i in range(nb_time):
            self.get(server_url, description='Get url')
        # end of test -----------------------------------------------

if __name__ in ('main', '__main__'):
    unittest.main()
ecpe177.conf : (Note, you should change the URL hostname from localhost to whatever the actual server location is)
# main section for the test case
[main]
title=Project 2 Web Server Test
description=COMP / ECPE 177
url=http://localhost:8080/index.html

# a section for each test
[test_ecpe177]
description=Access %(nb_time)s times the main url
nb_time=20

# a section to configure the test mode
[ftest]
log_to = console file
log_path = ecpe177-test.log
result_path = ecpe177-test.xml
sleep_time_min = 0
sleep_time_max = 0

# a section to configure the bench mode
[bench]
cycles = 1:10:20:30:40:50:75:100:125:150
duration = 40
startup_delay = 0.01
sleep_time = 0.01
cycle_time = 1
log_to =
log_path = ecpe177-bench.log
result_path = ecpe177-bench.xml
sleep_time_min = 0
sleep_time_max = 0.5
Ensure that your web server is running, and then verify that your benchmark is functional by running a quick test. (You should see OK printed in green at various points in the test)
fl-run-test -dv test_ecpe177.py
Note: FunkLoad sends requests using the legacy HTTP/1.0 standard, unlike modern browsers that use HTTP/1.1. For our project, the only significant difference is that, in HTTP/1.0, a connection is closed after a single request, rather than remaining open for subsequent requests from the same client. You may need to extend your web server to support HTTP/1.0.
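One way to support both versions is to decide keep-alive behavior per request, from the request's HTTP version and its Connection header. A hedged sketch follows; `should_keep_alive` and the lower-cased `headers` dict are assumptions about how your parser stores headers, not a prescribed interface.

```python
def should_keep_alive(request_line, headers):
    """Decide whether to keep the socket open after responding.

    HTTP/1.1 defaults to keep-alive unless the client sends
    'Connection: close'; HTTP/1.0 defaults to close unless the client
    sends 'Connection: keep-alive'. `headers` maps lower-cased header
    names to their values.
    """
    version = request_line.strip().split()[-1]   # e.g. 'HTTP/1.0'
    connection = headers.get("connection", "").lower()
    if version == "HTTP/1.0":
        return connection == "keep-alive"
    return connection != "close"
```

With this check in place, FunkLoad's HTTP/1.0 requests close the socket after each response, while browsers speaking HTTP/1.1 still get persistent connections.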
Run the actual benchmark test suite:
fl-run-bench test_ecpe177.py ecpe177.test_ecpe177
Produce the HTML report file, and open in Firefox:
fl-build-report --html ecpe177-bench.xml
firefox test_ecpe177-XXXXXXXX/index.html &
See the main resource page for links that helped me when developing my solution.
There are slight differences between Python 3.x versions (3.2, 3.3, and 3.4). To ensure I use the same version of Python while grading that you used during development, include the following version-checking code during your program's initialization.
Note: Replace "3,4" with the version number of Python that you used.
import sys

if not sys.version_info[:2] == (3, 4):
    print("Error: need Python 3.4 to run program")
    sys.exit(1)
print("Using Python 3.4 to run program")
Your submitted project must include:
- Web server code
- FunkLoad test script and config file
- FunkLoad benchmark report for Project 1 web server (including graphs)
- FunkLoad benchmark report for Project 4 web server (including graphs)
- A readme.txt file explaining what parallelism method you chose to implement
In standard Linux style, submit your final project as a .tar.gz compressed archive. To create the archive, assuming your files are in the folder "project4", run:
$ tar -cvzf project4.tar.gz project4
Once created, upload this file to the corresponding Sakai assignment and submit.
To extract your archive, I will run:
$ tar -xvf project4.tar.gz