Adaptive Overload Control for Busy Internet Servers

Summary: This paper describes an overload shedding mechanism that uses admission control based on 90th-percentile response time to provide acceptable service to at least some users during extreme overload. The mechanism is built on top of the SEDA staged architecture and allows fine-grained admission control at each stage of the application pipeline.

Contributions: The authors show that their fine-grained per-stage admission control mechanism provides acceptable response times to some users during extreme overload while quickly rejecting the remaining users. With current systems, response time spikes enormously during overload for all users, so that no user makes any progress.

Analysis of the paper: The paper describes an overload control mechanism based on 90th-percentile response time, implemented as fine-grained per-stage admission control. The first idea is interesting and well evaluated, but the second is neither justified nor evaluated well in the paper.

Issues with the paper (in no particular order):

1. The authors compare their overload control mechanism with a mechanism that provides no overload control. This is a straw man. Why not compare with a standard scheme that limits the total number of TCP connections? A more detailed comparison would be against an approach that limits TCP connections based on response time; the latter would show whether per-stage admission control has benefits over a single global admission control scheme.

2. The authors need to clarify that the 90th-percentile response time is measured over accepted connections (and not all connections).

3. The overhead of the controller needs to be measured.

4. The authors need to explain how they tuned their controller. How hard is it? How long did it take them? Are there any experiences they can offer?

5. The main contribution of this paper is the use of fine-grained per-stage admission control. The cost of this approach is the controller tuning required and the controller overhead, yet the authors do not justify the benefits in detail (see point 1 above). In addition, per-stage control raises several issues:
   1) As the authors explain, per-stage control can lead to work being done in many stages of the application pipeline before a request is rejected late in the pipeline, which wastes work.
   2) When a stage is used infrequently, the controller does not kick in, and no response time guarantees can be made for such stages. A global control mechanism would not suffer from this problem.
   3) Is it always useful to have fine-grained admission control that allows some operations while disallowing others? Consider a system where you are allowed to log in but are then allowed only a few operations, none of which happen to be useful. A user is more frustrated by that than by a system that simply refuses the login and asks the user to come back later.
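
For concreteness, the sketch below shows the general kind of per-stage controller being discussed: each stage observes the 90th-percentile response time of completed requests over a window and adjusts an admission rate toward a target. This is not the authors' code; the class and parameter names, the window size, the rate bounds, and the additive-increase/multiplicative-decrease adjustment are illustrative assumptions on my part.

```python
import random
from collections import deque


class StageAdmissionController:
    """Illustrative per-stage admission controller (not the authors' code).

    Tracks the 90th-percentile response time of admitted requests over a
    fixed-size window and nudges the admission rate toward a target.
    """

    def __init__(self, target_ms=1000.0, window=100,
                 rate=100.0, min_rate=1.0, max_rate=5000.0):
        self.target_ms = target_ms           # 90th-percentile target (assumed)
        self.samples = deque(maxlen=window)  # recent response times
        self.rate = rate                     # admitted requests/sec
        self.min_rate, self.max_rate = min_rate, max_rate

    def record(self, response_ms):
        """Record one completed (admitted) request; adjust when window fills."""
        self.samples.append(response_ms)
        if len(self.samples) == self.samples.maxlen:
            self._adjust()
            self.samples.clear()

    def _adjust(self):
        # Estimate the 90th percentile of the current window.
        ordered = sorted(self.samples)
        p90 = ordered[int(0.9 * (len(ordered) - 1))]
        if p90 > self.target_ms:
            # Over target: back off multiplicatively (assumed policy).
            self.rate = max(self.min_rate, self.rate * 0.5)
        else:
            # Under target: probe upward additively (assumed policy).
            self.rate = min(self.max_rate, self.rate + 2.0)

    def admit(self, offered_rate):
        """Probabilistically admit so admitted load tracks self.rate."""
        return random.random() < min(1.0, self.rate / max(offered_rate, 1e-9))
```

Note that such a controller only ever sees the response times of requests it admitted, which is precisely why point 2 above asks the authors to state that the reported 90th-percentile figures cover accepted connections only.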