"Curiosity is the very basis of education and if you tell me that curiosity killed the cat, I say only the cat died nobly." - Arnold Edinborough

Performance is often important to people using nginx, and for good reason. Sadly, while many people will optimize their software stack, they rarely work on optimizing their back-end code, and more rarely still will they eliminate single points of failure. Such was the case when SitePoint recently published an article about uploading large files with PHP. This post discusses a way of accepting uploads that scales far better and doesn’t offer malicious users an easy DoS vector.

The Problem

File uploads are used in many places; depending on your site, people might be adding avatars, personal pictures, music or any other kind of file. Upload sizes can vary a lot, but in the end that doesn’t matter much: either way you’re offering a malicious user a single point of failure at which to direct a denial of service attack.

Allow me to illustrate. Let’s say you have an upload form for people to upload pictures, and you run Apache in pre-fork mode with mod_php and 50 max children, otherwise known as the standard Apache setup.
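For reference, a minimal sketch of such a prefork setup might look like the block below. The values are illustrative rather than recommendations, and note that in Apache 2.4 the MaxClients directive was renamed MaxRequestWorkers.

    # prefork MPM: one process handles one connection at a time
    <IfModule mpm_prefork_module>
        StartServers          5
        MinSpareServers       5
        MaxSpareServers      10
        MaxClients           50   # hard cap on concurrent requests
        MaxRequestsPerChild   0
    </IfModule>

With a cap of 50, fifty slow uploads are enough to occupy every worker, and request number 51 has to wait.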

Each time Apache accepts an upload, one of these processes is tied up for the entire duration of the transfer. Do you see the problem here? To Apache, a file upload to PHP is essentially a really long-running script, so you’re going to run out of Apache processes quickly. It might not even be a malicious user; you could simply be a victim of your own popularity.

If you’re using nginx then you’re already better off, as nginx will buffer the file upload to disk and only pass it to your fastcgi back-end once the upload is complete. If someone uploads a 1 GB file, though, nginx is still going to send 1 GB of data over fastcgi, but we can do something about that.
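As a rough sketch, the directives below govern that buffering behaviour; the paths and sizes here are illustrative, not recommendations:

    server {
        listen 80;

        location /upload {
            client_max_body_size     1g;    # reject anything larger outright
            client_body_buffer_size  128k;  # bodies above this are spooled to disk
            client_body_temp_path    /var/spool/nginx/client_temp;

            # the back-end only sees the request once the body has arrived in full
            fastcgi_pass 127.0.0.1:9000;
            include      fastcgi_params;
        }
    }

This keeps slow clients from tying up PHP workers, but once buffering completes the entire body is still replayed over fastcgi, so the 1 GB problem remains.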
