Amazon's Reduced Redundancy Storage on S3 is currently priced at $0.100 per GB for the "First 50 TB / Month of Storage Used". Akamai is more expensive and has an inherent problem for smaller businesses: thousands of edge servers that will hammer your origin whenever they need fresh content quickly. Impressive and good if you are MSN or MySpace, but ridiculous overkill otherwise. S3 for object delivery and storage is less taxing on your origin, and ten cents a gig looks cheap at first, but wait until you get your invoice for 20 TB. It will be $2,000.00. Every month.
Now consider your own system running varnish. You can currently rent servers for $150.00 per month with 6 TB of transfer included. The easy math shows that four of those give you 24 TB of transfer for $600.00 per month, and you now have four real servers that you control and can use for varnish and whatever else you need. The reality is that wholesale data transfer now runs about $0.015 per GB, server rental included, if you know what you are doing, and that is without any negotiated discount or additional considerations in terms of expertise or service.
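For the skeptical, here is the back-of-the-envelope arithmetic from the two paragraphs above, done with bc; the figures are the list prices quoted, not a quote from any particular provider.
# 20,000 GB delivered at $0.10 per GB
echo "20000 * 0.10" | bc
# 2000.00 per month
# four rented servers at $150.00 each, 6 TB of transfer apiece
echo "scale=3; (4 * 150) / (4 * 6000)" | bc
# .025 per GB, before any negotiated discount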
I think S3 and Akamai are excellent services, but they are outrageously expensive for most businesses. Fortune 500 companies with global customer bases are appropriately served, albeit expensively, by 77,000 Akamai servers distributing their bits. For midrange delivery of content, objects, or HTML, it is complete overkill, akin to renting an aircraft carrier for your onsite security when a decent patrol service will cover you. The S3 problem, familiar to anyone who has used it, is that you cannot easily get out of the cloud once you are in it, unless you build the very system you were trying to replace or avoid by using S3 in the first place.
To me, the great misconception about Akamai is that it "accelerates" content delivery. This is excellent marketing, but any subdomain division of your HTML and objects will get you this "acceleration". Of course, if you have so many concurrent users in every world region that your network commit and transfer limits cannot handle them, then you may need Akamai. You may also just need ten $150.00 servers positioned globally, with an intelligent DNS system that points users at the right image or video subdomain.
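As a rough illustration of that subdomain split, here is a minimal zone-file sketch; the hostnames and the 203.0.113.x documentation addresses are placeholders, and a geo-aware DNS service would hand out different addresses per region.
; hypothetical zone fragment for example.com
www      IN  A  203.0.113.10
img      IN  A  203.0.113.20
video    IN  A  203.0.113.30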
The point is that varnish makes it possible to scale as needed, without expensive monthly access fees and without outrageous per-GB transfer fees. As your traffic ebbs and flows, you decommission your monthly server rentals or add more, adjusting your DNS and your varnish config distribution to suit the situation. You build internal expertise with the leading, trusted open source content distribution system, and you spend money wisely: on expertise and on gear you control yourself.
Compile a kernel with
device carp
in the config.
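A minimal sketch of that kernel build, assuming an amd64 box with the system sources in /usr/src and a custom kernel config named VARNISH:
cd /usr/src/sys/amd64/conf
cp GENERIC VARNISH
echo 'device carp' >> VARNISH
cd /usr/src
make buildkernel KERNCONF=VARNISH
make installkernel KERNCONF=VARNISH
# reboot onto the new kernel before configuring the carp interfaces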
Then add the following to /etc/rc.conf on the first box, adjusting for your IP addresses. This creates the master CARP interface.
defaultrouter="192.168.1.1"
hostname="direct1"
ifconfig_em0="inet 192.168.1.8 netmask 255.255.255.0"
cloned_interfaces="carp0"
ifconfig_carp0="vhid 1 advskew 10 advbase 1 pass mypass 192.168.1.7/24"
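If you want to verify before a reboot, the same master interface can be created by hand; these commands simply mirror the rc.conf lines above, and ifconfig will show whether carp0 is MASTER or BACKUP.
ifconfig carp0 create
ifconfig carp0 vhid 1 advskew 10 advbase 1 pass mypass 192.168.1.7/24
ifconfig carp0
# the status line should report MASTER on this box once it wins the election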
The second box needs the following for its CARP config; it will run as the backup.
defaultrouter="192.168.1.1"
hostname="direct2"
ifconfig_em0="inet 192.168.1.9 netmask 255.255.255.0"
cloned_interfaces="carp0"
ifconfig_carp0="vhid 1 pass mypass advbase 1 advskew 100 192.168.1.7/24"
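By default CARP does not preempt, so if direct1 goes down and comes back it can sit in BACKUP indefinitely. If you want the master to reclaim the shared 192.168.1.7 address automatically, enable preemption on both boxes; this is the stock CARP sysctl, nothing specific to this setup.
sysctl net.inet.carp.preempt=1
echo 'net.inet.carp.preempt=1' >> /etc/sysctl.conf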
If you want to start varnishd with
-s file,/usr/cache/topscache,30720000000
create the cache file first with
dd if=/dev/zero count=60000000 of=/usr/cache/topscache
which writes out 30,720,000,000 bytes (60,000,000 blocks at dd's default 512-byte block size), matching the size given to -s. This works well for a server with 32 GB of RAM.
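Putting it together, a varnishd invocation using that cache file might look like this; the listen address, admin port, and VCL path are placeholders to adjust for your own layout.
varnishd -a :80 -T 127.0.0.1:6082 \
    -f /usr/local/etc/varnish/default.vcl \
    -s file,/usr/cache/topscache,30720000000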
Twitter famously sends overwhelmed visitors to a friendly error page, and I have implemented the same thing; we just modify vcl_error as follows:
sub vcl_error {
    set obj.http.Location = "http://oops.scaleengine.com/";
    set obj.status = 302;
    return(deliver);
}
Take out the 503 guru meditation stuff and simply set a redirect to another, friendly domain.
This does not fix “cannot reach the backend” configuration errors, and does not fix poorly configured backends, but if your backends are underpowered and overwhelmed by a traffic spike, you can at least send people to a nice fail page.
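The redirect only helps when varnish knows a backend is in trouble, so it is worth pairing with a health probe. Here is a sketch in the same VCL dialect, with a hypothetical origin address; once the probe marks the backend sick, failed fetches fall through to the vcl_error redirect above.
backend origin {
    .host = "192.168.1.20";   # hypothetical origin server
    .port = "80";
    .probe = {
        .url = "/";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    };
}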