When you see “JavaScript runtime is out of memory; server shutting down instance” errors in your FMS logs, your app won’t stay loaded and none of your clients can connect. The fix is to increase the amount of memory available to the script engine.
In the FMS 4.0 docs the JSEngine tag is deprecated, which means that when your app runs out of the default 1024 KB and you crank JSEngine up to 51200, nothing actually changes. In FMS 4.0 you have to rename the tag in Application.xml to ScriptEngine, increase the value, and restart FMS.
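For reference, a sketch of what the change might look like in Application.xml. The RuntimeSize child element and its placement are from my memory of the FMS docs and may differ in your install; the value is in kilobytes, so check your own file for the exact structure:

```xml
<Application>
    <!-- FMS 4.0: the JSEngine tag is deprecated and silently ignored -->
    <ScriptEngine>
        <!-- raise the script engine memory from the 1024 KB default -->
        <RuntimeSize>51200</RuntimeSize>
    </ScriptEngine>
</Application>
```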
Use this vcl_fetch to add a custom Expires header to objects. This example adds one day (86400 seconds).
sub vcl_fetch {
    set beresp.grace = 4h;
    set beresp.ttl = 300s;
    C{
        #include <time.h>
        /* Build an RFC 1123 date one day in the future.
           Note the static buffer is shared across worker threads. */
        static char timebuf[30];
        const char *format = "%a, %d %b %Y %H:%M:%S GMT";
        struct tm timestruct;
        time_t now;
        time(&now);
        now += 86400; /* add one day */
        gmtime_r(&now, &timestruct);
        strftime(timebuf, sizeof(timebuf), format, &timestruct);
        VRT_SetHdr(sp, HDR_BERESP, "\010Expires:", timebuf, vrt_magic_string_end);
    }C
    return(deliver);
}
Amazon’s Reduced Redundancy Storage on S3 is currently priced at $0.100 per GB for the “First 50 TB / Month of Storage Used”. Akamai is more expensive and has an inherent problem for smaller businesses: they have thousands of servers that will all hit your origin if they need fresh content quickly. Impressive, and good if you are MSN or MySpace, but ridiculous overkill otherwise. S3 for object delivery and storage is less taxing on your origin, and ten cents a gig looks cheap initially, but wait until you get your invoice for 20 TB. It will be $2,000.00. Every month.
Consider your own system using varnish now. Currently, you can rent servers for $150.00 per month, with 6TB of transfer. I assume you can already do the easy math and see that 4 of those will get you 24TB of transfer for $600.00 per month, and you now have four real servers you control, and can use for varnish and whatever else you need. The reality is that wholesale data transfer is now about $0.015 per GB, with a server rental, if you know what you are doing. That is without any negotiated discount or additional considerations in terms of expertise or service.
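The arithmetic above can be sketched quickly. The figures are the article’s, counting 1 TB as 1,000 GB for easy math; note that at list price the rented servers work out to $0.025 per GB, so the $0.015 figure assumes you shop carefully:

```python
# Compare per-GB transfer cost: S3 versus four rented servers.
# Prices are the article's figures, not current quotes.

S3_PER_GB = 0.10          # S3 first-50-TB tier, $/GB
SERVER_MONTHLY = 150.00   # one rented server, $/month
SERVER_TB = 6             # transfer included per server, TB
GB_PER_TB = 1000          # decimal TB for simplicity

servers = 4
rental_cost = servers * SERVER_MONTHLY          # $600 per month
rental_gb = servers * SERVER_TB * GB_PER_TB     # 24,000 GB of transfer

s3_bill_20tb = 20 * GB_PER_TB * S3_PER_GB       # the $2,000 invoice
per_gb_rented = rental_cost / rental_gb         # $0.025/GB at list price

print(f"S3, 20 TB/month:     ${s3_bill_20tb:,.2f}")
print(f"Rented, 24 TB/month: ${rental_cost:,.2f} (${per_gb_rented:.3f}/GB)")
```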
I think that S3 and Akamai are excellent services, but that they are outrageously expensive for most businesses. Fortune 500 companies with global customer bases are appropriately served, albeit expensively, with 77,000 Akamai servers distributing their bits. For midrange delivery of content, objects or html, it is complete overkill, akin to renting an aircraft carrier to do your onsite security when a decent patrol service will cover you. The S3 problem, which will be familiar to anyone who has used it, is that you really cannot get out of the cloud once you are in it, unless you set up the system which you were trying to replace or create in the first place with S3.
To me, the great misconception with Akamai is that they “accelerate” content delivery. This is excellent marketing, but any subdomain division of your html and objects will get you this “acceleration”. Of course, if you have so many concurrent users in every world region that your network commit and transfer limits won’t handle them, then you may need Akamai. You may also just need ten $150.00 servers positioned globally, with an intelligent DNS system that locates the image or video subdomain nearest each user.
The point is that varnish makes it possible to scale as needed, without expensive monthly access fees, and without outrageous per GB transfer fees. As your traffic needs ebb and flow, you decommission your monthly server rentals, or you add more, adjusting your DNS, and your varnish config distribution to suit the situation. You build internal expertise with the leading, trusted open source content distribution system, and you spend money wisely, on expertise and on gear you control yourself.
Compile a kernel with
device carp
in the config.
Add the following to /etc/rc.conf, adjusting the IP addresses for your network.
This creates a Master CARP interface.
defaultrouter="192.168.1.1"
hostname="direct1"
ifconfig_em0="inet 192.168.1.8 netmask 255.255.255.0"
cloned_interfaces="carp0"
ifconfig_carp0="vhid 1 advskew 10 advbase 1 pass mypass 192.168.1.7/24"
The second box needs this for its carp config; it will run as backup because its advskew is higher (in CARP, the box advertising with the lowest advskew wins the master role).
defaultrouter="192.168.1.1"
hostname="direct2"
ifconfig_em0="inet 192.168.1.9 netmask 255.255.255.0"
cloned_interfaces="carp0"
ifconfig_carp0="vhid 1 pass mypass advbase 1 advskew 100 192.168.1.7/24"
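One addition worth considering (my suggestion, not part of the original setup): enable CARP preemption on both boxes so a recovered master reclaims the virtual IP instead of leaving the backup in charge. In /etc/sysctl.conf:

net.inet.carp.preempt=1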
Twitter does this, and I’ve implemented it too; just modify vcl_error as follows:
sub vcl_error {
    set obj.http.Location = "http://oops.scaleengine.com/";
    set obj.status = 302;
    return(deliver);
}
Take out the 503 guru meditation stuff and simply set a redirect to another, friendly domain.
This does not fix “cannot reach the backend” configuration errors, and it does not fix poorly configured backends, but if your backends are underpowered and overwhelmed by a traffic spike, you can at least send people to a nice fail page.
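If you would rather keep genuine configuration errors visible while still catching overload, a variant (a sketch in the same Varnish 2.x syntax as the example above) redirects only on 503:

sub vcl_error {
    if (obj.status == 503) {
        # backend overwhelmed or unreachable: send visitors somewhere friendly
        set obj.http.Location = "http://oops.scaleengine.com/";
        set obj.status = 302;
        return(deliver);
    }
    # all other statuses fall through to the default error page
}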