27 Sep 2010 @ 4:31 AM 

Amazon S3 Reduced Redundancy Storage is currently priced at $0.100 per GB for the “First 50 TB / Month of Storage Used”. Akamai is more expensive, and has an inherent problem for smaller businesses: thousands of Akamai servers will hit your origin if they need fresh content quickly. Impressive if you are MSN or MySpace, but ridiculous overkill otherwise. S3 for object storage and delivery is less taxing on your origin, and ten cents a gig looks cheap at first, but wait until you get your invoice for 20 TB of transfer. It will be $2,000.00. Every month.
Now consider running your own system with varnish. You can currently rent servers for $150.00 per month with 6 TB of transfer each. The math is easy: four of those give you 24 TB of transfer for $600.00 per month, and you now have four real servers you control, usable for varnish and whatever else you need. The reality is that wholesale data transfer now costs about $0.015 per GB, server rental included, if you know what you are doing. That is without any negotiated discount or additional considerations in terms of expertise or service.
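The arithmetic above is easy to check. This one-liner pair uses only the post's own figures (20 TB of S3 transfer at $0.10/GB versus four $150 servers with 6 TB each), taking 1 TB = 1000 GB, which is what the $2,000 figure implies:

```shell
# S3 transfer at $0.10/GB for 20 TB (20000 GB)
awk 'BEGIN { printf "S3, 20 TB: $%.2f/month\n", 0.10 * 20000 }'
# prints: S3, 20 TB: $2000.00/month

# four rented servers: $600/month for 24 TB (24000 GB) of transfer
awk 'BEGIN { printf "rented, per GB: $%.3f\n", 600 / 24000 }'
# prints: rented, per GB: $0.025
```

Even without any negotiated wholesale rate, the rented-server price per GB is a quarter of S3's list price.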
I think that S3 and Akamai are excellent services, but that they are outrageously expensive for most businesses. Fortune 500 companies with global customer bases are appropriately, if expensively, served by 77,000 Akamai servers distributing their bits. For midrange delivery of content, objects, or HTML, it is complete overkill, akin to renting an aircraft carrier for your onsite security when a decent patrol service would cover you. The S3 problem, familiar to anyone who has used it, is that you cannot really get out of the cloud once you are in it, unless you build the very system you were trying to replace with S3 in the first place.
To me, the great misconception with Akamai is that they “accelerate” content delivery. This is excellent marketing, but any subdomain division of your HTML and objects will get you this “acceleration”. Of course, if you have so many concurrent users in every world region that your network commit and transfer limits cannot handle them, then you may need Akamai. You may also just need ten $150.00 servers positioned globally, with an intelligent DNS system locating the image or video subdomain for users.
The point is that varnish makes it possible to scale as needed, without expensive monthly access fees and without outrageous per-GB transfer fees. As your traffic ebbs and flows, you decommission monthly server rentals or add more, adjusting your DNS and your varnish config distribution to suit the situation. You build internal expertise with the leading, trusted open source content distribution system, and you spend money wisely: on expertise, and on gear you control yourself.

Posted By: caunter
Last Edit: 27 Sep 2010 @ 01:03 PM

Categories: Varnish Cache
 20 Sep 2010 @ 12:48 PM 

Compile a kernel with

device carp

in the config.

Add the following to /etc/rc.conf, adjusting for your IP addresses.
This creates the master CARP interface.

ifconfig_em0="inet netmask"
ifconfig_carp0="vhid 1 advskew 0 advbase 1 pass mypass"

The second box needs this for its carp config – it will run as backup.

ifconfig_em0="inet netmask"
ifconfig_carp0="vhid 1 pass mypass advbase 1 advskew 100"
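A complete pair of rc.conf entries, with made-up addresses from the 192.0.2.0/24 documentation range (substitute your own addresses, netmask, shared virtual IP, and password), might look like this. Note that the box with the lower advskew wins the MASTER election:

```
# master
ifconfig_em0="inet 192.0.2.11 netmask 255.255.255.0"
ifconfig_carp0="vhid 1 advskew 0 advbase 1 pass mypass 192.0.2.10/24"

# backup
ifconfig_em0="inet 192.0.2.12 netmask 255.255.255.0"
ifconfig_carp0="vhid 1 advskew 100 advbase 1 pass mypass 192.0.2.10/24"
```

The trailing 192.0.2.10/24 is the shared virtual IP that fails over between the two boxes.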

Posted By: admin
Last Edit: 20 Sep 2010 @ 12:48 PM

Categories: Security, Varnish Cache
 14 Sep 2010 @ 1:59 PM 

Twitter does this, and I’ve implemented it too; we just modify vcl_error as follows:

sub vcl_error {
    set obj.http.Location = "http://oops.scaleengine.com/";
    set obj.status = 302;
    return (deliver);
}

Take out the 503 guru meditation stuff and simply set a redirect to another, friendly domain.

This does not fix “cannot reach the backend” configuration errors, and does not fix poorly configured backends, but if your backends are underpowered and overwhelmed by a traffic spike, you can at least send people to a nice fail page.
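A slightly more conservative sketch (Varnish 2.1 syntax, same fail-page domain as above) redirects only the 503s that a dead or overwhelmed backend produces, and lets any other error codes through unchanged:

```
sub vcl_error {
    # only rewrite backend-failure 503s into a friendly redirect
    if (obj.status == 503) {
        set obj.http.Location = "http://oops.scaleengine.com/";
        set obj.status = 302;
        return (deliver);
    }
}
```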

Posted By: admin
Last Edit: 14 Sep 2010 @ 01:59 PM

Categories: Varnish Cache
 27 Aug 2010 @ 9:26 PM 

This is a different and useful website speed test, especially because of the external element count and total page load time. Enjoy.

Posted By: admin
Last Edit: 27 Aug 2010 @ 09:26 PM

Categories: IIS7, Varnish Cache
 19 Jul 2010 @ 11:17 AM 

Being able to see how your load balance algorithm works in real time is a huge advantage with IIS7: the performance monitor lets us see real-time incoming HTTP requests and real-time ASP queue size. Correlating this view with a scrolling, filtered view of varnishlog, watching for backend health and reuse, lets us confirm that we have an optimized load balance algorithm.

We need two perfmon msc files set up and saved. One views HTTP arrival rates across the farm (HTTP Service Request Queues / Arrival Rate); I like the histogram display for my five servers. When a box comes out of the pool, we see its “bar” dip nice and low, giving it a break from traffic until it recovers health. The other views ASP.NET v2.0.50727 Requests Current for each IIS7 w3wp.exe, also in histogram view. If this “bar” starts going up, we look for correlation with varnish health checks.

The backend probe, which ultimately decides whether our server gets traffic or not, needs to check that ASP can build a page without being too expensive itself. Don’t ask for a heavy page: get your developer to build a simple aspx page, and set a timeout and frequency that take the server out if it cannot answer acceptably.

I use 200ms for my check.

.probe = {
    .url = "/howyoudoin/";
    .timeout = 200ms;
    .interval = 5s;
    .window = 5;
    .threshold = 1;
}

This URL is served from the default web site, on the IP address, and actually triggers the real ASP page (defined in web.config) that runs in our high-traffic webapp. You need to tune for an aggressive .timeout while still keeping the box in the pool for most traffic. There is no point in setting this so tight that varnish cannot find healthy backends.
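To put the probe in context, here is a sketch of one backend and a client director using it (Varnish 2.1 syntax; the backend name, internal address, and weight are invented placeholders, not values from our setup):

```
backend iis1 {
    .host = "10.0.0.11";      # placeholder internal address
    .port = "80";
    .probe = {
        .url = "/howyoudoin/";
        .timeout = 200ms;
        .interval = 5s;
        .window = 5;
        .threshold = 1;
    }
}

director farm client {
    { .backend = iis1; .weight = 10; }
    # further backends, with lower weights, go here
}
```

With window 5 and threshold 1, a single passing probe in the last five keeps the backend eligible; tighten the threshold if you want sick backends pulled faster.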

Once you have directors set up with backend probes, as above, view perfmon on your IIS console, and set up shell windows with filtered scrolling varnishlog data on your varnish server (or the same one if you have several in front of your webapp).

varnishlog | grep Backend_health

This scrolling display shows which IIS7 servers are seen to be healthy and which are not.

varnishlog | grep BackendReuse

This scrolling display (it moves fast if you get a lot of traffic) shows which IIS7 server is preferred by your client director. In a client director you set up a preferred backend, and it shows as preferred while it is healthy. If its ASP queue grows too large, it fails the health-check probe, is marked sick, and traffic starts to flow to the next preferred server defined in the director. The incoming HTTP traffic display for the IIS7 backends will reflect this behaviour.

We confirm our settings by observing the ASP queues build, the IIS7 server fail its health check, the load balance algorithm shift to a healthy server, and the incoming HTTP traffic sent by varnish move to another IIS7 server.

Posted By: admin
Last Edit: 19 Jul 2010 @ 11:36 AM

Categories: IIS7, Varnish Cache
