Return of The Squid: how to build a terabyte texture cache using Squid 3 (updated)

This is a followup to my earlier post, where I discussed how to set up a local Squid web cache as an adjunct to your SL viewer’s texture cache.  There were two important caveats there: first, you could only use Squid 2.7, since StoreURLRewrite, the feature we need for running an efficient cache, wasn’t implemented in later versions; and second, there were bugs in recent viewers that resulted in some textures being cached in a broken state.

Well, things always change, so this post is aimed squarely at those earlier problems and at finding a better balance now that viewers support ten times the internal texture cache size that was previously available.  So, intrepid reader, gird your loins and let’s get kraken!

A New Squid

Squid 3.4 and later have a new StoreID feature, which is a semi-port of the earlier StoreURLRewrite feature we used in 2.7.  Check it out here:

IMPORTANT NOTE: This recipe doesn’t work with version 3.4 of Squid.  I don’t immediately see a good reason why, but multiple people [thanks, Lance!] have confirmed that 3.5 seems to be the minimum.

I’m going to use a copy of squid 3.5 that I downloaded and compiled myself.  3.5 is relatively new, so you may or may not be able to get prebuilt binaries for your computer.  As before, I’m targeting Linux (now Ubuntu 15.10), but maybe I’ll add information at the end on other OSes as time permits.

The recipe

  1. download squid 3.5.12:
  2. decide where you are going to install it.  I’ve chosen /opt/squid and I’m going to keep the source tree there too.
  3. unpack and compile
    1. mkdir /opt/squid/src
    2. cd /opt/squid/src
    3. tar xvfz /tmp/squid-3.5.12.tar.gz
    4. cd squid-3.5.12
    5. ./configure --prefix=/opt/squid
    6. make all
    7. make install

The New Cache

Setting up a cache using the new StoreID feature:

  1. edit /opt/squid/etc/squid.conf
  2. initialize the cache: /opt/squid/sbin/squid -z
  3. run squid: /opt/squid/sbin/squid

Note: whatever user you install and run squid as must have read/write access to the cache directory (as written below,  /opt/squid/var/cache/squid). If you use an installation that runs as the “squid” user, your cache directory must also be accessible to “squid.”

The New config

Following is a diff between the default squid.conf and my changes.  If you are unfamiliar with the format, lines that begin with “-” are deleted from the file, lines that begin with “+” are added, and lines with no prefix just give context.  The lines that start with “@@” indicate which lines of the before and after files are being discussed in the next hunk. These settings are really just a combination of lines from my previous post, the old Firestorm support post, and updates for Squid 3. The new special sauce starts with “store_id_” near the end, plus the new script that comes in the next section.

/opt/squid/etc$ diff -u squid.conf.default squid.conf
--- squid.conf.default	2015-12-06 15:40:36.021277888 -0500
+++ squid.conf	2015-12-06 22:08:28.926149461 -0500
@@ -28,10 +28,10 @@
 # Recommended minimum Access Permission configuration:
 # Deny requests to certain unsafe ports
-http_access deny !Safe_ports
+#http_access deny !Safe_ports
 # Deny CONNECT to other than secure SSL ports
-http_access deny CONNECT !SSL_ports
+#http_access deny CONNECT !SSL_ports
 # Only allow cachemgr access from localhost
 http_access allow localhost manager
@@ -58,12 +58,30 @@
 # Squid normally listens to port 3128
 http_port 3128
-# Uncomment and adjust the following to add a disk cache directory.
-#cache_dir ufs /opt/squid/var/cache/squid 100 16 256
+# 64 GB cache size, can make larger later.
+cache_dir aufs /opt/squid/var/cache/squid 64000 16 256
+# disable range requests due to LL fail. No Last Modify header.
+range_offset_limit -1
+# this is a local cache, prevent possible error
+visible_hostname cache
+# dont tell http-in we are proxying or it wont work right
+header_access Via deny all
+header_access Forwarded-For deny all
+header_access X-Forwarded-For deny all
+#dont cache baked textures
+acl bakeserver dstdomain
+cache deny bakeserver
 # Leave coredumps in the first cache dir
 coredump_dir /opt/squid/var/cache/squid
+# SL Texture cache
+refresh_pattern /cap/          259200  20%     302400
+refresh_pattern asset-cdn\.agni\.lindenlab\.com/.*       259200  20% 302400
 # Add any of your own refresh_pattern entries above these.
@@ -71,3 +89,15 @@
 refresh_pattern ^gopher:	1440	0%	1440
 refresh_pattern -i (/cgi-bin/|\?) 0	0%	0
 refresh_pattern .		0	20%	4320
+# store_id hackers
+acl rewritedoms dstdomain
+store_id_program /opt/squid/local/bin/
+store_id_children 40 startup=10 idle=5 concurrency=0
+store_id_access allow rewritedoms
+store_id_access deny all
+workers 4

Edit: Some people have had difficulties with the “workers 4” line.  This should be considered optional: if you have a machine capable of running several worker processes at once and an internet connection fast enough to make it worthwhile, then I’d recommend it… unless, of course, it doesn’t work at all when you are using it.
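For what it’s worth, one plausible culprit: in SMP mode, ufs/aufs cache_dir stores can’t be shared between workers (only the rock store is SMP-aware), so several workers pointed at a single aufs directory can misbehave.  A workaround sketch using squid’s ${process_number} macro to give each worker its own store (sizes and paths here are illustrative, not tested values):

```
workers 4
# one private aufs store per worker; size and path are examples only
cache_dir aufs /opt/squid/var/cache/squid-${process_number} 16000 16 256
```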


Note that if you want the subject-line-hinted terabyte-sized cache, you’ll need to update the cache_dir line appropriately.  Something like:

cache_dir aufs /opt/squid/var/cache/squid 1000000 256 256
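As a sanity check on those numbers, here is a quick back-of-the-envelope calculation of how a terabyte cache spreads across the L1 x L2 directory tree.  The 16 KB average object size is my assumption, not a measured SL value:

```python
# Rough estimate of objects per second-level directory for
# "cache_dir aufs ... 1000000 256 256".
size_mb = 1000000          # cache_dir size argument, in MB
l1, l2 = 256, 256          # first- and second-level directory counts
avg_obj_kb = 16            # ASSUMED average cached object size

objects = size_mb * 1024 // avg_obj_kb   # ~64 million objects
dirs = l1 * l2                           # 65536 directories
per_dir = objects // dirs                # ~976 objects per directory
print(objects, dirs, per_dir)
```

Under a thousand objects per directory is comfortable for most filesystems; if you assume much smaller objects, consider bumping the directory counts.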

The New Script

The new store_id script follows. Aside from being written in python instead of perl, it supports any number of “channels” (parallel workers), it exits gracefully, and handles a wider range of URLs. Although it really isn’t needed, I also decided to map meshes to a different store-id space so that I could manage them differently, even if I was just looking at the store-ids.

#!/usr/bin/python2.7 -u
import re
import sys

# group 1 is the host - simNNNN or a CDN server
# group 2 is the port on the server
# group 3 cap
# group 4 is either 'texture' or 'mesh'
# group 5 the actual id
texturl_patt = re.compile(r'(.*)\.agni\.lindenlab\.com(.*)/(.*/?)\?(texture|mesh)_id=(.*)')

def rewrite(texturl):
    m = texturl_patt.match(texturl)
    if m is not None:
        # Collapse host and port so that every URL for the same asset maps
        # to one store-id.  The exact namespace is arbitrary as long as it
        # is stable; keying on the asset type keeps meshes in their own
        # store-id space.
        r = 'http://{}{}'.format(m.group(4), m.group(5))
        return r
    return None

def process(line):
    if line == 'quit':
        sys.exit(0)
    stuff = line.split()
    if stuff[0].isdigit():
        # concurrency > 0: squid prefixes each request with a channel number
        r = rewrite(stuff[1])
        if r is not None:
            print('{} OK store-id={}'.format(stuff[0], r))
        else:
            print('{} ERR'.format(stuff[0]))
    else:
        # concurrency=0: a bare URL with no channel prefix
        r = rewrite(stuff[0])
        if r is not None:
            print('OK store-id={}'.format(r))
        else:
            print('ERR')

while True:
    line = sys.stdin.readline()
    if not line:
        # EOF - squid closed our stdin, so exit cleanly
        break
    process(line.strip())
Edit: a bit of important magic is the -u argument to python, which puts python’s i/o in unbuffered mode.  This is required to make sure that your script doesn’t stall out.
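You can sanity-check the rewrite logic outside of squid with a few lines of Python.  The sample URL and the store-id namespace here are illustrative values of my own, not official Linden Lab ones:

```python
import re

# Same URL shape the helper matches: host, port, cap path, asset type, id.
patt = re.compile(r'(.*)\.agni\.lindenlab\.com(.*)/(.*/?)\?(texture|mesh)_id=(.*)')

def store_id(url):
    m = patt.match(url)
    if m is None:
        return None
    # collapse host/port; key the store-id on asset type plus asset id
    return 'http://{}{}'.format(m.group(4), m.group(5))

def handle(line):
    # mimic the channel-prefixed request form squid uses when concurrency > 0
    chan, url = line.split(None, 1)
    sid = store_id(url)
    if sid is not None:
        return '{} OK store-id={}'.format(chan, sid)
    return '{} ERR'.format(chan)

# prints: 7 OK store-id=http://textureabcd-1234
print(handle('7 http://sim123.agni.lindenlab.com:12046/cap/render?texture_id=abcd-1234'))
```

Two different sim hosts serving the same texture_id will collapse to the same store-id, which is exactly what lets the cache hit across regions.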

The New Viewer

For purposes of testing, I’m using v4.7.5.47975 of Firestorm, but other viewers should work similarly.  Check out the Firestorm support page on Squid for some nice screenshots on how to set the parameters in their viewer.  You might need to use a hack like my original post and set the http_proxy environment variable.


Where’s the party, officer?

Well, the good news is that it works, both for textures and for meshes – it is clear from the logs that squid is actually effectively caching data, with lots of hits for both.  It remains to be seen if it is a net win, at least for a small number of users on a single LAN.  Why is this in question?  Well, the bottom line is that the viewers very rarely download all the bytes associated with assets.

To consider a case that is sadly all too common in SL, think about what happens when a creator uses a 1024×1024 texture on one face of a tiny part of someone’s jewelry.  If that person was a game designer, they’d be fired, but in SL the viewer needs to attempt to minimize the impact of that extremely poor choice by making a guess at how much detail is actually needed.  Then, because textures are encoded more-or-less as a succession of low-to-high resolution details, the viewer will only get as many of the bytes (up to the computed “discard level”) as currently needed to render just that one bit to the screen.  This is a HUGE savings for normal operations.
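As a toy illustration of how much a partial fetch can save (the level cap and the bytes-per-level model here are simplifying assumptions of mine, not the viewer’s actual heuristics):

```python
# Each discard level halves the decoded resolution, so the data needed
# shrinks roughly 4x per level.  The cap of 5 levels and the quarter-per-
# level byte model are simplifications for illustration.
def discard_level(tex_size, screen_px, max_level=5):
    ratio = max(tex_size // max(screen_px, 1), 1)
    return min(ratio.bit_length() - 1, max_level)   # integer floor(log2)

level = discard_level(1024, 16)   # 1024px texture on a ~16px jewel face
fraction = 1.0 / 4 ** level       # rough share of bytes actually fetched
print(level, fraction)            # level 5: about 1/1024 of the bytes
```

In this model the viewer pulls roughly a thousandth of the texture’s bytes for that jewelry face, which is why short-circuiting the full download matters so much.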

But when we’re caching with squid, either we’ve got the complete texture in the cache (and thus can supply whatever bytes the viewer wants handily) or we need to retrieve the whole thing from SL and start doling out the requested bytes as they arrive… and the cache doesn’t stop when the viewer ceases to care; it still fetches the whole thing so it doesn’t have to ask again.  In theory, a cache could be smart enough to get only the required bytes, but that is a management overhead that few use cases would likely care about. It is possible that someone’s figured out how to get Squid to be that smart… and I’ll update this article if I hear about it.

Meanwhile, the bottom line is that a viewer cached in this way feels different from one that isn’t. I can’t shake the feeling that the viewer gets slightly confused by the way the cache responds; it often seems satisfied with the wrong discard level for far too long, leaving you with occasional blurry textures.  At one point I thought I’d identified some viewer-side cache issues and proposed some fixes, but it has been a looong time since I’ve looked at the codebase, and some of my suggestions are clearly overtaken by events, since the http client library has been updated since then.


… things to try.

  • compare the different cache styles.  ufs vs aufs but also diskd and rock.  It might make sense to have a rock-based cache of small objects in front of an aufs-based cache of large objects.
  • The header_access lines are non-functional in the current version of these instructions, but I’ve retained them because a) they don’t seem to do any harm (probably because the viewer http libraries no longer seem to care) and b) they serve as documentation for future use.  I believe the 3.x equivalents are request_header_access and reply_header_access (with the same arguments), but these too are disabled unless squid is compiled with --enable-http-violations.  My guess, based on what the setting is trying to block, is that you might have better luck adding a line like:
    forwarded_for delete
    # or maybe
    # forwarded_for transparent
  • It would be really interesting to figure out and execute on a cache efficiency A-B test to parameterize the end performance in various ways.


Thanks to Lance Corrimal, who worked through the above instructions, found a few bugs, and reported on various successes and failures, and to Dave Bell, who noted that header_access is nonfunctional in squid 3.x and suggested a reminder that the user you run as needs permission to access the cache directory.