configuration-templates – Diff between revs 48 and 60

--- Rev 48
+++ Rev 60
@@ -1,6 +1,6 @@
-vcl 4.0;
+vcl 5.0;
 ###########################################################################
 ## Copyright (C) Wizardry and Steamworks 2016 - License: GNU GPLv3       ##
 ## Please see: http://www.gnu.org/licenses/gpl.html for legal details,   ##
 ## rights of fair usage, the disclaimer and warranty conditions.         ##
 ###########################################################################
@@ -300,33 +300,33 @@
     # When several clients are requesting the same page, Varnish will send one request to the backend and place the others on hold while fetching one copy from the backend. In some products this is called request coalescing, and Varnish does this automatically.
     # If you are serving thousands of hits per second, the queue of waiting requests can get huge. There are two potential problems - one is a thundering herd problem - suddenly releasing a thousand threads to serve content might send the load sky high. Secondly - nobody likes to wait. To deal with this we can instruct Varnish to keep the objects in cache beyond their TTL and to serve the waiting requests somewhat stale content.
     # if (!std.healthy(req.backend_hint) && (obj.ttl + obj.grace > 0s)) {
     #     return (deliver);
     # } else {
-    #     return (fetch);
+    #     return (miss);
     # }
 
     # We have no fresh fish. Let's look at the stale ones.
     if (std.healthy(req.backend_hint)) {
         # Backend is healthy. Limit age to 10s.
         if (obj.ttl + 10s > 0s) {
             #set req.http.grace = "normal(limited)";
             return (deliver);
         } else {
             # No candidate for grace. Fetch a fresh object.
-            return (fetch);
+            return (miss);
         }
     } else {
         # backend is sick - use full grace
         if (obj.ttl + obj.grace > 0s) {
             #set req.http.grace = "full";
             return (deliver);
         } else {
             # no graced object.
-            return (fetch);
+            return (miss);
         }
     }
 
     # fetch & deliver once we get the result
-    return (fetch); # Dead code, keep as a safeguard
+    return (miss); # Dead code, keep as a safeguard
 }
 
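For the grace logic in `vcl_hit` above to ever find stale candidates, the objects must be kept past their TTL when they are fetched; this is done by setting `beresp.grace` in `vcl_backend_response`. A minimal sketch follows - the 6h value and the bare subroutine body are illustrative assumptions, not part of this template:

```vcl
sub vcl_backend_response {
    # Keep objects up to 6 hours past their TTL (illustrative value),
    # so that when the backend is sick, the full-grace branch in
    # vcl_hit (obj.ttl + obj.grace > 0s) can still deliver stale content.
    set beresp.grace = 6h;
}
```

Note that `std.healthy()` used in the diffed code requires `import std;` near the top of the VCL file.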