
Tuning up SafeSquid for best results

Tuning the timing-related parameters for optimum results

It should be possible to extract better performance out of SafeSquid by tuning both the overall system and the application itself.

Quite a few users have experienced difficulties due to a lack of understanding of SafeSquid's configuration, and possibly due to insufficient documentation on the subject.
Hopefully, the following will help.


Startup parameters, manageable by editing startup.conf or by invoking "/etc/init.d/safesquid adjust"

  • Socket_Timeout: Socket_Timeout is a command-line parameter of the SafeSquid executable.
    SafeSquid's init script reads this value from startup.conf and passes it to the safesquid executable.
    The socket timeout is the minimum time a socket handle will be monitored by safesquid for a subsequent incoming request on an established connection.
    If the client-side application supports pipelining, subsequent requests will be handled with nearly zero latency.
    If further requests are not made on this socket, it is transferred to the client pool and monitored further as per the entries in the General Section of SafeSquid's CONFIG XML.
    SafeSquid will additionally check a socket's availability for 10 times the Socket_Timeout before considering it a dead socket.
    Depending upon the environment and available system resources, setting this value between 30 and 60 should give optimum performance.
     
  • Thread_Timeout: Thread_Timeout is a command-line parameter of the SafeSquid executable.
    SafeSquid's init script reads this value from startup.conf and passes it to the safesquid executable.
    SafeSquid uses one thread to serve each request.
    Thread_Timeout is the time a thread is kept alive after serving a request, so that it can serve a new request immediately, without a fresh thread having to be created.
    A higher Thread_Timeout reserves virtual memory for a longer period, but reduces the CPU overhead involved in creating new threads.
    A lower Thread_Timeout releases virtual memory faster, and may be beneficial if the environment requires a large number of concurrent threads while conserving virtual memory.
     
  • Overload_Factor: Overload_Factor is a command-line parameter of the SafeSquid executable.
    SafeSquid's init script reads this value from startup.conf and passes it to the safesquid executable.
    The algorithm for applying the Overload_Factor was changed in SafeSquid 4.2.2.RC.9.1B.
    The Overload_Factor is used to depreciate (reduce) the life of a socket held in the client pool, in proportion to the number of connections held in the pool (the client pool size).
    The algorithm computes: Overload Multiplier = 1 + (Overload_Factor * client_pool_size) / MAXTHREADS
    The effective Header Timeout for each connection in the client pool for which SafeSquid is still waiting for the first request is recalculated as: Header Timeout - Overload Multiplier
    The effective KeepAlive Timeout for each connection in the client pool for which SafeSquid has already served a request and is awaiting subsequent requests is recalculated as: KeepAlive Timeout / Overload Multiplier
    The new algorithm tries to ensure that the number of sockets concurrently held by SafeSquid in the ESTABLISHED state remains below the set limit for MAX_FDS.
    Closed connections are moved to the CLOSE_WAIT state, and if a client-side request appears on such a socket within the system's keepalive time, the socket is reclaimed.
    A worked example of the Overload Multiplier, and an indicative startup.conf sketch, follow this list.
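
As a worked example of the overload algorithm, take purely illustrative values of Overload_Factor = 4, a client pool currently holding 150 connections, and MAXTHREADS = 600:

Overload Multiplier = 1 + (4 * 150) / 600 = 2
Effective Header Timeout = 20 - 2 = 18 (for a configured Header Timeout of 20)
Effective KeepAlive Timeout = 60 / 2 = 30 (for a configured KeepAlive Timeout of 60)

The three startup parameters above live in startup.conf. The snippet below is only an indicative sketch, assuming the variable names mirror the parameter names; check your own startup.conf for the exact names used by your SafeSquid version.

# startup.conf (hypothetical excerpt; names and values are assumptions)
SOCKET_TIMEOUT=45      # 30 to 60 is usually optimum
THREAD_TIMEOUT=60      # higher keeps idle threads longer, but avoids thread-creation overhead
OVERLOAD_FACTOR=4      # depreciates the life of client-pool sockets under load

After editing startup.conf, the new values take effect when the init script next starts safesquid; alternatively, these parameters can be managed via "/etc/init.d/safesquid adjust" as mentioned above.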

SafeSquid's config parameters, manageable via the GUI: http://safesquid.cfg -> config -> General Section (an illustrative set of starting values follows this list)

  • Connection Pool Timeout: This parameter controls the life of an outbound socket.
    SafeSquid creates a socket to fetch the response from a remote web-server, to serve a client-side request.
    This socket is placed in a "Connection Pool" after use.
    SafeSquid reuses this socket, if it is still available, for fetching further responses from the same web-server.
    This tremendously reduces the overall protocol overheads and latency.
    However, the number of sockets maintained in the "Connection Pool" is limited to the "Connection Pool Size".
    If placing a new connection in the pool would increase the pool size beyond the "Connection Pool Size", the oldest socket in the pool is closed first.
     
  • Connection Timeout: This parameter sets the maximum time safesquid will wait for a remote web-server to respond.
    This parameter can be set differently for each "profile", by creating a new entry in the General Section.
    This provides a mechanism to deal with web-applications that respond very slowly.
    Normally a value of 30 should be enough for standard web-sites. However, quite a few web-applications break proxy-based connections because they provide no feedback at all while they process a request.
    These are typically applications that generate a dynamic response.
    Increasing the Connection Timeout may also be required if your outbound traffic is heavily loaded.
     
  • Header Timeout: This parameter sets the maximum time safesquid will wait for a client to make the first (initial) request after a connection is established.
    In normal business networks, where the client-side connections are on the same LAN as the proxy service, the connectivity is usually 100 Mbps or 1 to 10 Gbps.
    A Header Timeout of 10 to 20 should therefore be more than sufficient.
    In ISP implementations, or when the proxy service is situated at a remote data center, the connectivity may be just a few Mbps.
    A Header Timeout of 20 to 40 should then be more than sufficient.
    The Header Timeout should be increased only if client-side applications routinely complain of connection timeouts while trying to reach the proxy server.
    Increasing the Header Timeout without proper consideration can make the proxy service vulnerable to DDoS attacks.
     
  • KeepAlive Timeout: This parameter sets the maximum time a connection will be held in the client pool for further requests.
    This parameter can also be set differently for each profile, by creating a new entry in the General Section.
    If the host system has spare capacity, setting this to a few times the Header Timeout is a reasonable rule of thumb.
    A KeepAlive Timeout of 40 to 120 should work for most environments.
     
  • Buffer wait time: This parameter sets the interval at which the "downloading" template is sent to the client.
    Content fetched from web-sites is buffered in memory by SafeSquid so that it can be processed by virus scanners and dynamic filters like the keyword filter, document rewrite, etc.
    If the content takes some time to download, because of its size or the speed of the Internet connection, and the user would prefer a visual indication of progress, SafeSquid can issue a "downloading" template at this specified interval.
    Buffer wait time = 0 would however be more appropriate for content like executables, images, multimedia objects, etc.
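
Pulling the guidance above together, a purely illustrative starting point for a proxy on the same LAN as its clients could be the following (set per profile via http://safesquid.cfg -> config -> General Section, and refine based on observation):

Connection Timeout = 30     (standard web-sites; raise it for slow dynamic applications)
Header Timeout     = 15     (clients on the same LAN; use 20 to 40 for ISP or remote data-center setups)
KeepAlive Timeout  = 60     (a few times the Header Timeout, within the 40 to 120 range)
Buffer wait time   = 0      (for executables, images and other binary content)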

System TCP stack parameters, manageable via /proc files or sysctl

  • keepalive_time: This parameter controls the life of a socket awaiting the last_ack signal from the client side.
    It can greatly impact the overall performance and stability of your service with SafeSquid.
    On most Linux distributions you will find net.ipv4.tcp_keepalive_time = 7200.
    This means a connection that is still in a half-open or half-closed state can be reclaimed by SafeSquid, if required, within 7200 seconds.
    But since each such socket still uses memory, it may be necessary to reduce this value if you notice that the system's used-up memory is many times what can be accounted against the SafeSquid process.
    This used-up memory is generally accounted against these CLOSE_WAIT sockets.
    Try this command:

watch 'netstat -antop | grep -iE --regexp="close"'

  • It will reveal a very interesting picture. Note especially the changing information (time) in the last column. It tells you how long the socket will continue to live, and if the socket suddenly disappears before the time reaches 0, the socket was reclaimed!
    That may look like great performance, but remember it causes the memory and the socket to remain in use for 7200 seconds, i.e. 2 hours. So if the socket never has to be reclaimed, it still remains open for 2 hours, and results in memory and TCP stack resources being used (rather, wasted!).
    Most SafeSquid environments deliver better results with tcp_keepalive_time = 600 to 900. To find what works best for your network, try setting the value temporarily via /proc, and then make it permanent via /etc/sysctl.conf.

cat /proc/sys/net/ipv4/tcp_keepalive_time

or, alternatively

sysctl net.ipv4.tcp_keepalive_time

Both show the current keepalive time of your system. To modify the time to 900, you could do:

echo "900" > /proc/sys/net/ipv4/tcp_keepalive_time

This changes the value temporarily; it will be effective only until the system reboots.

or

sysctl -w net.ipv4.tcp_keepalive_time=900

The latter changes the same running value via sysctl; it is also temporary and does not survive a reboot.
To make the value permanent across reboots, add the setting to /etc/sysctl.conf, as shown below.
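
For example, a minimal sketch (the 900-second value is taken from the range suggested above; on some distributions the setting may instead belong in a file under /etc/sysctl.d/):

echo "net.ipv4.tcp_keepalive_time = 900" >> /etc/sysctl.conf
sysctl -p

sysctl -p reloads /etc/sysctl.conf immediately, so the new value also takes effect without waiting for a reboot.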

You may have to restart SafeSquid every time you change this value, for it to take effect.

I hope the above information helps you improve your system's performance and get better throughput. But please do note that the above is not exhaustive; there are quite a few more things that can impact or boost the overall performance.