Introduction
Emcache is a high-performance asynchronous Python client for Memcached. It offers multi-host support with traffic distributed via Rendezvous hashing, command options such as noreply, exptime, and flags, and security through SSL/TLS and SASL authentication. It supports autodiscovery for AWS/GCP clusters and includes an adaptive connection pool that scales with traffic. Emcache also prioritizes health monitoring, allowing unhealthy nodes to be excluded from operations.
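Before digging into the pool internals, here is a minimal sketch of the high-level API; the host address is a placeholder, and create_client is the entry point that builds and manages the connection pools for you:

import asyncio

import emcache


async def main():
    client = await emcache.create_client([emcache.MemcachedHostAddress("x.x.x.x", 11211)])
    await client.set(b"greeting", b"hello")
    item = await client.get(b"greeting")  # an Item on a hit, None on a miss
    print(item.value if item is not None else None)
    await client.close()


asyncio.run(main())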
Connection Pool
A connection pool in Emcache is a mechanism that maintains a collection of open connections to Memcached servers. Instead of opening and closing a connection for each request, the pool keeps a bounded set of open connections that can be reused. New connections are created as traffic increases (up to a configured maximum), and unused connections are purged after a configurable period of inactivity.
Emcache connection pool
In Emcache, the connection pool is a key component responsible for managing and maintaining multiple connections to Memcached servers. It ensures efficient usage of resources by reusing a set of connections rather than creating a new one every time a request is made.
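When the high-level client is used, the same pool behaviour is tuned through keyword arguments to create_client. The sketch below uses illustrative values; the parameter names mirror the ConnectionPool options shown in the next snippet and may differ slightly between emcache versions:

import emcache


async def build_client():
    # Each Memcached node gets its own pool, sized between min_connections and
    # max_connections; idle connections are purged after purge_unused_connections_after seconds.
    return await emcache.create_client(
        [emcache.MemcachedHostAddress("x.x.x.x", 11211)],
        max_connections=10,
        min_connections=1,
        purge_unused_connections_after=60.0,
        connection_timeout=10.0,
    )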
Initialize the emcache connection pool client
from emcache import MemcachedHostAddress
# ConnectionPool is internal to emcache; depending on the version it may also
# be exported from the top-level package.
from emcache.connection_pool import ConnectionPool

_pool = None


def emcache_pool_client():
    # Build the pool once and reuse it; a new pool per request defeats connection reuse.
    global _pool
    if _pool is None:
        _pool = ConnectionPool(
            address=MemcachedHostAddress("x.x.x.x", 11211),
            max_connections=10,
            min_connections=1,
            purge_unused_connections_after=60.0,
            on_healthy_status_change_cb=lambda status: print(f"Pool health status: {status}"),
            connection_timeout=10.0,
            ssl=False,
            ssl_verify=False,
            ssl_extra_ca=None,
            username=None,
            password=None,
        )
    return _pool
Insert/Set the key and value.
async def insert(key: str, value: bytes, exptime: int = 0):
    try:
        async with emcache_pool_client().create_connection_context() as connection:
            # "set" stores unconditionally; exptime=0 means the item never expires.
            await connection.storage_command(
                b"set", key.encode("utf-8"), value, flags=0, exptime=exptime, noreply=False, cas=None
            )
    except Exception as e:  # wraps StorageCommandError, ValueError, connection failures, ...
        raise ValueError(f"emcache insertion failed: {e}")
Get the value by key
async def query(key: str):
    try:
        async with emcache_pool_client().create_connection_context() as connection:
            # fetch_command returns parallel lists (keys, values, flags, cas);
            # index 1 holds the values of the keys that were found.
            result = await connection.fetch_command(b"get", [key.encode("utf-8")])
            if len(result[1]) > 0:
                return result[1][0]
            return None
    except Exception as e:  # wraps protocol errors, ValueError, connection failures, ...
        raise ValueError(f"emcache get failed: {e}")
Autobatching
Autobatching in Emcache automatically groups multiple requests into a single batch, reducing the number of individual operations sent to the Memcached server. Instead of sending each request separately, autobatching combines them, which makes more efficient use of the network and speeds up execution in high-throughput scenarios. The feature is particularly useful for applications that issue many small, frequent requests, since it cuts round trips and per-request latency.
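In practice, autobatching is enabled on the high-level client rather than per command. The sketch below assumes your emcache version exposes the autobatching flag on create_client, in which case concurrent gets issued in the same event-loop iteration are coalesced into one multi-key command:

import asyncio

import emcache


async def autobatching_demo():
    client = await emcache.create_client(
        [emcache.MemcachedHostAddress("x.x.x.x", 11211)], autobatching=True
    )
    # Both gets are sent to the server as a single batched command.
    items = await asyncio.gather(client.get(b"key1"), client.get(b"key2"))
    print([item.value if item is not None else None for item in items])
    await client.close()


asyncio.run(autobatching_demo())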
GET with auto-batching
The example below fetches two keys with a single multi-key get over one connection, the same round-trip saving that autobatching applies automatically to concurrent requests.
async def get(key1: str, key2: str):
    try:
        async with emcache_pool_client().create_connection_context() as connection:
            keys = [key1.encode("utf-8"), key2.encode("utf-8")]
            # The server only returns entries for keys it found, so map the returned
            # keys (index 0) back to their values (index 1) instead of relying on position.
            result = await connection.fetch_command(b"get", keys)
            found = dict(zip(result[0], result[1]))
            return found.get(keys[0]), found.get(keys[1])
    except Exception as e:  # wraps protocol errors, connection failures, ...
        raise ValueError(f"emcache get multiple keys failed: {e}")
Conclusion
Emcache provides a powerful, asynchronous client for Memcached, ensuring high performance and scalability through features like adaptive connection pooling and autobatching. These optimizations reduce latency and improve resource efficiency in distributed caching environments. With Emcache, developers can manage Memcached clusters more effectively, even under high-traffic conditions.