If your application needs an LRU cache that works at the speed of light, the Cachelot library is for you.
The library works within a fixed amount of memory. No garbage collector. Small metadata, near-perfect memory utilization (overhead is 5-7% of the total memory).
Beyond memory management, Cachelot ensures smooth responsiveness, with no latency "gaps" on either read or write operations.
Cachelot can work in one of two modes: as a consistent cache, returning an error when it runs out of memory, or as an evicting cache, removing old items to free space for new ones.
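To illustrate the difference between the two modes, here is a minimal sketch of a fixed-capacity LRU cache with a switchable out-of-memory policy. This is a hypothetical illustration built on `std::list` and `std::unordered_map`, not Cachelot's actual API or memory layout (Cachelot manages raw memory itself; the class name, capacity-in-items model, and `evict_on_full` flag are all assumptions made for the example).

```cpp
// Hypothetical sketch, NOT Cachelot's API: a minimal LRU cache whose
// behavior when full is either "report an error" (consistent-cache mode)
// or "evict the least recently used item" (evicting mode).
#include <cassert>
#include <list>
#include <optional>
#include <string>
#include <unordered_map>
#include <utility>

class LruCache {
public:
    LruCache(std::size_t capacity, bool evict_on_full)
        : capacity_(capacity), evict_on_full_(evict_on_full) {}

    // Returns false when the cache is full and eviction is disabled
    // (modeling the "return an error when out of memory" mode).
    bool put(const std::string & key, const std::string & value) {
        auto found = index_.find(key);
        if (found != index_.end()) {
            // Overwrite existing item and mark it most recently used.
            found->second->second = value;
            items_.splice(items_.begin(), items_, found->second);
            return true;
        }
        if (items_.size() >= capacity_) {
            if (!evict_on_full_) {
                return false;  // consistent mode: report "out of memory"
            }
            // Evicting mode: drop the least recently used item (list back).
            index_.erase(items_.back().first);
            items_.pop_back();
        }
        items_.emplace_front(key, value);
        index_[key] = items_.begin();
        return true;
    }

    // Lookup marks the item as most recently used.
    std::optional<std::string> get(const std::string & key) {
        auto found = index_.find(key);
        if (found == index_.end()) {
            return std::nullopt;
        }
        items_.splice(items_.begin(), items_, found->second);
        return found->second->second;
    }

private:
    std::size_t capacity_;
    bool evict_on_full_;
    // Front of the list = most recently used; back = eviction candidate.
    std::list<std::pair<std::string, std::string>> items_;
    std::unordered_map<std::string, decltype(items_)::iterator> index_;
};
```

A real implementation working in bytes rather than item counts (as Cachelot does) would check the size of the incoming item against remaining arena space instead of `items_.size() >= capacity_`, but the two policies on the "full" branch are the same.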
The code is written in highly optimized C++. You can use Cachelot on platforms with limited resources, like IoT devices or handhelds.
All this allows you to store and access three million items per second (depending on the CPU cache size). Maybe 3M ops/sec doesn't sound like such a large number, but it means ~333 nanoseconds are spent on a single operation, while a single RAM reference costs ~100 nanoseconds. Only about 3 RAM references per request; can you compete with that?
There are benchmarks in the repository; we encourage you to try them for yourself.
It is possible to create bindings and use Cachelot from your programming language of choice: JS, Python, Go, Java, Ruby, Erlang, etc.
cachelotd is a Memcached-compatible distributed caching server that aims for better hardware utilization and performance.
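Because cachelotd speaks the Memcached text protocol, any existing memcached client or even a raw TCP session should work against it. A sketch of such a session is below; the port 11211 is Memcached's default and is assumed here for cachelotd as well (check the server's options for the actual listen address).

```
$ telnet localhost 11211
set greeting 0 0 5       # set <key> <flags> <exptime> <bytes>
hello
STORED
get greeting
VALUE greeting 0 5
hello
END
```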
Single-threaded by design, Cachelot can keep the CPU busy 99% of the time, wasting no cycles on locks or needless RAM access. Scalable to 1024 cores. NUMA-aware. Simple.
Cachelot keeps more items in the same amount of RAM than Memcached does (saving up to 15% of RAM, depending on the data and storage patterns).