Running a Purge and a Get at the same time could allow a stale key to be
pulled back into the cache if the request was initiated by a node that had
already been cleared and sent to a node that hadn't. This patch mitigates
that by adding a `generation` field that tracks the number of purges issued.
A request is fulfilled successfully only if both ends of the request are on
the same generation.
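As a rough illustration of the idea (the types, field names, and wire format below are stand-ins, not the package's actual definitions):

```go
package generationsketch

import (
	"errors"
	"sync/atomic"
)

// group is a toy stand-in: every purge bumps generation, and peers refuse to
// answer requests issued under a different generation.
type group struct {
	generation int64 // incremented on every purge/clear
}

// getRequest carries the generation the requesting node observed when it
// issued the request.
type getRequest struct {
	Key        string
	Generation int64
}

var errStaleGeneration = errors.New("peer is on a different generation")

// serveGet is what the receiving node might do: if the caller purged (or
// missed a purge) relative to this node, fail instead of handing back a
// value that could repopulate a cleared cache.
func (g *group) serveGet(req getRequest) ([]byte, error) {
	if req.Generation != atomic.LoadInt64(&g.generation) {
		return nil, errStaleGeneration
	}
	// ...look up the key locally and return it...
	return []byte("value-for-" + req.Key), nil
}

// clear purges local state and moves this node to the next generation.
func (g *group) clear() {
	atomic.AddInt64(&g.generation, 1)
	// ...purge local caches...
}
```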
While the LRU package can purge all items from the cache, this
functionality was not exposed through `ProtoGetter`, making it impossible
to clear the cache without restarting all peers. This change adds a
`Clear()` method to `ProtoGetter`, enabling the cache to be cleared with
no downtime.
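To sketch the shape of the change, here is a self-contained illustration of a peer interface that also supports a clear call, plus a fan-out that purges every peer. The interface and helper below are simplified stand-ins, not the package's actual `ProtoGetter` definition.

```go
package clearsketch

import "context"

// protoGetter is a simplified stand-in for the peer interface: alongside
// fetching keys, a peer can now be asked to drop everything it caches.
type protoGetter interface {
	Get(ctx context.Context, key string) ([]byte, error)
	Clear(ctx context.Context) error
}

// clearAllPeers fans a purge out to every peer, so the whole cluster can be
// cleared without restarting any process.
func clearAllPeers(ctx context.Context, peers []protoGetter) error {
	for _, p := range peers {
		if err := p.Clear(ctx); err != nil {
			return err
		}
	}
	return nil
}
```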
Provides backwards compatibility with existing logrus usage via the `SetLogger()` method.
Introduces a `Logger` interface that can be implemented for other
structured loggers such as zerolog.
Adds a `SetLoggerFromLogger()` method that allows the caller to pass in an
implementation of the new `Logger` interface.
Bumps the golang.org/x/sys dependency since tests fail to run on Go 1.18 with the old version.
Adds a test case for `LogrusLogger`.
Adds a benchmark, and adds a `WithFields` method since it appears to be considerably faster.
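As an illustration of what a zerolog-backed implementation might look like, here is a self-contained sketch. The interface below is a simplified stand-in with only two methods, so check the package's actual `Logger` interface (and the `SetLoggerFromLogger()` signature) before wiring it in.

```go
package loggersketch

import (
	"os"

	"github.com/rs/zerolog"
)

// Logger is a simplified stand-in for the package's structured-logging
// interface; the real interface in groupcache may expose more methods.
type Logger interface {
	Printf(format string, args ...interface{})
	WithFields(fields map[string]interface{}) Logger
}

// zerologAdapter is the kind of adapter SetLoggerFromLogger() is meant to
// accept: it satisfies Logger on top of zerolog.
type zerologAdapter struct {
	log zerolog.Logger
}

func NewZerologAdapter() Logger {
	return &zerologAdapter{log: zerolog.New(os.Stderr).With().Timestamp().Logger()}
}

func (z *zerologAdapter) Printf(format string, args ...interface{}) {
	z.log.Info().Msgf(format, args...)
}

func (z *zerologAdapter) WithFields(fields map[string]interface{}) Logger {
	return &zerologAdapter{log: z.log.With().Fields(fields).Logger()}
}
```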
Introduces a new metric, `GetFromPeersSlowestDuration`, which records the
slowest call made to `getFromPeers`.
Introduces `SetLogger()` to set a logger so that `getFromPeers` errors can be logged.
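As an illustration of what "slowest call" tracking means, here is a self-contained sketch of recording a running maximum duration around a peer call; the types and helper names are stand-ins, not the package's actual stats plumbing.

```go
package slowestsketch

import (
	"sync/atomic"
	"time"
)

// stats is a toy stand-in for the group's stats: it keeps the slowest
// observed duration, in nanoseconds, as a lock-free running maximum.
type stats struct {
	getFromPeersSlowestNanos int64
}

// observe records d only if it is slower than anything seen so far.
func (s *stats) observe(d time.Duration) {
	for {
		cur := atomic.LoadInt64(&s.getFromPeersSlowestNanos)
		if int64(d) <= cur {
			return
		}
		if atomic.CompareAndSwapInt64(&s.getFromPeersSlowestNanos, cur, int64(d)) {
			return
		}
	}
}

// timePeerCall wraps a peer fetch and feeds its duration into the stat, the
// way a slowest-duration metric would be maintained.
func timePeerCall(s *stats, fetch func() error) error {
	start := time.Now()
	err := fetch()
	s.observe(time.Since(start))
	return err
}
```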
* `Get()` now returns immediately when the context is done during a groupcache peer
conversation (see the sketch after this list). Previously `Get()` would call the `Getter` with a done context.
* The transport is now associated with the peer `httpGetter` so we take advantage of
connection reuse. This lowers the impact on DNS and improves performance for
high-request-volume, low-latency applications.
* The hotcache is now always populated. A more complex algorithm is unnecessary
when the LRU cache ensures the most used values remain in the cache. The
evict code ensures the hotcache does not overcrowd the maincache.
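A caller-side sketch of the first point: with a deadline on the context, `Get()` fails fast instead of invoking the `Getter` with an already-done context. The group is assumed to be created elsewhere, and the module path shown is an assumption; adjust it to the groupcache fork in use.

```go
package fetchsketch

import (
	"context"
	"time"

	// Module path is an assumption; use whichever groupcache fork you depend on.
	groupcache "github.com/mailgun/groupcache/v2"
)

// fetchUser performs a bounded lookup against an existing group.
func fetchUser(group *groupcache.Group, id string) ([]byte, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
	defer cancel()

	var data []byte
	// If the deadline expires during the peer conversation, Get returns
	// promptly with the context error instead of calling the Getter with a
	// done context.
	if err := group.Get(ctx, "user:"+id, groupcache.AllocatingByteSliceSink(&data)); err != nil {
		return nil, err
	}
	return data, nil
}
```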
Multiple requests for the same key could simultaneously result in a `load()`. Only requests that
enter singleflight `Do` concurrently would be deduped, so it was possible for
`populateCache` to be called multiple times for the same key. That
would result in overcounting `nbytes`, eventually leading to a
livelock where `nbytes > cacheBytes` but there were no entries in the
cache.
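A self-contained sketch of the problem and the usual guard against it: singleflight only collapses calls that overlap in time, so the loader has to re-check the cache inside `Do` before populating. The `group` type and its helpers below are toy stand-ins for the package internals, not the actual fix.

```go
package dedupsketch

import (
	"context"
	"sync"

	"golang.org/x/sync/singleflight"
)

// group is a toy stand-in; lookupCache, getLocally, and populateCache mimic
// the package internals just enough to show the nbytes accounting issue.
type group struct {
	mu        sync.Mutex
	cache     map[string][]byte
	nbytes    int64
	loadGroup singleflight.Group
}

func (g *group) lookupCache(key string) ([]byte, bool) {
	g.mu.Lock()
	defer g.mu.Unlock()
	v, ok := g.cache[key]
	return v, ok
}

func (g *group) populateCache(key string, val []byte) {
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.cache == nil {
		g.cache = make(map[string][]byte)
	}
	g.cache[key] = val
	// Calling this twice for the same key adds to nbytes twice while the map
	// still holds one entry -- exactly the overcounting described above.
	g.nbytes += int64(len(key) + len(val))
}

func (g *group) getLocally(ctx context.Context, key string) ([]byte, error) {
	return []byte("value-for-" + key), nil
}

// load collapses concurrent loads with singleflight and re-checks the cache
// inside Do: calls that do not overlap are not deduped, so without the
// re-check the same key could be populated (and counted) more than once.
func (g *group) load(ctx context.Context, key string) ([]byte, error) {
	v, err, _ := g.loadGroup.Do(key, func() (interface{}, error) {
		if val, ok := g.lookupCache(key); ok {
			return val, nil
		}
		val, err := g.getLocally(ctx, key)
		if err != nil {
			return nil, err
		}
		g.populateCache(key, val)
		return val, nil
	})
	if err != nil {
		return nil, err
	}
	return v.([]byte), nil
}
```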
Change-Id: I5b304ce82041c1310c31b662486744e86509cc53