Commit

Merge pull request #11 from mgtv-tech/feat_docs
Docs update
daoshenzzg authored Jun 13, 2024
2 parents 90a18b0 + 5ac89f9 commit 4ecb73f
Showing 3 changed files with 110 additions and 113 deletions.
110 changes: 56 additions & 54 deletions README.md
<a href="https://github.com/mgtv-tech/jetcache-go/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-MIT-green" alt="License"></a>
</p>

Translations: [简体中文](README_zh.md)

# Introduction
[jetcache-go](https://github.com/mgtv-tech/jetcache-go) is a general-purpose cache access framework based on
[go-redis/cache](https://github.com/go-redis/cache). It implements the core features of the Java version of
[JetCache](https://github.com/alibaba/jetcache), including:

- ✅ Flexible combination of two-level caching: You can use memory, Redis, or your own custom storage method.
- ✅ The `Once` interface adopts the `singleflight` pattern, which is highly concurrent and thread-safe.
- ✅ By default, [MsgPack](https://github.com/vmihailenco/msgpack) is used to encode and decode values; [sonic](https://github.com/bytedance/sonic) and native `json` are available as alternatives.
- ✅ The default local cache implementation includes [TinyLFU](https://github.com/dgryski/go-tinylfu) and [FreeCache](https://github.com/coocood/freecache).
- ✅ The default distributed cache implementation is based on [go-redis/v8](https://github.com/redis/go-redis), and you can also customize your own implementation.
- ✅ You can customize the `errNotFound` error and cache empty results behind a placeholder to prevent cache penetration.
- ✅ Supports asynchronous refreshing of distributed caches.
- ✅ Metrics collection: By default, it prints statistical metrics (QPM, Hit, Miss, Query, QueryFail) through logs.
- ✅ Automatic degradation when distributed cache queries fail.
- ✅ The `MGet` interface supports the `Load` function. In a distributed caching scenario, the Pipeline mode is used to improve performance. (v1.1.0+)
- ✅ Invalidate local caches (in all Go processes) after updates (v1.1.1+)

# Installation
To start using the latest version of jetcache-go, you can import the library into your project:
```shell
go get github.com/mgtv-tech/jetcache-go
```

# Getting Started

## Basic Usage
```go
package cache_test

import (
"github.com/mgtv-tech/jetcache-go"
"github.com/mgtv-tech/jetcache-go/local"
"github.com/mgtv-tech/jetcache-go/remote"
)

var errRecordNotFound = errors.New("mock gorm.errRecordNotFound")

// ...

func Example_basicUsage() {
mycache := cache.New(
// ...
cache.WithErrNotFound(errRecordNotFound))

ctx := context.TODO()
key := "mykey:1"
obj, _ := mockDBGetObject(1)
if err := mycache.Set(ctx, key, cache.Value(obj), cache.TTL(time.Hour)); err != nil {
panic(err)
}
// ...
}

func Example_advancedUsage() {
mycache := cache.New(
// ...
cache.WithRefreshDuration(time.Minute))

ctx := context.TODO()
key := "mykey:1"
obj := new(object)
if err := mycache.Once(ctx, key, cache.Value(obj), cache.TTL(time.Hour), cache.Refresh(true),
cache.Do(func(ctx context.Context) (any, error) {
return mockDBGetObject(1)
})); err != nil {
panic(err)
}
fmt.Println(obj)
// Output: &{mystring 42}

mycache.Close()
}

func Example_mGetUsage() {
// ...
for _, id := range ids {
b.WriteString(fmt.Sprintf("%v", ret[id]))
}
fmt.Println(b.String())
// Output: &{mystring 1}&{mystring 2}<nil>

cacheT.Close()
}

func Example_syncLocalUsage() {
// ...
}
```
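
For a compact end-to-end reference, the sketch below wires up a two-level cache and exercises `Set` and `Once`. The constructor names `remote.NewGoRedisV8Adaptor`, `local.NewTinyLFU`, the `cache.WithLocal` option, the go-redis import, and the Redis address are assumptions made for illustration — check the `remote` and `local` packages for the exact APIs.
```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/go-redis/redis/v8"
	"github.com/mgtv-tech/jetcache-go"
	"github.com/mgtv-tech/jetcache-go/local"
	"github.com/mgtv-tech/jetcache-go/remote"
)

type object struct {
	Str string
	Num int
}

func main() {
	ring := redis.NewRing(&redis.RingOptions{
		Addrs: map[string]string{"server1": "localhost:6379"}, // assumed address
	})

	mycache := cache.New(cache.WithName("example"),
		cache.WithRemote(remote.NewGoRedisV8Adaptor(ring)),    // assumed adapter constructor
		cache.WithLocal(local.NewTinyLFU(10000, time.Minute)), // assumed local cache constructor
		cache.WithRemoteExpiry(time.Hour))
	defer mycache.Close()

	ctx := context.TODO()
	key := "mykey:1"

	// Write through both cache levels.
	if err := mycache.Set(ctx, key, cache.Value(&object{Str: "mystring", Num: 42}), cache.TTL(time.Hour)); err != nil {
		panic(err)
	}

	// Read with singleflight; the Do func only runs on a cache miss.
	got := new(object)
	if err := mycache.Once(ctx, key, cache.Value(got), cache.Do(func(ctx context.Context) (any, error) {
		return &object{Str: "mystring", Num: 42}, nil // stand-in for a real DB query
	})); err != nil {
		panic(err)
	}
	fmt.Println(got)
}
```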

## Configuration Options
```go
// Options are used to store cache options.
Options struct {
// ...
}
```
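
The struct fields above map to the `WithXxx` functional options used throughout this README. A snippet combining the options that appear in the examples (the values are illustrative, and `errRecordNotFound` is the marker error from the basic-usage example):
```go
mycache := cache.New(cache.WithName("any"),
	cache.WithErrNotFound(errRecordNotFound),        // marker error used to cache "not found" results
	cache.WithRemoteExpiry(time.Minute),             // TTL for entries written to the remote cache
	cache.WithRefreshDuration(time.Minute),          // asynchronous refresh interval
	cache.WithStopRefreshAfterLastAccess(time.Hour)) // stop refreshing keys that are no longer accessed
```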

## Cache Metrics Collection and Statistics
You can implement the `stats.Handler` interface and register it with the Cache component to customize metric collection,
for example, using [Prometheus](https://github.com/prometheus/client_golang) to collect metrics. We have provided a
default implementation that logs the statistical metrics, as shown below:
```shell
2023/09/11 16:42:30.695294 statslogger.go:178: [INFO] jetcache-go stats last 1m0s.
cache       |         qpm|   hit_ratio|         hit|        miss|       query|  query_fail
------------+------------+------------+------------+------------+------------+------------
bench_remote|        5153|      95.03%|        4897|         256|           -|           -
------------+------------+------------+------------+------------+------------+------------
```

## Custom Logger
```go
import "github.com/mgtv-tech/jetcache-go/logger"

// Set your Logger
logger.SetDefaultLogger(l logger.Logger)
```

## Custom Encoding and Decoding
```go
import (
"github.com/mgtv-tech/jetcache-go"
"github.com/mgtv-tech/jetcache-go/encoding"
)

// Register your codec
encoding.RegisterCodec(codec Codec)

// Set your codec name
mycache := cache.New("any",
cache.WithRemote(...),
cache.WithCodec(yourCodecName string))
```

# Usage Scenarios

## Automatic Cache Refresh
`jetcache-go` provides automatic cache refresh to prevent the avalanche of database load that can follow cache expiry. It suits scenarios with a small number of keys, low real-time requirements, and expensive load operations. The code below refreshes the cache every minute and stops refreshing once a key has not been accessed for 1 hour. If the cache is Redis, or the last level of a multi-level cache is Redis, cache loading is globally unique: no matter how many servers are running, only one of them refreshes a given key at a time, which reduces the load on the backend.
```go
mycache := cache.New(cache.WithName("any"),
// ...
// cache.WithRefreshDuration sets the asynchronous refresh interval
cache.WithRefreshDuration(time.Minute),
// cache.WithStopRefreshAfterLastAccess sets the time to cancel the refresh task after the cache key is not accessed
cache.WithStopRefreshAfterLastAccess(time.Hour))

// The `Once` interface enables automatic refresh via `cache.Refresh(true)`
err := mycache.Once(ctx, key, cache.Value(obj), cache.Refresh(true), cache.Do(func(ctx context.Context) (any, error) {
return mockDBGetObject(1)
}))
```

## MGet Batch Query
`MGet` combines Go generics with the `Load` function to provide a friendly way to batch-query the entities for a list of IDs through a multi-level cache. If the cache is Redis, or the last level of a multi-level cache is Redis, `Pipeline` is used for the read and write operations to improve performance. Note that for abnormal scenarios (I/O errors, serialization errors, and so on), the design favors returning a partial (lossy) result over failing outright, in order to prevent cache penetration.
```go
mycache := cache.New(cache.WithName("any"),
// ...
cache.WithRemoteExpiry(time.Minute),
)
cacheT := cache.NewT[int, *object](mycache)

ctx := context.TODO()
key := "mykey"
ids := []int{1, 2, 3}

ret := cacheT.MGet(ctx, key, ids, func(ctx context.Context, ids []int) (map[int]*object, error) {
return mockDBMGetObject(ids)
})
```
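
The result is a map keyed by ID (`map[int]*object` in this example); IDs that could not be loaded are either absent or map to `nil`, as the `<nil>` in the example output above shows. A minimal sketch of consuming it:
```go
for _, id := range ids {
	obj, ok := ret[id]
	if !ok || obj == nil {
		continue // not found in cache or via Load; served as a lossy miss
	}
	fmt.Printf("%d => %+v\n", id, obj)
}
```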

## Codec Selection
`jetcache-go` implements three serialization and deserialization (codec) methods by default: [sonic](https://github.com/bytedance/sonic), [MsgPack](https://github.com/vmihailenco/msgpack), and native `json`.

**Selection Guide:**

- **For high-performance encoding and decoding:** If the local cache hit rate is extremely high, but the deserialization operation of converting byte arrays to objects in the local cache consumes a lot of CPU, choose `sonic`.
- **For balanced performance and minimal storage space:** Choose `MsgPack`; values larger than 64 bytes are additionally compressed with `snappy`.

> Tip: Remember to import the necessary packages as needed to register the codec.
```go
_ "github.com/mgtv-tech/jetcache-go/encoding/sonic"
```
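
Putting the two pieces together: the blank import registers the codec, and `WithCodec` selects it by name. The name `"sonic"` below is an assumption for illustration — check the `encoding/sonic` package for the name it actually registers:
```go
import (
	"github.com/mgtv-tech/jetcache-go"
	_ "github.com/mgtv-tech/jetcache-go/encoding/sonic" // registers the sonic codec
)

mycache := cache.New(cache.WithName("any"),
	cache.WithCodec("sonic")) // assumed registered codec name
```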