Fix a stale option in the `weed mount` start command and describe the filer's default config more clearly

shenxingwuying
2024-05-07 14:40:08 +08:00
parent 2dcf09eaf9
commit 1b7568f02a

@@ -128,7 +128,7 @@ If currently only one filer is needed, just use one filer with default filer sto
You can always migrate to other scalable filer store by export and import the filer meta data. See [[Filer Stores]]
-Run `weed scaffold -config=filer` to generate an example `filer.toml` file.
+Run `weed scaffold -config=filer` to generate an example `filer.toml` file. This file chooses `leveldb2` as the default filer store, which keeps file metadata locally on disk. `leveldb2` only supports one filer.
The filer store to choose depends on your requirements, your existing data stores, etc.
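For reference, here is a minimal sketch of the `leveldb2` section in the scaffolded `filer.toml`; the exact keys and the `dir` path shown are assumptions that may vary across SeaweedFS versions:

```toml
[leveldb2]
# local on-disk metadata store; usable by only a single filer
enabled = true
dir = "./filerldb2"    # directory to store the leveldb2 files (illustrative path)
```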
@@ -165,9 +165,10 @@ The endpoint is `http://<s3_server_host>:8333`.
Run
-`weed mount -filer=<filer_host:filer_port> -chunkCacheCountLimit=xxx -chunkSizeLimitMB=4`
+`weed mount -filer=<filer_host:filer_port> -cacheCapacityMB=xxx -chunkSizeLimitMB=4 -dir=mount_point_dir`
-* `-chunkCacheCountLimit` means how many entries are cached in memory, default 1000. With the default `-chunkSizeLimitMB` of 4, it may take up to 4x1000 MB of memory if all files are bigger than 4MB.
+* `-cacheCapacityMB` sets the file chunk read cache capacity in MB, backed by a tiered cache (memory + disk). The default is 0, which disables the chunk read cache.
* `-chunkSizeLimitMB` sets the local write buffer size; large files are also chunked at this size. Default is 2 MB.
* `-replication` is the replication level for each file. It overrides the replication settings on both the filer and the master.
* `-volumeServerAccess=[direct|publicUrl|filerProxy]` is used if the master, volume servers, and filer are inside a cluster, but `weed mount` runs outside of the cluster. With this option set to `filerProxy`, only the filer needs to be exposed to the outside; all read/write access to volume servers is proxied by the filer.
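Putting these options together, here is a sketch of a full mount command; the filer address, mount point, and cache size are illustrative values to adapt to your deployment:

```sh
# filer address, mount point, and cache size below are placeholders
weed mount -filer=192.168.2.7:8888 \
  -dir=/mnt/seaweedfs \
  -cacheCapacityMB=1024 \
  -chunkSizeLimitMB=4 \
  -volumeServerAccess=filerProxy
```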