Distributed, Prometheus-compatible, real-time, in-memory, massively scalable, multi-schema time series / event / operational database.
_______ __ ____ ____
/ ____(_) /___ / __ \/ __ )
/ /_ / / / __ \/ / / / __ |
/ __/ / / / /_/ / /_/ / /_/ /
/_/ /_/_/\____/_____/_____/

FiloDB is an open-source distributed, real-time, in-memory, massively scalable, multi-schema time series / event / operational database with Prometheus query support and some Spark support as well.
The normal configuration for real-time ingestion is deployment as stand-alone processes in a cluster, ingesting directly from Apache Kafka. The processes form a cluster using peer-to-peer Akka Cluster technology.
Overview presentation -- see the docs folder for design docs.
To compile the .mermaid source files to .png's, install the Mermaid CLI.
ccm create v39_single -v 3.9 -n 1 -s
Optional:
Clone the project and cd into the project directory,
$ git clone https://github.com/filodb/FiloDB.git
$ cd FiloDB
Then build filo-cli (see below) and also sbt spark/assembly.
Follow the instructions below to set up an end-to-end local environment.
This section describes how you can run an end-to-end test locally on a MacBook by ingesting time series data into the FiloDB in-memory store and querying it using PromQL.
Use your favorite package manager to install and set up the prerequisite infrastructure. Kafka 0.10.2+ or 0.11 can be used.
brew install kafka
brew services start zookeeper
brew services start kafka
You may see this error in the Kafka log if you are on an Apple Silicon (M1) Mac.
/opt/homebrew/Cellar/kafka/3.3.1_1/libexec/bin/kafka-run-class.sh: line 342: /opt/homebrew/@@HOMEBREW_JAVA@@/bin/java: No such file or directory
To resolve the issue, run brew bottle to build a bottle (installation file) locally and reinstall Kafka from it.
brew bottle --skip-relocation kafka
brew reinstall `ls kafka*bottle*`
Newer versions of ZooKeeper start an admin HTTP server on port 8080, which conflicts with the FiloDB servers.
To fix this add the following to zoo.cfg (/opt/homebrew/etc/zookeeper/zoo.cfg if installing via homebrew):
# Disable admin server on 8080
admin.enableServer=false
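The edit above can be scripted idempotently. A minimal sketch follows; it uses a local ./zoo.cfg stand-in so it can run anywhere, whereas the real file for a Homebrew install is /opt/homebrew/etc/zookeeper/zoo.cfg:

```shell
# Append the setting only if it is not already present.
# NOTE: ./zoo.cfg is a stand-in path for illustration; point this at
# /opt/homebrew/etc/zookeeper/zoo.cfg for a real Homebrew install.
ZOO_CFG=./zoo.cfg
touch "$ZOO_CFG"
if ! grep -q '^admin.enableServer=false' "$ZOO_CFG"; then
  printf '\n# Disable admin server on 8080\nadmin.enableServer=false\n' >> "$ZOO_CFG"
fi
```

Running it a second time leaves the file unchanged, so it is safe to include in a setup script.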
Create a new Kafka topic with 4 partitions. This is where time series data will be ingested for FiloDB to consume.
kafka-topics --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 4 --topic timeseries-dev
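To confirm the topic was created with the expected partition count, you can parse the output of kafka-topics --describe. The sketch below parses a canned sample line, since a live broker may not be available:

```shell
# Parse PartitionCount from `kafka-topics --describe`-style output.
# The sample line is canned; with a live broker you would use:
#   kafka-topics --describe --bootstrap-server localhost:9092 --topic timeseries-dev
describe_out="Topic: timeseries-dev  PartitionCount: 4  ReplicationFactor: 1"
parts=$(printf '%s' "$describe_out" | sed -n 's/.*PartitionCount: \([0-9]*\).*/\1/p')
echo "partitions: $parts"
```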
Download and start Cassandra 2.1 or a more recent version (Cassandra 3 and above recommended).
bin/cassandra
Install Cassandra using whichever tool you're most familiar with.
For instance, one easy way to install it is via brew:
brew install cassandra
If you are working on an Apple M1 laptop, you may need to apply the workaround mentioned here to move past the JNA issue.
cp lib/cassandra/jna-5.10.0.jar /opt/homebrew/Cellar/cassandra/4.0.7/libexec/jna-5.6.0.jar
Start Cassandra
brew services start cassandra
Build the required projects
sbt standalone/assembly cli/assembly gateway/assembly
First initialize the keyspaces and tables in Cassandra.
./scripts/schema-create.sh filodb_admin filodb filodb_downsample prometheus 4 1,5 > /tmp/ddl.cql
cqlsh -f /tmp/ddl.cql
Verify that tables were created in the filodb, filodb_downsample and filodb_admin keyspaces using cqlsh:
First type cqlsh to start the Cassandra CLI, then check the keyspaces by entering DESCRIBE keyspaces.
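You can also check non-interactively with cqlsh -e "DESCRIBE keyspaces". A sketch that greps for the three expected keyspaces follows; it uses a canned sample of the output, since a live cluster may not be running:

```shell
# With a live cluster you would capture real output instead:
#   sample=$(cqlsh -e "DESCRIBE keyspaces")
sample="system_schema filodb filodb_admin filodb_downsample system"
for ks in filodb filodb_admin filodb_downsample; do
  case " $sample " in
    *" $ks "*) echo "found keyspace: $ks" ;;
    *)         echo "missing keyspace: $ks" ;;
  esac
done
```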
The script below brings up the FiloDB dev standalone server and then sets up the prometheus dataset. (NOTE: if you previously started FiloDB and have not cleared the metadata, the -s flag is not needed, as FiloDB will recover previous ingestion configs from Cassandra. This script targets the develop branch.)
./filodb-dev-start.sh -o 0
The -o argument is the ordinal of the FiloDB server; it determines which shards are assigned to the server.
Note that the above script starts the server with configuration at conf/timeseries-filodb-server.conf. This config
file refers to the following datasets that will be loaded on bootstrap:
conf/timeseries-dev-source.conf
For queries to work properly you'll want to start a second server to serve all the shards:
./filodb-dev-start.sh -o 1
To quickly verify that both servers are up and set up for ingestion, do this (the output below was formatted using | jq '.', ports may vary):
curl localhost:8080/api/v1/cluster/prometheus/status
{
"status": "success",
"data": [
{
"shard": 0,
"status": "ShardStatusActive",
"address": "akka://filo-standalone"
},
{
"shard": 1,
"status": "ShardStatusActive",
"address": "akka://filo-standalone"
},
{
"shard": 2,
"status": "ShardStatusActive",
"address": "akka.tcp://filo-standalone@127.0.0.1:57749"
},
{
"shard": 3,
"status": "ShardStatusActive",
"address": "akka.tcp://filo-standalone@127.0.0.1:57749"
}
]
}
You can also check the server logs at logs/filodb-server-N.log.
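If jq is not available, the active-shard count can be extracted from the status response with standard tools. A sketch using a canned response follows (substitute the live curl output for the variable):

```shell
# The variable below stands in for:
#   status=$(curl -s localhost:8080/api/v1/cluster/prometheus/status)
status='{"status":"success","data":[{"shard":0,"status":"ShardStatusActive"},{"shard":1,"status":"ShardStatusActive"},{"shard":2,"status":"ShardStatusActive"},{"shard":3,"status":"ShardStatusActive"}]}'
# Count occurrences of the active status marker.
active=$(printf '%s' "$status" | grep -o 'ShardStatusActive' | wc -l | tr -d ' ')
echo "active shards: $active"   # prints "active shards: 4"
```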
Now run the time series generator. This will ingest 20 time series (the default) with 100 samples each into the Kafka topic with current timestamps. The required argument is the path to the source config. Use --help for all the options.
./dev-gateway.sh --gen-gauge-data conf/timeseries-dev-source.conf
NOTE: Check logs/gateway-server.log for logs.
At this point, you should see a message like this in the server logs: KAMON counter name=memstore-rows-ingested count=4999
Now you are ready to query FiloDB for the ingested data. The following command should return a matching subset of the data that was ingested by the producer.
./filo-cli --host 127.0.0.1 --dataset prometheus --promql 'heap_usage0{_ws_="demo", _ns_="App-2"}'
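Since the generated series carry namespace labels like the App-2 shown above, the same query can be issued per namespace. A minimal sketch that prints the commands instead of executing them (the namespace list is illustrative; a running server is assumed for the real queries):

```shell
# Print (not execute) one filo-cli query per assumed namespace.
# App-0 and App-1 are hypothetical here; App-2 is the one shown above.
for ns in App-0 App-1 App-2; do
  echo ./filo-cli --host 127.0.0.1 --dataset prometheus \
       --promql "heap_usage0{_ws_=\"demo\", _ns_=\"$ns\"}"
done
```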
You can also look at Cassandra to check for persisted data. Look at the tables in filodb and filodb-admin keyspaces.
If the above does not work, try the following:
Set delete.topic.enable=true in the Kafka server config, then delete the topic: kafka-topics.sh --bootstrap-server localhost:9092 --topic timeseries-dev --delete
Run ./filodb-dev-stop.sh, restart the FiloDB instances as above, then re-run ./dev-gateway.sh --gen-gauge-data.
You can check consumption by running the TestConsumer, like this: java -Xmx4G -Dconfig.file=conf/timeseries-filodb-server.conf -cp standalone/target/scala-2.12/standalone-assembly-0.8-SNAPSHOT.jar filodb.kafka.TestConsumer conf/timeseries-dev-source.conf. Also, the memstore_rows_ingested metric, which is logged to logs/filodb-server-N.log, should become nonzero.
To stop the dev server (note that this will stop all the FiloDB servers if multiple are running):
./filodb-dev-stop.sh
FiloDB includes a Gateway server that listens to application metrics and data on a TCP port, converts the data to its internal format, shards it properly, and sends it to Kafka.
STATUS: Currently the only supported format is Influx Line Protocol. The only tested configuration is using Telegraf with a Prometheus endpoint source and a socket writer using ILP protocol.
The following will scrape metrics from FiloDB using its Prometheus metrics endpoint, and forward them to Kafka to be queried by FiloDB itself :)
./dev-gateway.sh
Start Telegraf with the config file conf/telegraf.conf: telegraf --config conf/telegraf.conf. This config file scrapes from a Prom endpoint at port 9095 and forwards it using ILP format to a TCP socket at 8007, which is the gateway default.
Now, metrics from the application having a Prom endpoint at port 9095 will be streamed into Kafka and FiloDB.
Querying the total number of ingesting time series for the last 5 minutes, every 10 seconds:
./filo-cli --host 127.0.0.1 --dataset prometheus --promql 'sum(num_ingesting_partitions{_ws_="local_test",_ns_="filodb"})' --minutes 5
Note that histograms are ingested using FiloDB's optimized histogram format, which leads to very large savings in space. For example, querying the 90th percentile of the size of chunks written to Cassandra over the last 5 minutes:
./filo-cli --host 127.0.0.1 --dataset prometheus --promql 'histogram_quantile(0.9, sum(rate(chunk_bytes_per_call{_ws_="local_test",_ns_="filodb"}[3m])))' --minutes 5
Here is how you display the raw histogram data for the same:
./filo-cli --host 127.0.0.1 --dataset prometheus --promql 'chunk_bytes_per_call{_ws_="local_test",_ns_="filodb"}' --minutes 5
To bring up a local cluster for serving downsampled data:
./filodb-dev-start.sh -o 0 -d
For subsequent servers, change the log file suffix with the -l option:
./filodb-dev-start.sh -o 1 -d
If you had run the unit test DownsamplerMainSpec which populates data into the downsample
dataset, you can query downsample results by visiting the following URL:
curl "http://localhost:9080/promql/prometheus/api/v1/query_range?query=my_counter\{_ws_='my_ws',_ns_='my_ns'\}&start=74372801&end=74373042&step=10&verbose=true&spread=2"
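The query_range call above takes the standard Prometheus range-query parameters (query, start, end, step) plus FiloDB's verbose and spread. A sketch that assembles the same URL from its parts, for readability (the query value is left un-escaped here; curl needs the backslash-escaped form shown above):

```shell
# Assemble the downsample query_range URL from its parameters.
base="http://localhost:9080/promql/prometheus/api/v1/query_range"
query="my_counter{_ws_='my_ws',_ns_='my_ns'}"   # not URL-encoded, for readability
url="${base}?query=${query}&start=74372801&end=74373042&step=10&verbose=true&spread=2"
echo "$url"
```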
Follow the same steps as in original setup, but do this first to clear out existing metadata:
./filo-cli -Dconfig.file=conf/timeseries-filodb-server.conf --command clearMetadata
Then follow the steps to create the dataset etc. Create a different Kafka topic with 128 partitions:
kafka-topics --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 128 --topic timeseries-perf
Modify server config to load the conf/timeseries-128shards-source.conf dataset instead of the default one.
Start two servers as follows. This will not start ingestion yet:
./filodb-dev-start.sh -o 0
./filodb-dev-start.sh -o 1
Now if you curl the cluster status you should see 128 shards which are slowly turning active: curl http://127.0.0.1:8080/api/v1/cluster/timeseries/status | jq '.'
Generate records:
./dev-gateway.sh --gen-gauge-data -p

