Capacity Planning
Tacnode's capacity planning involves storage and compute resources.
Storage Resources
When estimating storage for a new application, account for both existing (historical) data and incremental data.
Tacnode uses an LSM (Log-Structured Merge) tree for writes, converting random writes into sequential writes to improve performance. Log files undergo periodic compaction to improve read and write performance; during compaction, storage usage may temporarily expand before stabilizing.
Historical Data
Tacnode features efficient encoding and compression, typically reducing data to roughly one-third of its original CSV size. In columnar storage scenarios, compression effectiveness depends on the cardinality of column values: lower-cardinality columns benefit from Tacnode's adaptive dictionary encoding and can compress even further.
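As a rough sketch of the rule above, compressed size can be estimated from raw CSV size and a compression ratio. The function name and the default ~1/3 ratio are illustrative assumptions from the text, not a Tacnode API; measure on real data, especially for low-cardinality columnar tables.

```python
# Sketch: estimate on-disk size from raw CSV size.
# The ~1/3 default ratio is the typical figure cited above; it is an
# assumption, not a guarantee, and varies with column cardinality.

def estimate_compressed_size_gb(raw_csv_gb: float, ratio: float = 1 / 3) -> float:
    """Estimate compressed on-disk size in GB."""
    return raw_csv_gb * ratio

print(estimate_compressed_size_gb(900))  # 900 GB of CSV -> ~300 GB on disk
```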
Incremental Data
Estimate based on daily incremental rows and size per row. For example, with a daily increase of 10 million rows at 100 bytes per row, and a compression ratio of about 0.33, the daily storage increase is approximately 330 MB (10M × 100 B × 0.33).
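The same estimate, expressed as a small helper (a sketch; the 0.33 ratio is the figure from the example above, and the function name is illustrative):

```python
# Daily storage growth = rows/day x bytes/row x compression ratio.

def daily_storage_increase_mb(rows_per_day: int, bytes_per_row: int,
                              compression_ratio: float = 0.33) -> float:
    """Estimate daily post-compression storage growth in MB."""
    return rows_per_day * bytes_per_row * compression_ratio / 1e6  # bytes -> MB

print(daily_storage_increase_mb(10_000_000, 100))  # ~330 MB/day
```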
Compute Resources
Tacnode scales compute resources in Units. Estimation guidelines include:
- For transactional updates, estimate 1 compute Unit per approximately 500 GB of compressed data.
- For batch writes or low write volumes where most data is cold and queries focus on recent periods (e.g., the last day/week/month), the Unit-to-data ratio can be relaxed, for instance 1 Unit per 1-2 TB, but generally avoid exceeding 2 TB per Unit.
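The two guidelines above can be sketched as a sizing helper. The 500 GB and 2 TB thresholds come from the text; the function and workload names are illustrative assumptions, not a Tacnode API, and 2 TB/Unit is an upper bound (real batch deployments may size more conservatively).

```python
import math

# Sizing sketch: compressed data volume -> estimated compute Units.
# Thresholds per the guidelines above: ~500 GB/Unit for transactional
# workloads, up to 2 TB/Unit for batch/cold workloads.

def estimate_units(compressed_gb: float, workload: str = "transactional") -> int:
    gb_per_unit = 500 if workload == "transactional" else 2000  # upper bound
    return max(1, math.ceil(compressed_gb / gb_per_unit))

print(estimate_units(1500))            # transactional: 1500/500 -> 3 Units
print(estimate_units(20_000, "batch")) # batch: 20000/2000 -> 10 Units (minimum)
```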
Usage Examples
- An e-commerce supply chain scenario with continuous writes and queries: average write QPS of 5K/s, peak 28K/s, data volume 1.5 TB. With an average row size of 100 B, the daily increment is about 43 GB pre-compression and 14.3 GB post-compression. Based on data scale, use 4 Units.
- An industrial IoT big data scenario with scheduled batch synchronization: existing data 20 TB, daily increment 300 GB (about 100 GB post-compression). Based on data scale, use 16 Units.
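The arithmetic behind the e-commerce example can be reproduced as follows (ratios are the figures from the text; note the Unit counts in the examples include headroom beyond the bare minimum the guidelines would give):

```python
# Verify the e-commerce example: 5K writes/s average at 100 B/row.
SECONDS_PER_DAY = 86_400

daily_raw_gb = 5_000 * 100 * SECONDS_PER_DAY / 1e9   # bytes/day -> GB
daily_compressed_gb = daily_raw_gb * 0.33            # ~1/3 compression

print(round(daily_raw_gb, 1))         # ~43.2 GB pre-compression
print(round(daily_compressed_gb, 1))  # ~14.3 GB post-compression
```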