TiDB

Developer(s): PingCAP, Inc.
Initial release: October 15, 2017[1]
Stable release: 5.0.1 / April 23, 2021[2]
Repository: github.com/pingcap/tidb
Written in: Go (TiDB), Rust (TiKV)
Available in: English, Chinese
Type: NewSQL
License: Apache 2.0
Website: pingcap.com/products/tidb/

TiDB (/ˈtaɪdiːbiː/; "Ti" stands for titanium) is an open-source NewSQL database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads.[3] It is MySQL-compatible and provides horizontal scalability, strong consistency, and high availability. It is developed and supported primarily by PingCAP, Inc. and licensed under Apache 2.0. TiDB drew its initial design inspiration from Google's Spanner[4] and F1[5] papers.[6]

TiDB was recognized in InfoWorld's 2018 Bossie Awards as one of the best open-source software projects for data storage and analytics.[7]

History

Company history

PingCAP Inc., a software company founded in April 2015, began developing TiDB shortly after its founding. The company is the primary developer, maintainer, and driver of TiDB and its associated open-source communities. PingCAP is a venture-backed company; it announced a US$50 million Series C financing round in September 2018.[8]

Release history

See the TiDB release notes for a complete release history.

Main features

Horizontal scalability

TiDB can expand both its SQL processing and storage capacity by adding new nodes. This makes capacity scaling easier and more flexible than in traditional relational databases, which typically scale only vertically.

MySQL compatibility

TiDB presents itself to applications as a MySQL 5.7 server, so users can continue to use existing MySQL client libraries.[9] Because TiDB's SQL processing layer is built from scratch rather than forked from MySQL, its compatibility is not 100%, and there are known behavioral differences between MySQL and TiDB.[10]
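
Because TiDB speaks the MySQL wire protocol, a stock MySQL driver can connect to it without modification. The following minimal sketch, assuming a local TiDB instance listening on its default SQL port 4000 with the default root user and a test database, queries TiDB from Go using an ordinary MySQL driver:

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/go-sql-driver/mysql" // plain MySQL driver; no TiDB-specific client is needed
    )

    func main() {
        // 4000 is TiDB's default SQL port; the DSN is ordinary MySQL syntax.
        db, err := sql.Open("mysql", "root@tcp(127.0.0.1:4000)/test")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        var version string
        // TiDB answers with a MySQL-compatible version string.
        if err := db.QueryRow("SELECT VERSION()").Scan(&version); err != nil {
            log.Fatal(err)
        }
        fmt.Println("connected, server version:", version)
    }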

Distributed transactions with strong consistency

TiDB internally shards each table into small range-based chunks referred to as "Regions".[11] Each Region defaults to approximately 100 MB in size, and TiDB uses two-phase commit internally to ensure that Regions are maintained in a transactionally consistent way.
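
The routing idea behind range-based sharding can be illustrated in a few lines. The sketch below (hypothetical string keys and boundaries, not TiDB's actual data structures) finds the Region whose key range covers a given key:

    package main

    import (
        "fmt"
        "sort"
    )

    // region holds one contiguous, half-open key range [start, end); illustrative only.
    type region struct {
        id         int
        start, end string
    }

    // locate returns the region whose range covers key, assuming the
    // regions are sorted by start key and together cover the key space.
    func locate(regions []region, key string) region {
        // Find the first region whose end key is strictly greater than key.
        i := sort.Search(len(regions), func(i int) bool { return key < regions[i].end })
        return regions[i]
    }

    func main() {
        regions := []region{
            {1, "", "g"},     // keys < "g"
            {2, "g", "p"},    // "g" <= keys < "p"
            {3, "p", "\xff"}, // remaining keys
        }
        fmt.Println(locate(regions, "mango").id) // prints 2
    }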

Cloud native

TiDB is designed to work in the cloud to make deployment, provisioning, operations, and maintenance flexible. The storage layer of TiDB, called TiKV, was accepted by the Cloud Native Computing Foundation (CNCF) as a Sandbox-level project in August 2018,[12] became an incubation-level hosted project in May 2019,[13] and reached graduated status in September 2020.[14] The architecture of the TiDB platform also allows SQL processing and storage to be scaled independently of each other.

Real-time HTAP

TiDB supports both online transaction processing (OLTP) and online analytical processing (OLAP) workloads. It has two storage engines: TiKV, a row store, and TiFlash, a column store. Data can be replicated from TiKV to TiFlash in real time so that TiFlash always processes up-to-date data.
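
Replication to the columnar engine is enabled per table through TiDB's ALTER TABLE ... SET TIFLASH REPLICA statement. A minimal sketch, assuming a running cluster with at least one TiFlash node and a hypothetical orders table in the test database:

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/go-sql-driver/mysql"
    )

    func main() {
        db, err := sql.Open("mysql", "root@tcp(127.0.0.1:4000)/test")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Ask TiDB to maintain one columnar (TiFlash) replica of the table;
        // TiKV remains the row store serving OLTP traffic.
        if _, err := db.Exec("ALTER TABLE orders SET TIFLASH REPLICA 1"); err != nil {
            log.Fatal(err)
        }
        // Analytical queries on `orders` can now be served from TiFlash,
        // kept in sync with TiKV in real time.
    }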

High availability

TiDB uses the Raft consensus algorithm[15] to ensure that data is safely replicated and highly available across the storage layer in Raft groups. In the event of a failure, a Raft group automatically elects a new leader, and the TiDB cluster self-heals without manual intervention. Failures and self-healing operations are transparent to applications.
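
The fault tolerance of this scheme follows from Raft's majority-quorum rule: a group of n replicas remains available as long as a majority of its members survive. A small illustrative calculation (not TiDB code):

    package main

    import "fmt"

    // tolerable returns how many replica failures a Raft group of
    // size n can survive while still forming a majority quorum.
    func tolerable(n int) int { return (n - 1) / 2 }

    func main() {
        for _, n := range []int{3, 5} {
            fmt.Printf("%d replicas: quorum of %d, tolerates %d failure(s)\n",
                n, n/2+1, tolerable(n))
        }
    }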

Deployment methods

Kubernetes with Operator

TiDB can be deployed in a Kubernetes-enabled cloud environment by using TiDB Operator.[16] An Operator is a method of packaging, deploying, and managing a Kubernetes application; the pattern is designed for running stateful workloads and was first introduced by CoreOS in 2016.[17] TiDB Operator[18] was originally developed by PingCAP and open-sourced in August 2018.[19] It can be used to deploy TiDB on a laptop,[20] Google Cloud Platform's Google Kubernetes Engine,[21] and Amazon Web Services' Elastic Container Service for Kubernetes.[22]

TiUP

TiDB 4.0 introduced TiUP, a cluster operation and maintenance tool that lets users quickly install and configure a TiDB cluster with a few commands.[23]

TiDB Ansible

TiDB can also be deployed with Ansible using the TiDB Ansible playbook, although this method is no longer recommended.[24]

Docker

Docker can be used to deploy TiDB in a containerized environment across multiple nodes and machines, and Docker Compose can deploy TiDB with a single command for testing purposes.[25]

Tools

TiDB has a series of open-source tools built around it to help with data replication and migration for existing MySQL and MariaDB users.

TiDB Data Migration (DM)

TiDB Data Migration (DM) is suited to replicating data from already sharded MySQL or MariaDB tables into TiDB.[26] A common use case is to connect MySQL or MariaDB tables to TiDB, treating TiDB as a near-real-time read replica, and then run analytical workloads directly on the TiDB cluster.

Backup & Restore

Backup & Restore (BR) is a distributed backup and restore tool for TiDB cluster data. It offers high backup and restore speeds for large-scale TiDB clusters.[27]

Dumpling

Dumpling is a data export tool that exports data stored in TiDB or MySQL. It lets users make logical full backups or full dumps from TiDB or MySQL.[28]

TiDB Lightning

TiDB Lightning is a tool that supports high-speed full import of a large MySQL dump into a new TiDB cluster, providing a faster import experience than executing each SQL statement. It is used to quickly populate an initially empty TiDB cluster with large amounts of data, in order to speed up testing or production migration. The speed improvement is achieved by parsing SQL statements into key-value pairs and then directly generating Sorted String Table (SST) files, which are ingested into RocksDB.[29][30]
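
The central step is converting rows into sortable key-value pairs rather than replaying SQL. The sketch below is a simplified illustration loosely modeled on TiDB's documented t{tableID}_r{rowID} row-key layout; the names and the textual encoding are hypothetical, not TiDB's actual binary format:

    package main

    import "fmt"

    // kvPair is one encoded row ready for bulk SST generation; illustrative only.
    type kvPair struct {
        key   string
        value string
    }

    // encodeRow maps (tableID, rowID) to a sortable key, with the row
    // payload as the value, in the spirit of TiDB's row-key layout.
    func encodeRow(tableID, rowID int64, payload string) kvPair {
        return kvPair{
            key:   fmt.Sprintf("t%d_r%019d", tableID, rowID), // zero-padded so keys sort by rowID
            value: payload,
        }
    }

    func main() {
        // Rows parsed from a SQL dump become sorted key-value pairs,
        // which can then be written into SST files in bulk.
        p := encodeRow(42, 7, "encoded-row-bytes")
        fmt.Println(p.key) // t42_r0000000000000000007
    }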

TiDB Binlog

TiDB Binlog is a tool used to collect the logical changes made to a TiDB cluster. It is used to provide incremental backup and replication, either between two TiDB clusters, or from a TiDB cluster to another downstream platform.[31]

It is similar in functionality to MySQL primary-secondary replication. The main difference is that, because TiDB is a distributed database, the binlogs generated by the individual TiDB instances must be merged and sorted by transaction commit time before being consumed downstream.[32]
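
This merge step can be pictured as a k-way merge of per-instance binlog streams ordered by commit timestamp. A minimal sketch (the event type is hypothetical; the real TiDB Binlog pipeline uses its Pump and Drainer components):

    package main

    import (
        "container/heap"
        "fmt"
    )

    // event is one logical change tagged with its transaction commit timestamp.
    type event struct {
        commitTS int64
        change   string
        stream   int // which TiDB instance's binlog it came from
    }

    // eventHeap is a min-heap ordered by commit timestamp.
    type eventHeap []event

    func (h eventHeap) Len() int            { return len(h) }
    func (h eventHeap) Less(i, j int) bool  { return h[i].commitTS < h[j].commitTS }
    func (h eventHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
    func (h *eventHeap) Push(x interface{}) { *h = append(*h, x.(event)) }
    func (h *eventHeap) Pop() interface{} {
        old := *h
        e := old[len(old)-1]
        *h = old[:len(old)-1]
        return e
    }

    // mergeStreams merges per-instance, already-sorted binlog streams into
    // one stream ordered globally by commit timestamp.
    func mergeStreams(streams [][]event) []event {
        h := &eventHeap{}
        next := make([]int, len(streams)) // next unread index per stream
        for i, s := range streams {
            if len(s) > 0 {
                heap.Push(h, s[0])
                next[i] = 1
            }
        }
        var out []event
        for h.Len() > 0 {
            e := heap.Pop(h).(event)
            out = append(out, e)
            if n := next[e.stream]; n < len(streams[e.stream]) {
                heap.Push(h, streams[e.stream][n])
                next[e.stream]++
            }
        }
        return out
    }

    func main() {
        a := []event{{100, "tx1", 0}, {130, "tx3", 0}}
        b := []event{{110, "tx2", 1}, {140, "tx4", 1}}
        for _, e := range mergeStreams([][]event{a, b}) {
            fmt.Println(e.commitTS, e.change) // 100, 110, 130, 140 in order
        }
    }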

Case studies

TiDB is used by nearly 1,000 companies, including PayPay, Shopee, BookMyShow, Xiaomi, Zhihu, Meituan-Dianping, iQiyi, Zhuan Zhuan, Mobike, Yiguo.com, VNG, JD Cloud and AI, NetEase Games, and Yuanfudao.com.

References

  1. ^ "1.0 GA release notes".
  2. ^ "Release 5.0.1". April 23, 2021. Retrieved May 22, 2021.
  3. ^ Xu, Kevin (October 17, 2018). "How TiDB combines OLTP and OLAP in a distributed database". InfoWorld.
  4. ^ "Spanner: Google's Globally-Distributed Database".
  5. ^ "F1: A Distributed SQL Database That Scales".
  6. ^ Hall, Susan (April 17, 2017). "TiDB Brings Distributed Scalability to SQL". The New Stack.
  7. ^ "The best open source software for data storage and analytics".
  8. ^ Shu, Catherine (September 11, 2018). "TiDB developer PingCAP wants to expand in North America after raising $50M Series C". TechCrunch.
  9. ^ Tocker, Morgan (November 14, 2018). "Meet TiDB: An open source NewSQL database". Opensource.com.
  10. ^ "Compatibility with MySQL". PingCAP.
  11. ^ "TiKV Architecture". TiKV.
  12. ^ Evans, Kristen (August 28, 2018). "CNCF to Host TiKV in the Sandbox". Cloud Native Computing Foundation.
  13. ^ CNCF (May 21, 2019). "TOC Votes to Move TiKV into CNCF Incubator". Cloud Native Computing Foundation. Retrieved August 19, 2020.
  14. ^ TiKV Authors (September 2, 2020). "Celebrating TiKV's CNCF Graduation". TiKV.
  15. ^ "The Raft Consensus Algorithm".
  16. ^ Jackson, Joab (January 22, 2019). "Database Operators Bring Stateful Workloads to Kubernetes". The New Stack.
  17. ^ Philips, Brandon (November 3, 2016). "Introducing Operators: Putting Operational Knowledge into Software". CoreOS.
  18. ^ "TiDB Operator GitHub repo". GitHub.
  19. ^ "Introducing the Kubernetes Operator for TiDB". InfoWorld. August 16, 2018.
  20. ^ "Deploy TiDB to Kubernetes on Your Laptop".
  21. ^ "Deploy TiDB, a distributed MySQL compatible database, to Kubernetes on Google Cloud".
  22. ^ "Deploy TiDB, a distributed MySQL compatible database, on Kubernetes via AWS EKS".
  23. ^ Long, Heng (April 19, 2020). "Get a TiDB Cluster Up in Only One Minute". PingCAP. Retrieved August 19, 2020.
  24. ^ "Ansible Playbook for TiDB".
  25. ^ "How to Spin Up an HTAP Database in 5 Minutes With TiDB + TiSpark".
  26. ^ "DM GitHub Repo". GitHub.
  27. ^ Shen, Taining (April 13, 2020). "How to Back Up and Restore a 10-TB Cluster at 1+ GB/s". PingCAP.
  28. ^ "Dumpling Overview". PingCAP.
  29. ^ Chan, Kenny (January 30, 2019). "Introducing TiDB Lightning". PingCAP.
  30. ^ "TiDB Lightning Overview". PingCAP.
  31. ^ "TiDB Binlog Cluster Overview". PingCAP.
  32. ^ Wang, Xiang (January 29, 2019). "TiDB-Binlog Architecture Evolution and Implementation Principles". PingCAP.