Data types are not identical across file formats and databases. When transferring structured data from a source to a destination, Bobsled reads the columns and data types of the source data and maps them accurately to the destination.
When Bobsled reads the data from a source, we infer the schema based on the file format and data source.
For self-describing file formats such as Parquet, we read the schema directly from the files. The same is true of data from a data warehouse source.
For formats like CSV and JSON, we auto-infer the schema.
NOTE: For source schemas that contain the same column name in different casings (e.g. ‘hash’ and ‘HASH’), Bobsled will infer only one of the columns, chosen at random.
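For illustration, the sketch below contrasts the two cases using pyarrow. It is not Bobsled's internal code, and the file names are hypothetical; it only shows why a self-describing format needs no inference while CSV does.

```python
# Illustration only: not Bobsled's internal implementation.
import pyarrow.csv as pacsv
import pyarrow.parquet as pq

# Self-describing format: the schema is stored in the Parquet footer and read directly.
parquet_schema = pq.read_schema("trades.parquet")  # hypothetical file
print(parquet_schema)  # column names and types come straight from the file metadata

# CSV carries no type information, so the schema must be inferred from the values.
csv_table = pacsv.read_csv("trades.csv")  # hypothetical file; types are guessed by sampling
print(csv_table.schema)
```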
This schema is represented using our internal Bobsled Data Types. These types provide an interchange layer between various Sources and Destinations and help providers understand how data will be delivered to downstream destinations.
Providers should therefore familiarize themselves with the Bobsled data types and use this documentation to understand how they drive destination data types.
Example of Bobsled schema inference and internal mapping.
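As a concrete, hypothetical illustration of that mapping, the sketch below pairs an invented Parquet source schema with the Bobsled types it would be represented as. The column names and source types are made up; the Bobsled types follow the tables in the next section.

```python
# Hypothetical example of a source schema mapped to Bobsled Data Types.
# Column names and source types are invented; the Bobsled types follow the tables below.
source_schema = {            # as read from a Parquet file
    "hash":        "BYTE_ARRAY (UTF8)",
    "trade_date":  "INT32 (DATE)",
    "quantity":    "INT64",
    "price":       "DOUBLE",
    "executed_at": "INT64 (TIMESTAMP, UTC)",
}

bobsled_schema = {           # Bobsled's internal interchange representation
    "hash":        "STRING",
    "trade_date":  "DATE",
    "quantity":    "INTEGER",
    "price":       "FLOAT",
    "executed_at": "TIMESTAMP_TZ",
}
```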
Bobsled Data Types
Bobsled Data Types include all the well-known SQL Data Types.
Primitive Data Types
| Data Type | Explanation |
| --- | --- |
| BINARY | Variable-length binary data. |
| BOOLEAN | TRUE, FALSE, or NULL. |
| DECIMAL(precision, scale) | Represents a fixed-point decimal number. The precision and scale are limited by the source and destination systems or file formats. See the sections on Mappings for details. |
| FLOAT | Double-precision (64-bit) floating point number. |
| FLOAT<FLOAT4 \| FLOAT8> | Parameterized floating point. Can represent either a 32-bit or 64-bit floating point number. Currently supported only when transferring data from Parquet to Databricks. |
| INTEGER | Integer data type. The range of values depends on the source and destination systems or file formats. |
| INTEGER<TINYINT \| SMALLINT \| INT \| BIGINT> | Parameterized integer data type. Represents 1-, 2-, 4-, and 8-byte integers. Currently supported only when transferring data from Parquet to Databricks. |
| STRING | UTF-8 encoded string of varying length. |
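To make the parameterized FLOAT and INTEGER cases concrete, here is a hypothetical Parquet schema built with pyarrow, annotated with the Bobsled type each column would correspond to per the table above. This is a sketch for the Parquet-to-Databricks path, where the parameterized forms are supported; the column names are invented.

```python
# Hypothetical Parquet schema, annotated with the Bobsled type for each column
# (per the Primitive Data Types table). Not Bobsled code.
import pyarrow as pa

parquet_schema = pa.schema([
    ("flag",   pa.int8()),             # -> INTEGER<TINYINT>
    ("year",   pa.int16()),            # -> INTEGER<SMALLINT>
    ("count",  pa.int32()),            # -> INTEGER<INT>
    ("id",     pa.int64()),            # -> INTEGER<BIGINT>
    ("ratio",  pa.float32()),          # -> FLOAT<FLOAT4>
    ("price",  pa.float64()),          # -> FLOAT<FLOAT8>
    ("amount", pa.decimal128(38, 9)),  # -> DECIMAL(38, 9)
    ("name",   pa.string()),           # -> STRING
    ("raw",    pa.binary()),           # -> BINARY
    ("active", pa.bool_()),            # -> BOOLEAN
])
```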
Date and Time
| Data Type | Explanation |
| --- | --- |
| DATE | SQL date in the Gregorian calendar. |
| TIME_NTZ | Represents a wall-clock TIME value (e.g. 10:23) irrespective of any time zone. |
| TIME_TZ | Represents a wall-clock TIME value (e.g. 10:23 CET) in a specific time zone. |
| TIMESTAMP_NTZ | Represents a specific wall-clock date-time value (e.g. 2024-04-01 10:23) irrespective of any time zone. Equivalent to DATETIME in some SQL dialects. |
| TIMESTAMP_TZ | Represents a specific point in time (e.g. 2024-04-01 10:23 UTC). |
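For intuition, the following Python values from the standard datetime module correspond to each date/time category above. This illustrates the semantics only; it is not Bobsled code.

```python
# Standard-library values matching each Bobsled date/time category above.
from datetime import date, time, datetime, timezone

d      = date(2024, 4, 1)                                    # DATE
t_ntz  = time(10, 23)                                        # TIME_NTZ (no time zone)
t_tz   = time(10, 23, tzinfo=timezone.utc)                   # TIME_TZ (zone-aware wall-clock time)
ts_ntz = datetime(2024, 4, 1, 10, 23)                        # TIMESTAMP_NTZ (naive date-time)
ts_tz  = datetime(2024, 4, 1, 10, 23, tzinfo=timezone.utc)   # TIMESTAMP_TZ (a point in time)
```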