```yaml
repositories:
- namespace: CHANGEME
  repository: airbyte-snowflake
  # Catalog-specific metadata for the repository. Optional.
  metadata:
    readme:
      text: Readme
    description: Description of the repository
    topics:
    - sample_topic
  # Data source settings for the repository. Optional.
  external:
    # Name of the credential that the plugin uses. This can also be a credential_id if the
    # credential is already registered on Splitgraph.
    credential: airbyte-snowflake
    plugin: airbyte-snowflake
    # Plugin-specific parameters matching the plugin's parameters schema
    params:
      # REQUIRED. Account Name. The host domain of the Snowflake instance (must include
      # the account, region, cloud environment, and end with snowflakecomputing.com).
      host: accountname.us-east-2.aws.snowflakecomputing.com
      # REQUIRED. Role. The role you created for Airbyte to access Snowflake.
      role: AIRBYTE_ROLE
      # REQUIRED. Warehouse. The warehouse you created for Airbyte to access data.
      warehouse: AIRBYTE_WAREHOUSE
      # REQUIRED. Database. The database you created for Airbyte to access data.
      database: AIRBYTE_DATABASE
      # REQUIRED. Schema. The source Snowflake schema tables.
      schema: AIRBYTE_SCHEMA
      # Post-ingestion normalization. Whether to normalize raw Airbyte tables. `none` is
      # no normalization, `basic` is Airbyte's basic normalization, `custom` is a custom
      # dbt transformation on the data. One of: none, basic, custom
      normalization_mode: basic
      # dbt model Git branch. Branch or commit hash to use for the normalization dbt project.
      normalization_git_branch: master
      # JDBC URL Params. Additional properties to pass to the JDBC URL string when
      # connecting to the database, formatted as 'key=value' pairs separated by the
      # symbol '&' (example: key1=value1&key2=value2&key3=value3).
      jdbc_url_params: ''
    tables:
      sample_table:
        # Plugin-specific table parameters matching the plugin's schema
        options:
          # Cursor field(s). Fields in this stream to be used as a cursor for incremental
          # replication (overrides Airbyte configuration's cursor_field)
          airbyte_cursor_field:
          # Primary key field(s). Fields in this stream to be used as a primary key for
          # deduplication (overrides Airbyte configuration's primary_key)
          airbyte_primary_key_field:
        # Schema of the table, a list of objects with `name` and `type`. If set to ``, will infer.
        schema:
    # Whether live querying is enabled for the plugin (creates a "live" tag in the
    # repository proxying to the data source). The plugin must support live querying.
    is_live: false
    # Ingestion schedule settings. Disable this if you're using GitHub Actions or other methods
    # to trigger ingestion.
    schedule:
credentials:
  airbyte-snowflake: # This is the name of this credential that "external" sections can reference.
    plugin: airbyte-snowflake
    # Credential-specific data matching the plugin's credential schema
    data:
      # dbt model Git URL. For `custom` normalization, a URL to the Git repo with the dbt
      # project, for example, `https://uname:firstname.lastname@example.org/organisation/repository.git`.
      normalization_git_url: ''
      credentials:
        # Authorization Method. Choose one of:
        # - Username and Password
        auth_type: username/password # REQUIRED. Constant
        # REQUIRED. Username. The username you created to allow Airbyte to access the database.
        username: AIRBYTE_USER
        # REQUIRED. Password. The password associated with the username.
        password: ''
```
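As a rough illustration of the parameter schema above, the sketch below assembles the plugin's `params` block in Python and checks the REQUIRED fields before serializing it. The validation helper is our own illustration, not part of Splitgraph's tooling.

```python
# Sketch (not Splitgraph tooling): build the airbyte-snowflake `params` block
# from the example above and verify the REQUIRED fields are populated.
import json

REQUIRED = ["host", "role", "warehouse", "database", "schema"]

params = {
    "host": "accountname.us-east-2.aws.snowflakecomputing.com",
    "role": "AIRBYTE_ROLE",
    "warehouse": "AIRBYTE_WAREHOUSE",
    "database": "AIRBYTE_DATABASE",
    "schema": "AIRBYTE_SCHEMA",
    "normalization_mode": "basic",  # one of: none, basic, custom
    "normalization_git_branch": "master",
    "jdbc_url_params": "",  # optional: key1=value1&key2=value2
}

missing = [k for k in REQUIRED if not params.get(k)]
if missing:
    raise ValueError(f"missing required params: {missing}")

print(json.dumps(params, indent=2))
```

The same check applies whichever way you produce the file: every key marked REQUIRED in the comments above must be non-empty before ingestion can run.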
Use the splitgraph.yml format to check your Splitgraph configuration into version control, trigger ingestion jobs, and manage your data stack the same way you manage your code.
Splitgraph connects your disparate data sources and brings them together in a single, accessible place.
Splitgraph handles data integration, storage, transformation, and discoverability for you. All that remains is adding a BI client.
Focus on building data-driven applications without worrying about where the data will come from.
Splitgraph supports data ingestion from over 100 SaaS services, as well as data federation to over a dozen databases. These are all made queryable over a PostgreSQL-compatible interface.
Splitgraph stores data in a columnar format. This accelerates analytical queries and makes it perfect for dashboards, blogs and other read-intensive use cases.
Read more about Splitgraph's support for Snowflake, including documentation and sample queries you can run on Snowflake data with Splitgraph.
Splitgraph has a PostgreSQL-compatible endpoint that most BI clients can connect to.
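As a sketch, any standard PostgreSQL driver can reach such an endpoint. The host, database name, and credentials below are placeholders (substitute the values from your Splitgraph account), and the DSN-building helper is our own, not a Splitgraph API:

```python
# Build a libpq-style connection string ("DSN") for a PostgreSQL-compatible
# endpoint. All connection details here are hypothetical placeholders.
def build_dsn(host: str, port: int, dbname: str, user: str, password: str) -> str:
    parts = {
        "host": host,
        "port": port,
        "dbname": dbname,
        "user": user,
        "password": password,
        "sslmode": "require",  # keep BI traffic encrypted in transit
    }
    return " ".join(f"{k}={v}" for k, v in parts.items())

dsn = build_dsn("ddn.example.com", 5432, "ddn", "API_KEY", "API_SECRET")
print(dsn)

# A real client would then connect with a PostgreSQL driver, e.g. psycopg2:
#   conn = psycopg2.connect(dsn)
#   cur = conn.cursor()
#   cur.execute('SELECT * FROM "namespace/airbyte-snowflake".sample_table LIMIT 10')
```

Because the wire protocol is plain PostgreSQL, the same DSN fields map directly onto the connection dialogs of most BI clients.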