
Query Snowflake data

With its PostgreSQL-compatible interface, Splitgraph is the easiest way to make data from Snowflake queryable with any of your BI tools.


What is Splitgraph?

Splitgraph is a data API to power your analytics, data visualizations and other read-intensive applications.

Get started
 

Connecting Snowflake to your query tool with Splitgraph

First, connect Splitgraph to Snowflake. This creates a Splitgraph repository with data from Snowflake, which you can query with any SQL client or BI tool that supports PostgreSQL.
Then, connect your query tool to Splitgraph. Data from Snowflake will be available to query directly from your SQL client.
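For example, once the repository is created, any PostgreSQL client can query it. Below is a minimal sketch in Python using psycopg2; it assumes Splitgraph's standard Data Delivery Network endpoint (data.splitgraph.com:5432, database ddn), and the API key/secret and the CHANGEME/airbyte-snowflake.sample_table names are placeholders.

import psycopg2

# Connect to Splitgraph's PostgreSQL-compatible endpoint (assumed DDN defaults).
conn = psycopg2.connect(
    host="data.splitgraph.com",
    port=5432,
    dbname="ddn",
    user="YOUR_API_KEY",         # Splitgraph API key (placeholder)
    password="YOUR_API_SECRET",  # Splitgraph API secret (placeholder)
)
with conn.cursor() as cur:
    # Tables are addressed as "namespace/repository"."table".
    cur.execute('SELECT * FROM "CHANGEME/airbyte-snowflake"."sample_table" LIMIT 5;')
    for row in cur.fetchall():
        print(row)
conn.close()

The repository and its Snowflake data source are defined declaratively in splitgraph.yml: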
repositories:
- namespace: CHANGEME
  repository: airbyte-snowflake
  # Catalog-specific metadata for the repository. Optional.
  metadata:
    readme:
      text: Readme
    description: Description of the repository
    topics:
    - sample_topic
  # Data source settings for the repository. Optional.
  external:
    # Name of the credential that the plugin uses. This can also be a credential_id if the
    # credential is already registered on Splitgraph.
    credential: airbyte-snowflake
    plugin: airbyte-snowflake
    # Plugin-specific parameters matching the plugin's parameters schema
    params:
      host: accountname.us-east-2.aws.snowflakecomputing.com  # REQUIRED. Account Name. The host domain of the snowflake instance (must include the account, region, cloud environment, and end with snowflakecomputing.com).
      role: AIRBYTE_ROLE # REQUIRED. Role. The role you created for Airbyte to access Snowflake.
      warehouse: AIRBYTE_WAREHOUSE # REQUIRED. Warehouse. The warehouse you created for Airbyte to access data.
      database: AIRBYTE_DATABASE # REQUIRED. Database. The database you created for Airbyte to access data.
      schema: AIRBYTE_SCHEMA # REQUIRED. Schema. The source Snowflake schema containing the tables.
      normalization_mode: basic # Post-ingestion normalization. Whether to normalize raw Airbyte tables. `none` is no normalization, `basic` is Airbyte's basic normalization, `custom` is a custom dbt transformation on the data. One of: none, basic, custom.
      normalization_git_branch: master # dbt model Git branch. Branch or commit hash to use for the normalization dbt project.
      jdbc_url_params: '' # JDBC URL Params. Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
    tables:
      sample_table:
        # Plugin-specific table parameters matching the plugin's schema
        options:
          airbyte_cursor_field: []  # Cursor field(s). Fields in this stream to be used as a cursor for incremental replication (overrides Airbyte configuration's cursor_field)
          airbyte_primary_key_field: [] # Primary key field(s). Fields in this stream to be used as a primary key for deduplication (overrides Airbyte configuration's primary_key)
        # Schema of the table, a list of objects with `name` and `type`. If set to `[]`, will infer. 
        schema: []
    # Whether live querying is enabled for the plugin (creates a "live" tag in the
    # repository proxying to the data source). The plugin must support live querying.
    is_live: false
    # Ingestion schedule settings. Disable this if you're using GitHub Actions or other methods
    # to trigger ingestion.
    schedule:
credentials:
  airbyte-snowflake:  # This is the name of this credential that "external" sections can reference.
    plugin: airbyte-snowflake
    # Credential-specific data matching the plugin's credential schema
    data:
      normalization_git_url: ''  # dbt model Git URL. For `custom` normalization, a URL to the Git repo with the dbt project, for example `https://uname:pass_or_token@github.com/organisation/repository.git`.
      credentials: # Authorization Method. Choose one of:
      -  # Username and Password
        auth_type: username/password  # REQUIRED. Constant
        username: AIRBYTE_USER # REQUIRED. Username. The username you created to allow Airbyte to access the database.
        password: '' # REQUIRED. Password. The password associated with the username.
Use this data source in splitgraph.yml
You can copy the configuration above into splitgraph.yml, or Splitgraph will generate it for you.

Developer-first

Use our splitgraph.yml format to check your Splitgraph configuration into version control, trigger ingestion jobs and manage your data stack like your code.
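
As a sketch of what managing your data stack like code can look like, a CI step could validate splitgraph.yml before merging a change. The checks below are hypothetical, use PyYAML, and assume the file layout shown above.

import yaml  # PyYAML

# Hypothetical pre-merge check: validate the structure of splitgraph.yml.
with open("splitgraph.yml") as f:
    config = yaml.safe_load(f)

for repo in config.get("repositories", []):
    # Every repository needs a namespace and a repository name.
    assert repo.get("namespace") and repo.get("repository"), repo
    # External data sources must at least name their plugin.
    external = repo.get("external") or {}
    if external:
        assert external.get("plugin"), repo

print("splitgraph.yml looks structurally valid.")

Once the file is in version control, the same CI pipeline can re-trigger ingestion whenever the configuration changes.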

Get started
 

Why Splitgraph and Snowflake?

Splitgraph connects your scattered, unrelated data sources and puts them in a single, accessible place.

Unify your data stack

Splitgraph handles data integration, storage, transformation and discoverability for you. All that remains is adding a BI client.

Read more
 

Power your applications

Focus on building data-driven applications without worrying about where the data will come from.


Not just Snowflake...

Splitgraph supports data ingestion from over 100 SaaS services, as well as data federation to over a dozen databases. These are all made queryable over a PostgreSQL-compatible interface.


Optimized for analytics

Splitgraph stores data in a columnar format. This accelerates analytical queries and makes it perfect for dashboards, blogs and other read-intensive use cases.


Do more with Snowflake?


Snowflake on Splitgraph

Read more about Splitgraph’s support for Snowflake, including documentation and sample queries you can run on Snowflake data with Splitgraph.

Snowflake overview
 

Connecting to Splitgraph

Splitgraph has a PostgreSQL-compatible endpoint that most BI clients can connect to.

Try it out