
Query PostgreSQL data

With its PostgreSQL-compatible interface, Splitgraph is the easiest way to make data from PostgreSQL queryable with any of your BI tools.


What is Splitgraph?

Splitgraph is a data API to power your analytics, data visualizations and other read-intensive applications.

Get started

Connecting PostgreSQL to your query tool with Splitgraph

First, connect Splitgraph to PostgreSQL. This creates a Splitgraph repository containing the data from PostgreSQL, which you can query with any SQL client or BI tool that supports PostgreSQL. Then, connect your query tool to Splitgraph, and the data from PostgreSQL will be available to query directly from your SQL client.
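As an illustrative sketch of step two, here is how a client-side connection string for Splitgraph's PostgreSQL-compatible endpoint can be assembled. The endpoint details (host data.splitgraph.com, port 5432, database ddn, with an API key and secret as the username and password) follow Splitgraph's documented DDN defaults, but treat them as assumptions to verify against your own account; the namespace and table names are placeholders:

```python
# Sketch: building a connection string for Splitgraph's PostgreSQL-compatible
# endpoint (the DDN). Host/port/database follow Splitgraph's documented
# defaults; the API key and secret are placeholders for your own credentials.
API_KEY = "YOUR_API_KEY"        # used as the PostgreSQL username
API_SECRET = "YOUR_API_SECRET"  # used as the PostgreSQL password

dsn = (
    f"postgresql://{API_KEY}:{API_SECRET}"
    "@data.splitgraph.com:5432/ddn"
)

# Repositories are addressed as schema-qualified tables; the
# "namespace/repository" schema name must be double-quoted because of the slash.
query = 'SELECT * FROM "CHANGEME/airbyte-postgres"."sample_table" LIMIT 10;'

print(dsn)
print(query)
```

Any driver or BI tool that accepts a standard PostgreSQL connection string can use a DSN like this one.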
repositories:
- namespace: CHANGEME
  repository: airbyte-postgres
  # Catalog-specific metadata for the repository. Optional.
  metadata:
    readme:
      text: Readme
    description: Description of the repository
    topics:
    - sample_topic
  # Data source settings for the repository. Optional.
  external:
    # Name of the credential that the plugin uses. This can also be a credential_id if the
    # credential is already registered on Splitgraph.
    credential: airbyte-postgres
    plugin: airbyte-postgres
    # Plugin-specific parameters matching the plugin's parameters schema
    params:
      host: ''  # REQUIRED. Host. Hostname of the database.
      port: '5432'  # REQUIRED. Port. Port of the database.
      database: ''  # REQUIRED. DB Name. Name of the database.
      username: ''  # REQUIRED. User. Username to use to access the database.
      normalization_mode: basic  # Post-ingestion normalization. Whether to normalize raw Airbyte tables. `none` is no normalization, `basic` is Airbyte's basic normalization, `custom` is a custom dbt transformation on the data. One of none, basic, custom
      normalization_git_branch: master  # dbt model Git branch. Branch or commit hash to use for the normalization dbt project.
      schemas: []  # Schemas. The list of schemas to sync from. Defaults to user. Case sensitive.
      ssl: false  # Connect using SSL. Encrypt client/server communications for increased security.
      replication_method:  # Replication Method. Replication method to use for extracting data from the database. Choose one of:
      - # Standard. Standard replication requires no setup on the DB side but will not be able to represent deletions incrementally.
        method: Standard  # REQUIRED. Constant
      - # Logical Replication (CDC). Logical replication uses the Postgres write-ahead log (WAL) to detect inserts, updates, and deletes. This needs to be configured on the source database itself. Only available on Postgres 10 and above. Read the Postgres Source docs (https://docs.airbyte.com/integrations/sources/postgres) for more information.
        method: CDC  # REQUIRED. Constant
        replication_slot: ''  # REQUIRED. Replication Slot. A plug-in logical replication slot.
        publication: ''  # REQUIRED. Publication. A Postgres publication used for consuming changes.
        plugin: pgoutput  # Plugin. A logical decoding plug-in installed on the PostgreSQL server. The `pgoutput` plug-in is used by default. If the replicated table contains a lot of big jsonb values, it is recommended to use the `wal2json` plug-in. For more information, read the Postgres Source docs (https://docs.airbyte.com/integrations/sources/postgres). One of pgoutput, wal2json
    tables:
      sample_table:  # Replace with the name(s) of the table(s) to ingest.
        # Plugin-specific table parameters matching the plugin's schema
        options:
          airbyte_cursor_field: []  # Cursor field(s). Fields in this stream to be used as a cursor for incremental replication (overrides Airbyte configuration's cursor_field)
          airbyte_primary_key_field: []  # Primary key field(s). Fields in this stream to be used as a primary key for deduplication (overrides Airbyte configuration's primary_key)
        # Schema of the table, a list of objects with `name` and `type`. If set to `[]`, will infer.
        schema: []
    # Whether live querying is enabled for the plugin (creates a "live" tag in the
    # repository proxying to the data source). The plugin must support live querying.
    is_live: false
    # Ingestion schedule settings. Disable this if you're using GitHub Actions or other methods
    # to trigger ingestion.
    schedule:
credentials:
  airbyte-postgres:  # This is the name of this credential that "external" sections can reference.
    plugin: airbyte-postgres
    # Credential-specific data matching the plugin's credential schema
    data:
      normalization_git_url: ''  # dbt model Git URL. For `custom` normalization, a URL to the Git repo with the dbt project, for example, `https://uname:pass_or_token@github.com/organisation/repository.git`.
      password: ''  # Password. Password associated with the username.
Use this data source in splitgraph.yml
You can copy this into splitgraph.yml, or we'll generate it for you.
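If you choose the CDC replication method above, the source database needs a logical replication slot and a publication before ingestion can run. On Postgres 10 and above, the setup looks roughly like the following standard PostgreSQL commands; the slot and publication names here are illustrative and should match the `replication_slot` and `publication` values in the configuration:

```sql
-- Run on the source PostgreSQL database (requires wal_level = logical).
-- 'airbyte_slot' and 'airbyte_publication' are example names.
SELECT pg_create_logical_replication_slot('airbyte_slot', 'pgoutput');
CREATE PUBLICATION airbyte_publication FOR ALL TABLES;
```

See the linked Airbyte Postgres Source docs for the full CDC prerequisites.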


Use our splitgraph.yml format to check your Splitgraph configuration into version control, trigger ingestion jobs and manage your data stack the same way you manage your code.

Get started

Why Splitgraph and PostgreSQL?

Splitgraph connects your vast, unrelated data sources and puts them in a single, accessible place.

Unify your data stack

Splitgraph handles data integration, storage, transformation and discoverability for you. All that remains is adding a BI client.

Read more

Power your applications

Focus on building data-driven applications without worrying about where the data will come from.


Not just PostgreSQL...

Splitgraph supports data ingestion from over 100 SaaS services, as well as data federation to over a dozen databases. These are all made queryable over a PostgreSQL-compatible interface.


Optimized for analytics

Splitgraph stores data in a columnar format. This accelerates analytical queries and makes it perfect for dashboards, blogs and other read-intensive use cases.
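As a toy illustration of why column-oriented storage helps here (a pure-Python sketch, not Splitgraph's actual storage engine), consider an aggregate over a single field:

```python
# Toy illustration of row-oriented vs. column-oriented layouts. This is not
# Splitgraph's real engine; it just shows why analytical queries that touch
# one column benefit from a columnar layout.

# Row-oriented: each record is stored together.
rows = [
    {"id": 1, "country": "DE", "amount": 10.0},
    {"id": 2, "country": "FR", "amount": 7.5},
    {"id": 3, "country": "DE", "amount": 2.5},
]

# Column-oriented: each field is stored contiguously.
columns = {
    "id": [1, 2, 3],
    "country": ["DE", "FR", "DE"],
    "amount": [10.0, 7.5, 2.5],
}

# An aggregate like SUM(amount) must visit every whole record in the
# row layout...
row_total = sum(r["amount"] for r in rows)

# ...but only one contiguous array in the columnar layout, which scans
# less data and compresses better (similar values are stored together).
col_total = sum(columns["amount"])

print(row_total, col_total)  # both 20.0
```

The same aggregate reads a fraction of the data in the columnar layout, which is why dashboards and other read-heavy analytical workloads benefit.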


Do more with PostgreSQL


PostgreSQL on Splitgraph

Read more about Splitgraph’s support for PostgreSQL, including documentation and sample queries you can run on PostgreSQL data with Splitgraph.

PostgreSQL overview

Connecting to Splitgraph

Splitgraph has a PostgreSQL-compatible endpoint that most BI clients can connect to.

Try it out