repositories:
- namespace: CHANGEME
  repository: airbyte-postgres
  # Catalog-specific metadata for the repository. Optional.
  metadata:
    readme:
      text: Readme
    description: Description of the repository
    topics:
    - sample_topic
  # Data source settings for the repository. Optional.
  external:
    # Name of the credential that the plugin uses. This can also be a credential_id if the
    # credential is already registered on Splitgraph.
    credential: airbyte-postgres
    plugin: airbyte-postgres
    # Plugin-specific parameters matching the plugin's parameters schema
    params:
      host: '' # REQUIRED. Host. Hostname of the database.
      port: '5432' # REQUIRED. Port. Port of the database.
      database: '' # REQUIRED. DB Name. Name of the database.
      username: '' # REQUIRED. User. Username to use to access the database.
      normalization_mode: basic # Post-ingestion normalization. Whether to normalize raw Airbyte tables. `none` is no normalization, `basic` is Airbyte's basic normalization, `custom` is a custom dbt transformation on the data. One of none, basic, custom
      normalization_git_branch: master # dbt model Git branch. Branch or commit hash to use for the normalization dbt project.
      schemas: # Schemas. The list of schemas to sync from. Defaults to user. Case sensitive.
      ssl: false # Connect using SSL. Encrypt client/server communications for increased security.
      replication_method: # Replication Method. Replication method to use for extracting data from the database. Choose one of:
      - # Standard. Standard replication requires no setup on the DB side but will not be able to represent deletions incrementally.
        method: Standard # REQUIRED. Constant
      - # Logical Replication (CDC). Logical replication uses the Postgres write-ahead log (WAL) to detect inserts, updates, and deletes. This needs to be configured on the source database itself. Only available on Postgres 10 and above. Read the <a href="https://docs.airbyte.com/integrations/sources/postgres">Postgres Source</a> docs for more information.
        method: CDC # REQUIRED. Constant
        replication_slot: '' # REQUIRED. Replication Slot. A plug-in logical replication slot.
        publication: '' # REQUIRED. Publication. A Postgres publication used for consuming changes.
        plugin: pgoutput # Plugin. A logical decoding plug-in installed on the PostgreSQL server. The `pgoutput` plug-in is used by default. If the replicated table contains a lot of big jsonb values, it is recommended to use the `wal2json` plug-in. For more information about the `wal2json` plug-in, read the <a href="https://docs.airbyte.com/integrations/sources/postgres">Postgres Source</a> docs. One of pgoutput, wal2json
    tables:
      sample_table:
        # Plugin-specific table parameters matching the plugin's schema
        options:
          airbyte_cursor_field: # Cursor field(s). Fields in this stream to be used as a cursor for incremental replication (overrides Airbyte configuration's cursor_field)
          airbyte_primary_key_field: # Primary key field(s). Fields in this stream to be used as a primary key for deduplication (overrides Airbyte configuration's primary_key)
        # Schema of the table, a list of objects with `name` and `type`. If set to ``, will infer.
        schema:
    # Whether live querying is enabled for the plugin (creates a "live" tag in the
    # repository proxying to the data source). The plugin must support live querying.
    is_live: false
  # Ingestion schedule settings. Disable this if you're using GitHub Actions or other methods
  # to trigger ingestion.
  schedule:
credentials:
  airbyte-postgres: # This is the name of this credential that "external" sections can reference.
    plugin: airbyte-postgres
    # Credential-specific data matching the plugin's credential schema
    data:
      normalization_git_url: '' # dbt model Git URL. For `custom` normalization, a URL to the Git repo with the dbt project, for example, `https://uname:firstname.lastname@example.org/organisation/repository.git`.
      password: '' # Password. Password associated with the username.
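Since the template above ships with the REQUIRED parameters left blank, it can be useful to check them before committing the file. The following is a minimal validation sketch (not part of Splitgraph's tooling — `missing_required` is a hypothetical helper, and the inline dict stands in for the result of parsing splitgraph.yml with a YAML library):

```python
# Minimal sketch: flag REQUIRED airbyte-postgres params that are still empty.
# The dict mirrors the parsed structure of the splitgraph.yml template above;
# in practice you would obtain it via e.g. yaml.safe_load(open("splitgraph.yml")).
REQUIRED_PARAMS = ["host", "port", "database", "username"]

def missing_required(config: dict) -> list:
    """Return the names of required params left empty in any repository."""
    missing = []
    for repo in config.get("repositories", []):
        params = repo.get("external", {}).get("params", {})
        missing += [p for p in REQUIRED_PARAMS if not params.get(p)]
    return missing

# Example: the template with host/database/username still blank.
config = {
    "repositories": [
        {
            "namespace": "CHANGEME",
            "repository": "airbyte-postgres",
            "external": {
                "plugin": "airbyte-postgres",
                "params": {"host": "", "port": "5432", "database": "", "username": ""},
            },
        }
    ]
}
print(missing_required(config))  # → ['host', 'database', 'username']
```

A check like this fits naturally into a pre-commit hook or a CI step alongside the version-controlled splitgraph.yml.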
Use our splitgraph.yml format to check your Splitgraph configuration into version control, trigger ingestion jobs, and manage your data stack the same way you manage your code.
Splitgraph connects your disparate data sources and puts them in a single, accessible place.
Splitgraph handles data integration, storage, transformation and discoverability for you. All that remains is adding a BI client.
Focus on building data-driven applications without worrying about where the data will come from.
Splitgraph supports data ingestion from over 100 SaaS services, as well as data federation to over a dozen databases. These are all made queryable over a PostgreSQL-compatible interface.
Splitgraph stores data in a columnar format. This accelerates analytical queries and makes it perfect for dashboards, blogs and other read-intensive use cases.
Read more about Splitgraph’s support for PostgreSQL, including its documentation and sample queries you can run on PostgreSQL data with Splitgraph.
Splitgraph has a PostgreSQL-compatible endpoint that most BI clients can connect to.
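As a sketch of what connecting looks like in practice: because the endpoint speaks the PostgreSQL wire protocol, a client only needs an ordinary PostgreSQL connection string. The placeholders below are hypothetical, and the `data.splitgraph.com:5432/ddn` endpoint is an assumption — check your Splitgraph connection settings for the exact host, port, and database name:

```python
from urllib.parse import quote

# Hypothetical placeholders -- substitute the API key/secret from your
# Splitgraph account settings.
API_KEY = "YOUR_API_KEY"
API_SECRET = "YOUR_API_SECRET"

# quote() percent-encodes characters that would break the URL (e.g. '/', '@').
dsn = (
    f"postgresql://{quote(API_KEY, safe='')}:{quote(API_SECRET, safe='')}"
    "@data.splitgraph.com:5432/ddn"
)
print(dsn)

# Any PostgreSQL driver or BI client accepts a DSN in this shape, e.g.:
#   psycopg2.connect(dsn)
```

The same connection string works wherever a standard PostgreSQL data source can be configured, which is what lets most BI clients connect without Splitgraph-specific support.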