datahub-usac/high-cost-connect-america-fund-broadband-map-caf-r59r-rpip

Query the Data Delivery Network (DDN)

The easiest way to query any data on Splitgraph is via the "Data Delivery Network" (DDN). The DDN is a single endpoint that speaks the PostgreSQL wire protocol. Any Splitgraph user can connect to it at data.splitgraph.com:5432 and query any version of over 40,000 datasets that are hosted or proxied by Splitgraph.

For example, you can query the high_cost_connect_america_fund_broadband_map_caf table in this repository by referencing it like:

"datahub-usac/high-cost-connect-america-fund-broadband-map-caf-r59r-rpip:latest"."high_cost_connect_america_fund_broadband_map_caf"

or in a full query, like:

SELECT
    ":id", -- Socrata column ID
    "carrier", -- Standard name used to identify a study area. Typically, the carrier name is the same as the company name.
    "deployment_city", -- City of the deployment location.
    "deployment_address", -- Address of the deployment location.
    "longitude", -- Longitude of the deployment location.
    "company_name", -- Company name of the affiliated carrier, or in its absence, carrier name.
    "other_technology", -- This field is mandatory if a carrier selects “other technology” (option 7) from the list in the Technology field.
    "technology", -- CAF II Auction and RDOF carriers must report the type of technology used to deliver broadband service for all locations served. (This field is optional for carriers in other funds).
    "speed_tier", -- Broadband speed that meets the required minimum standard (varies by fund).
    "overlapping_locations", -- This field indicates whether there are multiple locations sharing the same latitude and longitude (up to 5 digits after the decimal).
    "study_area_code", -- Unique number assigned to each ETC based on its service area. A carrier with multiple service areas within a single state will have multiple SAC.
    "deployment_state", -- State of the deployment location.
    "deployment_zip_code", -- Zip Code of the deployment location.
    "sac_prim_state", -- The primary state assigned to the Study Area Code (SAC) is determined by the predominant area of broadband and voice service deployments, particularly in cases where the SAC boundary extends across state borders.
    "deployment_date", -- Date the deployment occurred.
    "filing_year", -- Year when data was reported in High Cost Universal Broadband (HUBB) portal.
    "latency", -- CAF II Auction, Alaska Plan and RDOF carriers must report whether they provide low-latency broadband service (as indicated by a 2) or high-latency broadband service (as indicated by a 1). (This field is optional for carriers in other funds).
    "latitude", -- Latitude of the deployment location.
    "locations_deployed", -- Number of household being served within a location.
    "census_block", -- Census block of the deployment location.
    "fund_type" -- Individual High Cost fund under which the carrier deployed. See High Cost Funds Glossary for details.
FROM
    "datahub-usac/high-cost-connect-america-fund-broadband-map-caf-r59r-rpip:latest"."high_cost_connect_america_fund_broadband_map_caf"
LIMIT 100;

Connecting to the DDN is easy. All you need is an existing SQL client that can connect to Postgres. As long as you have a SQL client ready, you'll be able to query datahub-usac/high-cost-connect-america-fund-broadband-map-caf-r59r-rpip with SQL in under 60 seconds.
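For example, assuming you have psql installed, connecting and running a quick query against the DDN might look like the following. The username and password are placeholders for your Splitgraph API credentials, and the database name ddn is an assumption; check your Splitgraph account settings for the exact connection details.

psql "postgresql://<api-key>:<api-secret>@data.splitgraph.com:5432/ddn" \
    -c 'SELECT COUNT(*) FROM "datahub-usac/high-cost-connect-america-fund-broadband-map-caf-r59r-rpip:latest"."high_cost_connect_america_fund_broadband_map_caf";'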

Query Your Local Engine

Install Splitgraph Locally
bash -c "$(curl -sL https://github.com/splitgraph/splitgraph/releases/latest/download/install.sh)"
 

Read the installation docs.

Splitgraph Cloud is built around Splitgraph Core (GitHub), which includes a local Splitgraph Engine packaged as a Docker image. Splitgraph Cloud is basically a scaled-up version of that local Engine. When you query the Data Delivery Network or the REST API, we mount the relevant datasets in an Engine on our servers and execute your query on it.

It's possible to run this engine locally. You'll need a Mac, Windows or Linux system to install sgr, and a Docker installation to run the engine. You don't need to know how to actually use Docker; sgr can manage the image, container and volume for you.
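As a rough sketch, once sgr is installed, bringing up a local engine usually amounts to a couple of commands like the ones below (command names come from the sgr CLI; check sgr engine --help on your version for the exact flags):

sgr engine add      # pull the engine Docker image and create a container
sgr engine start    # start the engine if it isn't already running
sgr status          # confirm sgr can reach the engine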

There are a few ways to ingest data into the local engine.

For external repositories, the Splitgraph Engine can "mount" upstream data sources by using sgr mount. This feature is built around Postgres Foreign Data Wrappers (FDW). You can write custom "mount handlers" for any upstream data source. For an example, we blogged about making a custom mount handler for HackerNews stories.
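As a hedged sketch (the postgres_fdw handler is part of sgr, but the exact option keys may vary between versions), mounting an upstream Postgres database into a local schema looks roughly like this; the hostname, credentials and schema names below are placeholders:

sgr mount postgres_fdw upstream_data -c user:password@upstream.example.com:5432 \
    -o '{"dbname": "source_db", "remote_schema": "public"}'

After the mount, the tables of the remote public schema appear as foreign tables in the local upstream_data schema and can be queried directly.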

For hosted datasets (like this repository), where the author has pushed Splitgraph Images to the repository, you can "clone" and/or "checkout" the data using sgr clone and sgr checkout.

Cloning Data

Because datahub-usac/high-cost-connect-america-fund-broadband-map-caf-r59r-rpip:latest is a Splitgraph Image, you can clone the data from Splitgraph Cloud to your local engine, where you can query it like any other Postgres database, using any of your existing tools.

First, install Splitgraph if you haven't already.

Clone the metadata with sgr clone

This will be quick, and does not download the actual data.

sgr clone datahub-usac/high-cost-connect-america-fund-broadband-map-caf-r59r-rpip

Checkout the data

Once you've cloned the data, you need to "checkout" the tag that you want. For example, to check out the latest tag:

sgr checkout datahub-usac/high-cost-connect-america-fund-broadband-map-caf-r59r-rpip:latest

This will download all the objects for the latest tag of datahub-usac/high-cost-connect-america-fund-broadband-map-caf-r59r-rpip and load them into the Splitgraph Engine. Depending on your connection speed and the size of the data, you will need to wait for the checkout to complete. Once it's complete, you will be able to query the data like you would any other Postgres database.
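For example, one quick way to sanity-check the checked-out data is sgr's built-in SQL shell; this is a sketch that assumes sgr sql accepts a -s/--schema flag for setting the search path to the repository schema:

sgr sql -s "datahub-usac/high-cost-connect-america-fund-broadband-map-caf-r59r-rpip" \
    "SELECT COUNT(*) FROM high_cost_connect_america_fund_broadband_map_caf"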

Alternatively, use "layered checkout" to avoid downloading all the data

If the data in datahub-usac/high-cost-connect-america-fund-broadband-map-caf-r59r-rpip:latest is too big to download all at once, or if you only need to query a subset of it, you can use a layered checkout:

sgr checkout --layered datahub-usac/high-cost-connect-america-fund-broadband-map-caf-r59r-rpip:latest

This will not download all the data; instead, it will create a schema of foreign tables that you can query as you would any other data. Splitgraph will lazily download the required objects as you query the data. In some cases, this might be faster or more efficient than a regular checkout.

Read the layered querying documentation to learn about when and why you might want to use layered queries.

Query the data with your existing tools

Once you've loaded the data into your local Splitgraph Engine, you can query it with any of your existing tools. As far as they're concerned, datahub-usac/high-cost-connect-america-fund-broadband-map-caf-r59r-rpip is just another Postgres schema.
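For example, with psql you might run something like the following against the local engine. The host, port, user, password and database name are placeholders for whatever your engine was configured with at setup; the schema name matches the repository you checked out:

psql "postgresql://sgr:<password>@localhost:5432/splitgraph" \
    -c 'SELECT "carrier", "deployment_state", "speed_tier" FROM "datahub-usac/high-cost-connect-america-fund-broadband-map-caf-r59r-rpip"."high_cost_connect_america_fund_broadband_map_caf" LIMIT 10;'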
