cambridgema-gov/cambridge-building-energy-and-water-use-data-72g6-j7aq

Query the Data Delivery Network (DDN)

The easiest way to query any data on Splitgraph is via the "Data Delivery Network" (DDN). The DDN is a single endpoint that speaks the PostgreSQL wire protocol. Any Splitgraph user can connect to it at data.splitgraph.com:5432 and query any version of over 40,000 datasets that are hosted or proxied by Splitgraph.

For example, you can query the cambridge_building_energy_and_water_use_data table in this repository by referencing it like:

"cambridgema-gov/cambridge-building-energy-and-water-use-data-72g6-j7aq:latest"."cambridge_building_energy_and_water_use_data"

or in a full query, like:

SELECT
    ":id", -- Socrata column ID
    "kerosene_use_kbtu", -- Amount of kerosene used in calendar year, in kBtu, user submitted.
    "buildings_included", -- The list of building IDs included in this report. Building IDs come from the Cambridge GIS building footprints layer. The IDs included in a given data year represent the building IDs as they existed at that time. Building IDs may have changed over time, even though they refer to the same building. You can ensure comparability across years by comparing properties with the same Reporting ID, even if the list of buildings included may change for different data years.
    "fuel_oil_4_use_kbtu", -- Amount of fuel oil #4 used in calendar year, in kBtu, user submitted.
    "natural_gas_use_kbtu", -- Amount of natural gas used in calendar year, in kBtu, user submitted.
    "data_year", -- The year in which the energy and water in the report was used. Properties subject to BEUDO must report their data for the prior year (the Data Year) by May of the following year.
    "beudo_category", -- Properties may be subject to the BEUDO ordinance under three categories: Residential (50+ units), Non-Residential (25,000+ SF), or Municipal (municipal properties 10,000+ SF).
    ":@computed_region_guic_hr4a", -- This column was automatically created in order to record in what polygon from the dataset 'Police Neighborhood Regions' (guic-hr4a) the point in column 'location' is located.  This enables the creation of region maps (choropleths) in the visualization canvas and data lens.
    ":@computed_region_v7jj_366k", -- This column was automatically created in order to record in what polygon from the dataset 'Police Response Districts' (v7jj-366k) the point in column 'location' is located.  This enables the creation of region maps (choropleths) in the visualization canvas and data lens.
    "owner_line_2", -- Property owner information continued.
    "longitude", -- The longitude of the point assigned for the location of this report. If a report represents a single building, this is the center of the building footprint. If a report represents multiple buildings, this is the center of the parcel.
    "latitude", -- The latitude of the point assigned for the location of this report. If a report represents a single building, this is the center of the building footprint. If a report represents multiple buildings, this is the center of the parcel.
    "indoor_water_intensity_all_water_sources_gal_ft2", -- Total indoor water use per square foot, in gal/ft2, generated by ENERGY STAR Portfolio Manager.
    "water_use_all_water_sources_kgal", -- Total water use, user submitted.
    "location",
    "total_ghg_emissions_intensity_kgco2e_ft2", -- Total greenhouse gas emissions intensity, emissions per square foot, in kgCO2e/ft2, generated by ENERGY STAR Portfolio Manager.
    "total_ghg_emissions_metric_tons_co2e", -- Total greenhouse gas emissions, in metric tons CO2e, generated by ENERGY STAR Portfolio Manager.
    "weather_normalized_source_eui_kbtu_ft2", -- The weather normalized source energy use intensity (EUI), the amount of weather normalized site energy used per square foot, generated by ENERGY STAR Portfolio Manager.
    "source_eui_kbtu_ft2", -- Source energy use intensity (EUI), the amount of source energy used per square foot, generated by ENERGY STAR Portfolio Manager.
    "weather_normalized_source_energy_use_kbtu", -- The total weather normalized amount of source energy used, in kBtu, generated by ENERGY STAR Portfolio Manager.
    "source_energy_use_kbtu", -- The total amount of source energy used, in kBtu, user submitted.
    "weather_normalized_site_eui_kbtu_ft2", -- The weather normalized site energy use intensity (EUI), the amount of weather normalized site energy used per square foot, generated by ENERGY STAR Portfolio Manager.
    "site_eui_kbtu_ft2", -- Site energy use intensity (EUI), the amount of site energy used per square foot, generated by ENERGY STAR Portfolio Manager.
    "weather_normalized_site_energy_use_kbtu", -- The total weather normalized amount of site energy used, in kBtu, generated by ENERGY STAR Portfolio Manager.
    "site_energy_use_kbtu", -- The total amount of site energy used, in kBtu, user submitted.
    "electricity_use_generated_from_onsite_renewable_systems_kwh", -- Amount of electricity used that was generated from onsite renewable systems, in kWh, user submitted.
    "district_steam_use_kbtu", -- Amount of district steam used in calendar year, in kBtu, user submitted.
    "district_chilled_water_use_kbtu", -- Amount of district chilled water used in calendar year, in kBtu, user submitted.
    "diesel_2_use_kbtu", -- Amount of diesel #2 used in calendar year, in kBtu, user submitted.
    ":@computed_region_rffn_qbt6", -- This column was automatically created in order to record in what polygon from the dataset 'cambridge_neighborhoods' (rffn-qbt6) the point in column 'location' is located.  This enables the creation of region maps (choropleths) in the visualization canvas and data lens.
    ":@computed_region_swkg_bavi", -- This column was automatically created in order to record in what polygon from the dataset 'cambridge_cdd_zoning' (swkg-bavi) the point in column 'location' is located.  This enables the creation of region maps (choropleths) in the visualization canvas and data lens.
    ":@computed_region_e4yd_rwk4", -- This column was automatically created in order to record in what polygon from the dataset 'Census Blocks 2010' (e4yd-rwk4) the point in column 'location' is located.  This enables the creation of region maps (choropleths) in the visualization canvas and data lens.
    ":@computed_region_rcj3_ccgu", -- This column was automatically created in order to record in what polygon from the dataset 'cambridge_zipcodes' (rcj3-ccgu) the point in column 'location' is located.  This enables the creation of region maps (choropleths) in the visualization canvas and data lens.
    "reporting_id", -- The ID assigned to this report. Building IDs (Buildings Included) and parcel IDs (MapLot) can change over time even when there are no changes to the actual buildings. If there are significant changes for a subject property, such as buildings being demolished, new buildings constructed, or changes in parcel boundaries that change the buildings included on the parcel, a property will be assigned a new Reporting ID. When comparing data across multiple data years, you should only make direct comparisons between the same reporting IDs.
    "ml", -- The map-lot ID locator for the property. Some reports include buildings on multiple map/lots.
    "annual_report_received", -- Describes whether the City received an energy use report for the building. "Yes" values indicate reports that include values for grid-purchased electricity or natural gas use. "No" values indicate that no report was received, or the report did not include electricity or natural gas use.
    "pd_parcel_living_area", -- The total living area of all buildings on the parcel from the Property Database. The report itself may not include all buildings on the parcel or may include buildings on other parcels as well.
    "pd_parcel_units", -- The total number of residential units in all buildings on the parcel (MapLot) from the Property Database. The report itself may not include all buildings on the parcel or may include buildings on other parcels as well.
    "energy_star_score", -- 1-100 energy rating developed by the U.S. EPA, generated by ENERGY STAR Portfolio Manager.
    "electricity_use_grid_purchase_kwh", -- Amount of electricity used in calendar year, in kWh, user submitted.
    "natural_gas_use_therms", -- Amount of natural gas used in calendar year, in therms, user submitted.
    "fuel_oil_1_use_kbtu", -- Amount of fuel oil #1 used in calendar year, in kBtu, user submitted.
    "address", -- The street address assigned to this report. Reports may cover multiple buildings with multiple addresses.
    "buildings_included_count", -- The count of building IDs in Buildings Included field.
    "primary_property_type_self_selected", -- Primary property type for this report, user submitted.
    "all_property_uses", -- The list of all property uses as reported in ENERGY STAR Portfolio manager. May include details about the total area of each use.
    "reported_residential_units", -- The number of residential units in buildings covered by this report, as entered in ENERGY STAR Portfolio manager.
    "owner", -- The property owner.
    "fuel_oil_2_use_kbtu", -- Amount of fuel oil #2 used in calendar year, in kBtu, user submitted.
    "electricity_use_grid_purchase_kbtu", -- Amount of electricity used in calendar year, in kBtu, user submitted.
    "property_gfa_self_reported_ft2", -- The building’s gross floor area, user submitted.
    "year_built", -- The value entered in ENERGY STAR Portfolio Manager for the construction year. Reports may include multiple buildings with different construction years.
    "fuel_oil_5_6_use_kbtu" -- Amount of fuel oil #5 & 6 used in calendar year, in kBtu, user submitted.
FROM
    "cambridgema-gov/cambridge-building-energy-and-water-use-data-72g6-j7aq:latest"."cambridge_building_energy_and_water_use_data"
LIMIT 100;

Connecting to the DDN is easy. All you need is an existing SQL client that can connect to Postgres, and you'll be able to query cambridgema-gov/cambridge-building-energy-and-water-use-data-72g6-j7aq with SQL in under 60 seconds.
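
For example, a connection with psql might look like the sketch below. The database name ddn and the placeholder API key/secret credentials are assumptions; substitute the values from your own Splitgraph account.

# Connect to the DDN and run an ad-hoc query (placeholder credentials).
psql "postgresql://<api_key>:<api_secret>@data.splitgraph.com:5432/ddn" -c '
  SELECT "data_year", "address", "site_eui_kbtu_ft2"
  FROM "cambridgema-gov/cambridge-building-energy-and-water-use-data-72g6-j7aq:latest"."cambridge_building_energy_and_water_use_data"
  LIMIT 10;'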

This repository is an "external" repository. That means it's hosted elsewhere, in this case at data.cambridgema.gov. When you querycambridgema-gov/cambridge-building-energy-and-water-use-data-72g6-j7aq:latest on the DDN, we "mount" the repository using the socrata mount handler. The mount handler proxies your SQL query to the upstream data source, translating it from SQL to the relevant language (in this case SoQL).

We also cache query responses on the DDN, but because the DDN runs on multiple nodes, a CACHE_HIT is only guaranteed for subsequent queries that land on the same node.

Query Your Local Engine

Install Splitgraph Locally
bash -c "$(curl -sL https://github.com/splitgraph/splitgraph/releases/latest/download/install.sh)"
 

Read the installation docs.

Splitgraph Cloud is built around Splitgraph Core (GitHub), which includes a local Splitgraph Engine packaged as a Docker image. Splitgraph Cloud is basically a scaled-up version of that local Engine. When you query the Data Delivery Network or the REST API, we mount the relevant datasets in an Engine on our servers and execute your query on it.

It's possible to run this engine locally. You'll need a Mac, Windows, or Linux system to install sgr, and a Docker installation to run the engine. You don't need to know how to actually use Docker; sgr can manage the image, container, and volume for you.
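
As a quick sketch (assuming sgr and Docker are already installed), initializing and inspecting a local engine looks roughly like this:

# Pull the engine Docker image, create the container and data volume, and start it.
sgr engine add

# List the engines that sgr knows about and their status.
sgr engine list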

There are a few ways to ingest data into the local engine.

For external repositories (like this repository), the Splitgraph Engine can "mount" upstream data sources by using sgr mount. This feature is built around Postgres Foreign Data Wrappers (FDW). You can write custom "mount handlers" for any upstream data source. For an example, we blogged about making a custom mount handler for HackerNews stories.

For hosted datasets, where the author has pushed Splitgraph Images to the repository, you can "clone" and/or "checkout" the data using sgr clone and sgr checkout, as sketched below.
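
Here, some-namespace/some-repository is a hypothetical hosted repository; this particular dataset is external, so it must be mounted instead (see below).

# Download the repository's images and metadata from Splitgraph
# (hypothetical repository; this dataset itself is external and cannot be cloned).
sgr clone some-namespace/some-repository

# Materialize the latest image as a schema on the local engine.
sgr checkout some-namespace/some-repository:latest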

Mounting Data

This repository is an external repository: it's hosted at data.cambridgema.gov and indexed by Splitgraph rather than hosted by Splitgraph itself. Because there is no actual Splitgraph image, you cannot use sgr clone to get the data. Instead, you can use the socrata adapter with the sgr mount command. Then, if you want, you can import the data and turn it into a Splitgraph image that others can clone.

First, install Splitgraph if you haven't already.

Mount the table with sgr mount

sgr mount socrata \
  "cambridgema-gov/cambridge-building-energy-and-water-use-data-72g6-j7aq" \
  --handler-options '{
    "domain": "data.cambridgema.gov",
    "tables": {
        "cambridge_building_energy_and_water_use_data": "72g6-j7aq"
    }
}'

That's it! Now you can query the data in the mounted table like any other Postgres table.
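
For example, one way to check the mount is with sgr sql, which runs a statement against the local engine; the schema name below matches the mountpoint passed to sgr mount above.

# Query the mounted foreign table through the local engine.
sgr sql 'SELECT "data_year", "address", "site_eui_kbtu_ft2"
FROM "cambridgema-gov/cambridge-building-energy-and-water-use-data-72g6-j7aq"."cambridge_building_energy_and_water_use_data"
LIMIT 10;'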

Query the data with your existing tools

Once you've loaded the data into your local Splitgraph engine, you can query it with any of your existing tools. As far as they're concerned, cambridgema-gov/cambridge-building-energy-and-water-use-data-72g6-j7aq is just another Postgres schema.
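
For instance, here is a sketch with psql pointed at the local engine. The connection details (user sgr, database splitgraph on localhost:5432) are assumed defaults; adjust them to match your local engine configuration.

# Query the mounted table from any Postgres client; psql is just one example.
# The credentials and database name are assumed defaults; check your engine config.
psql "postgresql://sgr@localhost:5432/splitgraph" -c '
  SELECT COUNT(*)
  FROM "cambridgema-gov/cambridge-building-energy-and-water-use-data-72g6-j7aq"."cambridge_building_energy_and_water_use_data";'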