Vital Signs: Time in Congestion - Corridor (Updated October 2018)
VITAL SIGNS INDICATOR
Time Spent in Congestion (T7)
FULL MEASURE NAME
Time Spent in Congestion
LAST UPDATED
October 2018
DATA SOURCE
MTC/Iteris Congestion Analysis
No link available
CA Department of Finance Forms E-8 and E-5
http://www.dof.ca.gov/Forecasting/Demographics/Estimates/E-8/
http://www.dof.ca.gov/Forecasting/Demographics/Estimates/E-5/
CA Employment Development Department: Labor Market Information
http://www.labormarketinfo.edd.ca.gov/
CONTACT INFORMATION
vitalsigns.info@bayareametro.gov
METHODOLOGY NOTES (across all datasets for this indicator)
Time spent in congestion measures the hours drivers spend in congestion on freeway facilities, based on traffic data. In recent years, data for the Bay Area has come from INRIX, a company that collects real-time traffic information from a variety of sources, including mobile phone data and other GPS locator devices. The data provides traffic speeds on the region’s highways. Using historical INRIX data (and similar internal datasets for some of the earlier years), MTC calculates an annual time series of vehicle hours spent in congestion in the Bay Area. Time spent in congestion is defined as the average daily hours spent in congestion on Tuesdays, Wednesdays, and Thursdays during peak traffic months on freeway facilities. The indicator focuses on weekdays because traffic congestion is generally greater on these days; it does not capture congestion on local streets due to data unavailability.
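As an illustration of that definition, here is a minimal Python sketch, assuming a hypothetical table of daily congested vehicle-hours (the file and column names are placeholders, not the actual MTC/INRIX data layout):
import pandas as pd

# Hypothetical input: one row per day with total congested vehicle-hours
# on freeway facilities (file and column names are placeholders).
daily = pd.read_csv("daily_congested_hours.csv", parse_dates=["date"])

# Keep midweek days only (Monday=0, so Tuesday=1, Wednesday=2, Thursday=3).
midweek = daily[daily["date"].dt.dayofweek.isin([1, 2, 3])]

# Average daily hours spent in congestion across the midweek days.
avg_daily_congested_hours = midweek["congested_vehicle_hours"].mean()
print(avg_daily_congested_hours)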
This congestion indicator emphasizes recurring delay (as opposed to also including non-recurring delay), capturing the extent of delay caused by routine traffic volumes rather than congestion caused by unusual circumstances. Recurring delay is identified on a specific freeway segment when vehicle speeds fall below 35 mph for a consistent period of more than 15 minutes. This definition is consistent with longstanding practice by MTC, Caltrans, and the U.S. Department of Transportation, as speeds below 35 mph result in significantly less efficient traffic operations: 35 mph is the speed at which vehicle throughput is greatest, and speeds either above or below it reduce throughput. The methodology therefore measures the extra travel time experienced relative to 35 mph, rather than relative to the posted speed limit.
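A minimal sketch of that threshold logic in Python, assuming 5-minute speed observations per freeway segment (the file, column names, and observation interval are assumptions for illustration, not the MTC/INRIX schema):
import pandas as pd

SPEED_THRESHOLD_MPH = 35   # throughput-based congestion threshold
MIN_DURATION_MIN = 15      # consistent delay must exceed 15 minutes
INTERVAL_MIN = 5           # assumed observation interval

# Hypothetical input: speed observations per segment at 5-minute intervals.
obs = pd.read_csv("segment_speeds.csv", parse_dates=["timestamp"])

def has_recurring_congestion(speeds_mph) -> bool:
    """True if speeds stay below 35 mph for more than 15 consecutive minutes."""
    run = 0
    for speed in speeds_mph:
        run = run + INTERVAL_MIN if speed < SPEED_THRESHOLD_MPH else 0
        if run > MIN_DURATION_MIN:
            return True
    return False

congested_segments = (
    obs.sort_values("timestamp")
       .groupby("segment_id")["speed_mph"]
       .apply(has_recurring_congestion)
)
print(congested_segments[congested_segments].index.tolist())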
To provide a mathematical example of how the indicator is calculated on a segment basis: 1,000 vehicles traveling on a congested segment for a quarter hour (15 minutes) each yields 1,000 vehicles x 0.25 hours of congestion per vehicle = 250 hours of congestion, which is equivalent to 100 vehicles traveling on a congested segment for 2.5 hours each, or 100 vehicles x 2.5 hours of congestion per vehicle = 250 hours of congestion. In this way, the measure captures the impacts of both slow speeds and heavy traffic volumes.
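The same arithmetic as a short Python check, using the figures from the example above:
# 1,000 vehicles congested for 0.25 hours each ...
case_a = 1_000 * 0.25   # = 250 vehicle-hours of congestion

# ... is equivalent to 100 vehicles congested for 2.5 hours each.
case_b = 100 * 2.5      # = 250 vehicle-hours of congestion

assert case_a == case_b == 250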
MTC calculates two measures of delay: congested delay, or delay that occurs when speeds are below 35 miles per hour, and total delay, or delay that occurs when speeds are below the posted speed limit. To illustrate, if 1,000 vehicles are traveling at 30 miles per hour on a one-mile segment, this represents 4.76 vehicle hours of congested delay: (1,000 vehicles x 1 mile / 30 miles per hour) - (1,000 vehicles x 1 mile / 35 miles per hour) = 33.33 vehicle hours - 28.57 vehicle hours = 4.76 vehicle hours. If the posted speed limit on the segment is 60 miles per hour, total delay would be 16.67 vehicle hours: (1,000 vehicles x 1 mile / 30 miles per hour) - (1,000 vehicles x 1 mile / 60 miles per hour) = 33.33 vehicle hours - 16.67 vehicle hours = 16.67 vehicle hours.
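The worked example restated in Python (the helper function is purely illustrative, not MTC's implementation):
def delay_vehicle_hours(vehicles, miles, actual_mph, reference_mph):
    """Extra vehicle-hours of travel relative to a reference speed."""
    return vehicles * miles / actual_mph - vehicles * miles / reference_mph

# 1,000 vehicles at 30 mph on a one-mile segment:
congested = delay_vehicle_hours(1_000, 1, 30, 35)   # ~4.76 vehicle-hours
total = delay_vehicle_hours(1_000, 1, 30, 60)       # ~16.67 vehicle-hours

print(round(congested, 2), round(total, 2))   # 4.76 16.67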
Data sources listed above were used to calculate per-capita and per-worker statistics. Top congested corridors are ranked by total vehicle hours of delay, meaning that the highlighted corridors reflect a combination of slow speeds and heavy traffic volumes.
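A hedged sketch of how such derived statistics could be computed, assuming placeholder corridor delay results and placeholder regional population and worker totals standing in for the Department of Finance and EDD sources listed above:
import pandas as pd

# Hypothetical corridor-level delay results (placeholder file and columns).
corridors = pd.read_csv("corridor_delay.csv")   # corridor, total_delay_hours

# Placeholder regional totals; the real figures come from the E-5/E-8 and
# Labor Market Information sources listed above.
population = 7_700_000
workers = 4_000_000

regional_delay = corridors["total_delay_hours"].sum()
per_capita = regional_delay / population
per_worker = regional_delay / workers

# Rank corridors by total vehicle hours of delay, largest first.
top_corridors = corridors.sort_values("total_delay_hours", ascending=False).head(10)
print(per_capita, per_worker)
print(top_corridors)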
Querying over HTTP
Splitgraph serves as an HTTP API that lets you run SQL queries directly on
this data to power Web applications. For example:
curl https://data.splitgraph.com/sql/query/ddn \
-H "Content-Type: application/json" \
-d@-<<EOF
{"sql": "
SELECT *
FROM \"bayareametro-gov/vital-signs-time-in-congestion-corridor-updated-f57x-8ifw\".\"vital_signs_time_in_congestion_corridor_updated\"
LIMIT 100
"}
EOF
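The same query can also be issued from Python using the requests library; this is simply a translation of the curl call above:
import requests

# Same query as the curl example above, sent from Python.
query = """
SELECT *
FROM "bayareametro-gov/vital-signs-time-in-congestion-corridor-updated-f57x-8ifw"."vital_signs_time_in_congestion_corridor_updated"
LIMIT 100
"""

response = requests.post(
    "https://data.splitgraph.com/sql/query/ddn",
    json={"sql": query},
)
response.raise_for_status()
print(response.json())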
See the Splitgraph documentation for more information.