Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
INSTALL file for transitfeed distribution

To download and install in one step, make sure you have easy_install
available and run

easy_install transitfeed

Since you got this far, chances are you have downloaded a copy of the source
code. Install with the command

python setup.py install

If you don't want to install, you may be able to run the scripts from this
directory. For example, try running

./feedvalidator.py -n test/data/good_feed.zip
Metadata-Version: 1.0
Name: transitfeed
Version: 1.2.5
Summary: Google Transit Feed Specification library and tools
Home-page: http://code.google.com/p/googletransitdatafeed/
Author: Tom Brown
Author-email: tom.brown.code@gmail.com
License: Apache License, Version 2.0
Download-URL: http://googletransitdatafeed.googlecode.com/files/transitfeed-1.2.5.tar.gz
Description: This module provides a library for reading, writing and validating Google Transit Feed Specification files. It includes some scripts that validate a feed, display it using the Google Maps API and the start of a KML importer and exporter.
Platform: OS Independent
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Information Technology
Classifier: Intended Audience :: Other Audience
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Topic :: Scientific/Engineering :: GIS
Classifier: Topic :: Software Development :: Libraries :: Python Modules
README file for transitfeed distribution

This distribution contains a library to help you parse and generate Google
Transit Feed files. It also contains some sample tools that demonstrate the
library and are useful in their own right when maintaining Google
Transit Feed files. You may fetch the specification from
http://code.google.com/transit/spec/transit_feed_specification.htm

See INSTALL for installation instructions.

The most recent source can be downloaded from our Subversion repository at
http://googletransitdatafeed.googlecode.com/svn/trunk/python/

See http://code.google.com/p/googletransitdatafeed/wiki/TransitFeedDistribution
for more information.
__doc__ = """
Package holding files for Google Transit Feed Specification Schedule Viewer.
"""
# This package contains the data files for schedule_viewer.py, a script that
# comes with the transitfeed distribution. According to the thread
# "[Distutils] distutils data_files and setuptools.pkg_resources are driving
# me crazy" this is the easiest way to include data files. My experience
# agrees. - Tom 2007-05-29
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:v="urn:schemas-microsoft-com:vml">
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8"/>
<title>[agency]</title>
<link href="file/style.css" rel="stylesheet" type="text/css" />
<style type="text/css">
v\:* {
  behavior:url(#default#VML);
}
</style>
<script src="http://[host]/maps?file=api&v=2&key=[key]" type="text/javascript"></script>
<script src="/file/labeled_marker.js" type="text/javascript"></script>
<script language="VBScript" src="/file/svgcheck.vbs"></script>
<script type="text/javascript">
//<![CDATA[
var map;
// Set to true when debugging for log statements about HTTP requests.
var log = false;
var twelveHourTime = false; // set to true to see AM/PM
var selectedRoute = null;
var forbid_editing = [forbid_editing];
function load() {
  if (GBrowserIsCompatible()) {
    sizeRouteList();
    var map_dom = document.getElementById("map");
    map = new GMap2(map_dom);
    map.addControl(new GLargeMapControl());
    map.addControl(new GMapTypeControl());
    map.addControl(new GOverviewMapControl());
    map.enableScrollWheelZoom();
    var bb = new GLatLngBounds(new GLatLng([min_lat], [min_lon]), new GLatLng([max_lat], [max_lon]));
    map.setCenter(bb.getCenter(), map.getBoundsZoomLevel(bb));
    map.enableDoubleClickZoom();
    initIcons();
    GEvent.addListener(map, "moveend", callbackMoveEnd);
    GEvent.addListener(map, "zoomend", callbackZoomEnd);
    callbackMoveEnd(); // Pretend we just moved to current center
    fetchRoutes();
  }
}
function callbackZoomEnd() {
}
function callbackMoveEnd() {
  // Map moved, search for stops near the center
  fetchStopsInBounds(map.getBounds());
}
/**
 * Fetch a sample of stops in the bounding box.
 */
function fetchStopsInBounds(bounds) {
  var url = "/json/boundboxstops?n=" + bounds.getNorthEast().lat()
            + "&e=" + bounds.getNorthEast().lng()
            + "&s=" + bounds.getSouthWest().lat()
            + "&w=" + bounds.getSouthWest().lng()
            + "&limit=50";
  if (log)
    GLog.writeUrl(url);
  GDownloadUrl(url, callbackDisplayStopsBackground);
}
/**
 * Displays stops returned by the server on the map. Expected to be called
 * when GDownloadUrl finishes.
 *
 * @param {String} data JSON encoded list of lists, each
 *     containing a row of stops.txt
 * @param {Number} responseCode Response code from server
 */
function callbackDisplayStops(data, responseCode) {
  if (responseCode != 200) {
    return;
  }
  clearMap();
  var stops = eval(data);
  if (stops.length == 1) {
    var marker = addStopMarkerFromList(stops[0], true);
    fetchStopInfoWindow(marker);
  } else {
    for (var i = 0; i < stops.length; ++i) {
      addStopMarkerFromList(stops[i], true);
    }
  }
}
function stopTextSearchSubmit() {
  var text = document.getElementById("stopTextSearchInput").value;
  var url = "/json/stopsearch?q=" + encodeURIComponent(text);
  if (log)
    GLog.writeUrl(url);
  GDownloadUrl(url, callbackDisplayStops);
}
function tripTextSearchSubmit() {
  var text = document.getElementById("tripTextSearchInput").value;
  selectTrip(text);
}
/**
 * Add stops markers to the map and remove stops no longer in the
 * background.
 */
function callbackDisplayStopsBackground(data, responseCode) {
  if (responseCode != 200) {
    return;
  }
  var stops = eval(data);
  // Make a list of all background markers
  var oldStopMarkers = {};
  for (var stopId in stopMarkersBackground) {
    oldStopMarkers[stopId] = 1;
  }
  // Add new markers to the map and remove from oldStopMarkers
  for (var i = 0; i < stops.length; ++i) {
    var marker = addStopMarkerFromList(stops[i], false);
    if (oldStopMarkers[marker.stopId]) {
      delete oldStopMarkers[marker.stopId];
    }
  }
  // Delete all markers that remain in oldStopMarkers
  for (var stopId in oldStopMarkers) {
    GEvent.removeListener(stopMarkersBackground[stopId].clickListener);
    map.removeOverlay(stopMarkersBackground[stopId]);
    delete stopMarkersBackground[stopId];
  }
}
/**
 * Remove all overlays from the map
 */
function clearMap() {
  boundsOfPolyLine = null;
  for (var stopId in stopMarkersSelected) {
    GEvent.removeListener(stopMarkersSelected[stopId].clickListener);
  }
  for (var stopId in stopMarkersBackground) {
    GEvent.removeListener(stopMarkersBackground[stopId].clickListener);
  }
  stopMarkersSelected = {};
  stopMarkersBackground = {};
  map.clearOverlays();
}
/**
 * Return a new GIcon used for stops
 */
function makeStopIcon() {
  var icon = new GIcon();
  icon.iconSize = new GSize(12, 20);
  icon.shadowSize = new GSize(22, 20);
  icon.iconAnchor = new GPoint(6, 20);
  icon.infoWindowAnchor = new GPoint(5, 1);
  return icon;
}
/**
 * Initialize icons. Call once during load.
 */
function initIcons() {
  iconSelected = makeStopIcon();
  iconSelected.image = "/file/mm_20_yellow.png";
  iconSelected.shadow = "/file/mm_20_shadow.png";
  iconBackground = makeStopIcon();
  iconBackground.image = "/file/mm_20_blue_trans.png";
  iconBackground.shadow = "/file/mm_20_shadow_trans.png";
  iconBackgroundStation = makeStopIcon();
  iconBackgroundStation.image = "/file/mm_20_red_trans.png";
  iconBackgroundStation.shadow = "/file/mm_20_shadow_trans.png";
}
var iconSelected;
var iconBackground;
var iconBackgroundStation;
// Map from stopId to GMarker object for stops selected because they are
// part of a trip, etc
var stopMarkersSelected = {};
// Map from stopId to GMarker object for stops found by the background
// passive search
var stopMarkersBackground = {};
/**
 * Add a stop to the map, given a row from stops.txt.
 */
function addStopMarkerFromList(list, selected, text) {
  return addStopMarker(list[0], list[1], list[2], list[3], list[4], selected, text);
}
/**
 * Add a stop to the map, returning the new marker
 */
function addStopMarker(stopId, stopName, stopLat, stopLon, locationType, selected, text) {
  if (stopMarkersSelected[stopId]) {
    // stop was selected
    var marker = stopMarkersSelected[stopId];
    if (text) {
      var oldText = marker.getText();
      if (oldText) {
        oldText = oldText + "<br>";
      }
      marker.setText(oldText + text);
    }
    return marker;
  }
  if (stopMarkersBackground[stopId]) {
    // Stop was in the background. Either delete it from the background or
    // leave it where it is.
    if (selected) {
      map.removeOverlay(stopMarkersBackground[stopId]);
      delete stopMarkersBackground[stopId];
    } else {
      return stopMarkersBackground[stopId];
    }
  }
  var icon;
  if (selected) {
    icon = iconSelected;
  } else if (locationType == 1) {
    icon = iconBackgroundStation;
  } else {
    icon = iconBackground;
  }
  var ll = new GLatLng(stopLat, stopLon);
  var marker;
  if (selected || text) {
    if (!text) {
      text = ""; // Make sure every selected icon has a text box, even if empty
    }
    var markerOpts = new Object();
    markerOpts.icon = icon;
    markerOpts.labelText = text;
    markerOpts.labelClass = "tooltip";
    markerOpts.labelOffset = new GSize(6, -20);
    marker = new LabeledMarker(ll, markerOpts);
  } else {
    marker = new GMarker(ll, {icon: icon, draggable: !forbid_editing});
  }
  marker.stopName = stopName;
  marker.stopId = stopId;
  if (selected) {
    stopMarkersSelected[stopId] = marker;
  } else {
    stopMarkersBackground[stopId] = marker;
  }
  map.addOverlay(marker);
  marker.clickListener = GEvent.addListener(marker, "click", function() { fetchStopInfoWindow(marker); });
  GEvent.addListener(marker, "dragend", function() {
    document.getElementById("edit").style.visibility = "visible";
    document.getElementById("edit_status").innerHTML = "updating...";
    changeStopLocation(marker);
  });
  return marker;
}
/**
 * Sends new location of a stop to server.
 */
function changeStopLocation(marker) {
  var url = "/json/setstoplocation?id=" +
      encodeURIComponent(marker.stopId) +
      "&lat=" + encodeURIComponent(marker.getLatLng().lat()) +
      "&lng=" + encodeURIComponent(marker.getLatLng().lng());
  GDownloadUrl(url, function(data, responseCode) {
    document.getElementById("edit_status").innerHTML = unescape(data);
  });
  if (log)
    GLog.writeUrl(url);
}
/**
 * Saves the current state of the data file opened at server side to file.
 */
function saveData() {
  var url = "/json/savedata";
  GDownloadUrl(url, function(data, responseCode) {
    document.getElementById("edit_status").innerHTML = data;
  });
  if (log)
    GLog.writeUrl(url);
}
/**
 * Fetch the next departing trips from the stop for display in an info
 * window.
 */
function fetchStopInfoWindow(marker) {
  var url = "/json/stoptrips?stop=" + encodeURIComponent(marker.stopId) + "&time=" + parseTimeInput();
  GDownloadUrl(url, function(data, responseCode) {
    callbackDisplayStopInfoWindow(marker, data, responseCode);
  });
  if (log)
    GLog.writeUrl(url);
}
function callbackDisplayStopInfoWindow(marker, data, responseCode) {
  if (responseCode != 200) {
    return;
  }
  var timeTrips = eval(data);
  var html = "<b>" + marker.stopName + "</b> (" + marker.stopId + ")<br>";
  var latLng = marker.getLatLng();
  html = html + "(" + latLng.lat() + ", " + latLng.lng() + ")<br>";
  html = html + "<table><tr><th>service_id<th>time<th>name</tr>";
  for (var i = 0; i < timeTrips.length; ++i) {
    var time = timeTrips[i][0];
    var tripid = timeTrips[i][1][0];
    var tripname = timeTrips[i][1][1];
    var service_id = timeTrips[i][1][2];
    var timepoint = timeTrips[i][2];
    html = html + "<tr onClick='map.closeInfoWindow();selectTrip(\"" +
        tripid + "\")'>" +
        "<td>" + service_id +
        "<td align='right'>" + (timepoint ? "" : "~") +
        formatTime(time) + "<td>" + tripname + "</tr>";
  }
  html = html + "</table>";
  marker.openInfoWindowHtml(html);
}
function leadingZero(digit) {
  if (digit < 10)
    return "0" + digit;
  else
    return "" + digit;
}
function formatTime(secSinceMidnight) {
  var hours = Math.floor(secSinceMidnight / 3600);
  var suffix = "";
  if (twelveHourTime) {
    // Times past 24:00:00 roll into the next day, so wrap before picking am/pm
    suffix = (hours % 24 >= 12) ? "p" : "a";
    suffix += (hours >= 24) ? " next day" : "";
    hours = hours % 12;
    if (hours == 0)
      hours = 12;
  }
  var minutes = Math.floor(secSinceMidnight / 60) % 60;
  var seconds = secSinceMidnight % 60;
  if (seconds == 0) {
    return hours + ":" + leadingZero(minutes) + suffix;
  } else {
    return hours + ":" + leadingZero(minutes) + ":" + leadingZero(seconds) + suffix;
  }
}
function parseTimeInput() {
  var text = document.getElementById("timeInput").value;
  var m = text.match(/([012]?\d):([012345]?\d)(:([012345]?\d))?/);
  if (m) {
    var seconds = parseInt(m[1], 10) * 3600;
    seconds += parseInt(m[2], 10) * 60;
    if (m[4]) {
      seconds += parseInt(m[4], 10);
    }
    return seconds;
  } else {
    if (log)
      GLog.write("Couldn't match " + text);
  }
}
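// The regular expression above drives the time-box parsing. As a quick sanity
// check, the same matching logic can be exercised outside the DOM; the
// parseTimeText helper below is hypothetical, introduced only for illustration.

```javascript
// Standalone sketch of parseTimeInput()'s matching logic, minus the DOM
// lookup: converts "H:MM" or "H:MM:SS" text to seconds since midnight.
function parseTimeText(text) {
  var m = text.match(/([012]?\d):([012345]?\d)(:([012345]?\d))?/);
  if (!m) {
    return null; // no recognizable time in the input
  }
  var seconds = parseInt(m[1], 10) * 3600 + parseInt(m[2], 10) * 60;
  if (m[4]) {
    seconds += parseInt(m[4], 10); // optional seconds component
  }
  return seconds;
}
// parseTimeText("8:30")    -> 30600
// parseTimeText("8:30:15") -> 30615
```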
/**
 * Create a string of dots that gets longer with the log of count.
 */
function countToRepeatedDots(count) {
  // Find log2(count) + 1; Math.log is the natural log, so divide by ln(2)
  var logCount = Math.ceil(Math.log(count) / 0.693148) + 1;
  return new Array(logCount + 1).join(".");
}
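// Since Math.log is the natural logarithm, dividing by ln(2) ≈ 0.693148 gives
// log2, so the dot string grows by one dot per doubling of the trip count. A
// copy of the function (duplicated here only so the sketch runs on its own):

```javascript
// Standalone copy of countToRepeatedDots() from above: the dot count is
// ceil(log2(count)) + 1, so it grows by one per doubling of count.
function countToRepeatedDots(count) {
  var logCount = Math.ceil(Math.log(count) / 0.693148) + 1;
  return new Array(logCount + 1).join(".");
}
// 1 trip -> "."   2 trips -> ".."   8 trips -> "...."
```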
function fetchRoutes() {
  var url = "/json/routes";
  if (log)
    GLog.writeUrl(url);
  GDownloadUrl(url, callbackDisplayRoutes);
}
function callbackDisplayRoutes(data, responseCode) {
  if (responseCode != 200) {
    return;
  }
  var routes = eval(data);
  var routesList = document.getElementById("routeList");
  while (routesList.hasChildNodes()) {
    routesList.removeChild(routesList.firstChild);
  }
  for (var i = 0; i < routes.length; ++i) {
    var routeId = routes[i][0];
    var shortName = document.createElement("span");
    shortName.className = "shortName";
    shortName.appendChild(document.createTextNode(routes[i][1] + " "));
    var routeName = routes[i][2];
    var elem = document.createElement("div");
    elem.appendChild(shortName);
    elem.appendChild(document.createTextNode(routeName));
    elem.id = "route_" + routeId;
    elem.className = "routeChoice";
    elem.title = routeName;
    GEvent.addDomListener(elem, "click", makeClosure(selectRoute, routeId));
    var routeContainer = document.createElement("div");
    routeContainer.id = "route_container_" + routeId;
    routeContainer.className = "routeContainer";
    routeContainer.appendChild(elem);
    routesList.appendChild(routeContainer);
  }
}
function selectRoute(routeId) {
  var routesList = document.getElementById("routeList");
  var routeSpans = routesList.getElementsByTagName("div");
  for (var i = 0; i < routeSpans.length; ++i) {
    if (routeSpans[i].className == "routeChoiceSelected") {
      routeSpans[i].className = "routeChoice";
    }
  }
  // remove any previously-expanded route
  var tripInfo = document.getElementById("tripInfo");
  if (tripInfo)
    tripInfo.parentNode.removeChild(tripInfo);
  selectedRoute = routeId;
  var span = document.getElementById("route_" + routeId);
  span.className = "routeChoiceSelected";
  fetchPatterns(routeId);
}
function fetchPatterns(routeId) {
  var url = "/json/routepatterns?route=" + encodeURIComponent(routeId) + "&time=" + parseTimeInput();
  if (log)
    GLog.writeUrl(url);
  GDownloadUrl(url, callbackDisplayPatterns);
}
function callbackDisplayPatterns(data, responseCode) {
  if (responseCode != 200) {
    return;
  }
  var div = document.createElement("div");
  div.className = "tripSection";
  div.id = "tripInfo";
  var firstTrip = null;
  var patterns = eval(data);
  clearMap();
  for (var i = 0; i < patterns.length; ++i) {
    var patternDiv = document.createElement("div");
    patternDiv.className = 'patternSection';
    div.appendChild(patternDiv);
    var pat = patterns[i]; // [patName, patId, len(early trips), trips, len(later trips), has_non_zero_trip_type]
    if (pat[5] == '1') {
      patternDiv.className += " unusualPattern";
    }
    patternDiv.appendChild(document.createTextNode(pat[0]));
    patternDiv.appendChild(document.createTextNode(", " + (pat[2] + pat[3].length + pat[4]) + " trips: "));
    if (pat[2] > 0) {
      patternDiv.appendChild(document.createTextNode(countToRepeatedDots(pat[2]) + " "));
    }
    for (var j = 0; j < pat[3].length; ++j) {
      var trip = pat[3][j];
      var tripId = trip[1];
      if ((i == 0) && (j == 0))
        firstTrip = tripId;
      patternDiv.appendChild(document.createTextNode(" "));
      var span = document.createElement("span");
      span.appendChild(document.createTextNode(formatTime(trip[0])));
      span.id = "trip_" + tripId;
      GEvent.addDomListener(span, "click", makeClosure(selectTrip, tripId));
      patternDiv.appendChild(span);
      span.className = "tripChoice";
    }
    if (pat[4] > 0) {
      patternDiv.appendChild(document.createTextNode(" " + countToRepeatedDots(pat[4])));
    }
    patternDiv.appendChild(document.createElement("br"));
  }
  var route = document.getElementById("route_container_" + selectedRoute);
  route.appendChild(div);
  if (firstTrip != null)
    selectTrip(firstTrip);
}
// Needed to get around limitation in javascript scope rules.
// See http://calculist.blogspot.com/2005/12/gotcha-gotcha.html
function makeClosure(f, a, b, c) {
  return function() { f(a, b, c); };
}
function make1ArgClosure(f, a, b, c) {
  return function(x) { f(x, a, b, c); };
}
function make2ArgClosure(f, a, b, c) {
  return function(x, y) { f(x, y, a, b, c); };
}
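// The gotcha these helpers work around: callbacks created inside a var loop
// all close over the same binding, so without the wrapper every listener would
// see the loop's final value. A minimal self-contained demonstration (the
// recording callback here is hypothetical, not part of the viewer):

```javascript
// makeClosure() copied from above so this sketch runs standalone.
function makeClosure(f, a, b, c) {
  return function() { f(a, b, c); };
}

var results = [];
var callbacks = [];
for (var i = 0; i < 3; ++i) {
  // Without makeClosure, pushing function() { results.push(i); } would
  // record 3, 3, 3 -- every closure shares the single var i binding.
  callbacks.push(makeClosure(function(id) { results.push(id); }, i));
}
callbacks.forEach(function(cb) { cb(); });
// results is now [0, 1, 2]
```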
function selectTrip(tripId) {
  var tripInfo = document.getElementById("tripInfo");
  if (tripInfo) {
    var tripSpans = tripInfo.getElementsByTagName('span');
    for (var i = 0; i < tripSpans.length; ++i) {
      tripSpans[i].className = 'tripChoice';
    }
  }
  var span = document.getElementById("trip_" + tripId);
  // Won't find the span if a different route is selected
  if (span) {
    span.className = 'tripChoiceSelected';
  }
  clearMap();
  var url = "/json/tripstoptimes?trip=" + encodeURIComponent(tripId);
  if (log)
    GLog.writeUrl(url);
  GDownloadUrl(url, callbackDisplayTripStopTimes);
  fetchTripPolyLine(tripId);
  fetchTripRows(tripId);
}
function callbackDisplayTripStopTimes(data, responseCode) {
  if (responseCode != 200) {
    return;
  }
  var stopsTimes = eval(data);
  if (!stopsTimes) return;
  displayTripStopTimes(stopsTimes[0], stopsTimes[1]);
}
function fetchTripPolyLine(tripId) {
  var url = "/json/tripshape?trip=" + encodeURIComponent(tripId);
  if (log)
    GLog.writeUrl(url);
  GDownloadUrl(url, callbackDisplayTripPolyLine);
}
function callbackDisplayTripPolyLine(data, responseCode) {
  if (responseCode != 200) {
    return;
  }
  var points = eval(data);
  if (!points) return;
  displayPolyLine(points);
}
var boundsOfPolyLine = null;
function expandBoundingBox(latLng) {
  if (boundsOfPolyLine == null) {
    boundsOfPolyLine = new GLatLngBounds(latLng, latLng);
  } else {
    boundsOfPolyLine.extend(latLng);
  }
}
/** | |
* Display a line given a list of points | |
* | |
* @param {Array} List of lat,lng pairs | |
*/ | |
function displayPolyLine(points) { | |
var linePoints = Array(); | |
for (i = 0; i < points.length; ++i) { | |
var ll = new GLatLng(points[i][0], points[i][1]); | |
expandBoundingBox(ll); | |
linePoints[linePoints.length] = ll; | |
} | |
var polyline = new GPolyline(linePoints, "#FF0000", 4); | |
map.addOverlay(polyline); | |
map.setCenter(boundsOfPolyLine.getCenter(), map.getBoundsZoomLevel(boundsOfPolyLine)); | |
} | |
function displayTripStopTimes(stops, times) { | |
for (var i = 0; i < stops.length; ++i) { | |
var marker; | |
if (times && times[i] != null) { | |
marker = addStopMarkerFromList(stops[i], true, formatTime(times[i])); | |
} else { | |
marker = addStopMarkerFromList(stops[i], true); | |
} | |
expandBoundingBox(marker.getPoint()); | |
} | |
map.setCenter(boundsOfPolyLine.getCenter(), map.getBoundsZoomLevel(boundsOfPolyLine)); | |
} | |
function fetchTripRows(tripId) { | |
var url = "/json/triprows?trip=" + encodeURIComponent(tripId); | |
if (log) | |
GLog.writeUrl(url); | |
GDownloadUrl(url, make2ArgClosure(callbackDisplayTripRows, tripId)); | |
} | |
function callbackDisplayTripRows(data, responseCode, tripId) { | |
if (responseCode != 200) { | |
return; | |
} | |
var rows = eval(data); | |
if (!rows) return; | |
var html = ""; | |
for (var i = 0; i < rows.length; ++i) { | |
var filename = rows[i][0]; | |
var row = rows[i][1]; | |
html += "<b>" + filename + "</b>: " + formatDictionary(row) + "<br>"; | |
} | |
html += svgTag("/ttablegraph?height=100&trip=" + tripId, "height='115' width='100%'"); | |
var bottombarDiv = document.getElementById("bottombar"); | |
bottombarDiv.style.display = "block"; | |
bottombarDiv.style.height = "175px"; | |
bottombarDiv.innerHTML = html; | |
sizeRouteList(); | |
} | |
/** | |
* Return HTML to embed a SVG object in this page. src is the location of | |
* the SVG and attributes is inserted directly into the object or embed | |
* tag. | |
*/ | |
function svgTag(src, attributes) { | |
if (navigator.userAgent.toLowerCase().indexOf("msie") != -1) { | |
if (isSVGControlInstalled()) { | |
return "<embed pluginspage='http://www.adobe.com/svg/viewer/install/' src='" + src + "' " + attributes +"></embed>"; | |
} else { | |
return "<p>Please install the <a href='http://www.adobe.com/svg/viewer/install/'>Adobe SVG Viewer</a> to get SVG support in IE</p>"; | |
} | |
} else { | |
return "<object data='" + src + "' type='image/svg+xml' " + attributes + "><p>No SVG support in your browser. Try Firefox 1.5 or newer or install the <a href='http://www.adobe.com/svg/viewer/install/'>Adobe SVG Viewer</a></p></object>"; | |
} | |
} | |
/** | |
* Format an object containing key-value pairs into a human-readable | |
* string. | |
*/ | |
function formatDictionary(d) { | |
var output = ""; | |
var first = 1; | |
for (var k in d) { | |
if (first) { | |
first = 0; | |
} else { | |
output += " "; | |
} | |
output += "<b>" + k + "</b>=" + d[k]; | |
} | |
return output; | |
} | |
function windowHeight() { | |
// Standard browsers (Mozilla, Safari, etc.) | |
if (self.innerHeight) | |
return self.innerHeight; | |
// IE 6 | |
if (document.documentElement && document.documentElement.clientHeight) | |
return document.documentElement.clientHeight; | |
// IE 5 | |
if (document.body) | |
return document.body.clientHeight; | |
// Just in case. | |
return 0; | |
} | |
function sizeRouteList() { | |
var bottombarHeight = 0; | |
var bottombarDiv = document.getElementById('bottombar'); | |
if (bottombarDiv.style.display != 'none') { | |
// style.marginTop is a string such as "5px"; parse it so the sum stays numeric | |
bottombarHeight = bottombarDiv.offsetHeight | |
+ (parseInt(bottombarDiv.style.marginTop, 10) || 0); | |
} | |
var height = windowHeight() - document.getElementById('topbar').offsetHeight - 15 - bottombarHeight; | |
document.getElementById('content').style.height = height + 'px'; | |
if (map) { | |
// Without this displayPolyLine does not use the correct map size | |
map.checkResize(); | |
} | |
} | |
//]]> | |
</script> | |
</head> | |
<body class='sidebar-left' onload="load();" onunload="GUnload()" onresize="sizeRouteList()"> | |
<div id='topbar'> | |
<div id="edit"> | |
<span id="edit_status">...</span> | |
<form onSubmit="saveData(); return false;"><input value="Save" type="submit"></form> | |
</div> | |
<div id="agencyHeader">[agency]</div> | |
</div> | |
<div id='content'> | |
<div id='sidebar-wrapper'><div id='sidebar'> | |
Time: <input type="text" value="8:00" width="9" id="timeInput"><br> | |
<form onSubmit="stopTextSearchSubmit(); return false;"> | |
Find Station: <input type="text" id="stopTextSearchInput"><input value="Search" type="submit"></form><br> | |
<form onSubmit="tripTextSearchSubmit(); return false;"> | |
Find Trip ID: <input type="text" id="tripTextSearchInput"><input value="Search" type="submit"></form><br> | |
<div id="routeList">routelist</div> | |
</div></div> | |
<div id='map-wrapper'> <div id='map'></div> </div> | |
</div> | |
<div id='bottombar'>bottom bar</div> | |
</body> | |
</html> | |
/* | |
* LabeledMarker Class | |
* | |
* Copyright 2007 Mike Purvis (http://uwmike.com) | |
* | |
* Licensed under the Apache License, Version 2.0 (the "License"); | |
* you may not use this file except in compliance with the License. | |
* You may obtain a copy of the License at | |
* | |
* http://www.apache.org/licenses/LICENSE-2.0 | |
* | |
* Unless required by applicable law or agreed to in writing, software | |
* distributed under the License is distributed on an "AS IS" BASIS, | |
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
* See the License for the specific language governing permissions and | |
* limitations under the License. | |
* | |
* This class extends the Maps API's standard GMarker class with the ability | |
* to support markers with textual labels. Please see articles here: | |
* | |
* http://googlemapsbook.com/2007/01/22/extending-gmarker/ | |
* http://googlemapsbook.com/2007/03/06/clickable-labeledmarker/ | |
*/ | |
/** | |
* Constructor for LabeledMarker, which picks up on strings from the GMarker | |
* options array, and then calls the GMarker constructor. | |
* | |
* @param {GLatLng} latlng | |
* @param {GMarkerOptions} Named optional arguments: | |
* opt_opts.labelText {String} text to place in the overlay div. | |
* opt_opts.labelClass {String} class to use for the overlay div. | |
* (default "markerLabel") | |
* opt_opts.labelOffset {GSize} label offset, the x- and y-distance between | |
* the marker's latlng and the upper-left corner of the text div. | |
*/ | |
function LabeledMarker(latlng, opt_opts){ | |
this.latlng_ = latlng; | |
this.opts_ = opt_opts; | |
this.initText_ = opt_opts.labelText || ""; | |
this.labelClass_ = opt_opts.labelClass || "markerLabel"; | |
this.labelOffset_ = opt_opts.labelOffset || new GSize(0, 0); | |
this.clickable_ = (opt_opts.clickable === undefined) ? true : opt_opts.clickable; | |
if (opt_opts.draggable) { | |
// This version of LabeledMarker doesn't support dragging. | |
opt_opts.draggable = false; | |
} | |
GMarker.apply(this, arguments); | |
} | |
// It's a limitation of JavaScript inheritance that we can't conveniently | |
// inherit from GMarker without having to run its constructor. In order for | |
// the constructor to run, it requires some dummy GLatLng. | |
LabeledMarker.prototype = new GMarker(new GLatLng(0, 0)); | |
/** | |
* Is called by GMap2's addOverlay method. Creates the text div and adds it | |
* to the relevant parent div. | |
* | |
* @param {GMap2} map the map that has had this labeledmarker added to it. | |
*/ | |
LabeledMarker.prototype.initialize = function(map) { | |
// Do the GMarker constructor first. | |
GMarker.prototype.initialize.apply(this, arguments); | |
this.map_ = map; | |
this.setText(this.initText_); | |
} | |
/** | |
* Create a new div for this label. | |
*/ | |
LabeledMarker.prototype.makeDiv_ = function() { | |
if (this.div_) { | |
return; | |
} | |
this.div_ = document.createElement("div"); | |
this.div_.className = this.labelClass_; | |
this.div_.style.position = "absolute"; | |
this.div_.style.cursor = "pointer"; | |
this.map_.getPane(G_MAP_MARKER_PANE).appendChild(this.div_); | |
if (this.clickable_) { | |
/** | |
* Creates a closure for passing events through to the source marker | |
* This is located in here to avoid cluttering the global namespace. | |
* The downside is that the local variables from initialize() continue | |
* to occupy space on the stack. | |
* | |
* @param {Object} object to receive event trigger. | |
* @param {GEventListener} event to be triggered. | |
*/ | |
function newEventPassthru(obj, event) { | |
return function() { | |
GEvent.trigger(obj, event); | |
}; | |
} | |
// Pass through events fired on the text div to the marker. | |
var eventPassthrus = ['click', 'dblclick', 'mousedown', 'mouseup', 'mouseover', 'mouseout']; | |
for(var i = 0; i < eventPassthrus.length; i++) { | |
var name = eventPassthrus[i]; | |
GEvent.addDomListener(this.div_, name, newEventPassthru(this, name)); | |
} | |
} | |
} | |
/** | |
* Return the html in the div of this label, or "" if none is set | |
*/ | |
LabeledMarker.prototype.getText = function() { | |
if (this.div_) { | |
return this.div_.innerHTML; | |
} else { | |
return ""; | |
} | |
} | |
/** | |
* Set the html in the div of this label to text. If text is "" or null remove | |
* the div. | |
*/ | |
LabeledMarker.prototype.setText = function(text) { | |
if (this.div_) { | |
if (text) { | |
this.div_.innerHTML = text; | |
} else { | |
// remove div | |
GEvent.clearInstanceListeners(this.div_); | |
this.div_.parentNode.removeChild(this.div_); | |
this.div_ = null; | |
} | |
} else { | |
if (text) { | |
this.makeDiv_(); | |
this.div_.innerHTML = text; | |
this.redraw(); | |
} | |
} | |
} | |
/** | |
* Move the text div based on current projection and zoom level, call the redraw() | |
* handler in GMarker. | |
* | |
* @param {Boolean} force will be true when pixel coordinates need to be recomputed. | |
*/ | |
LabeledMarker.prototype.redraw = function(force) { | |
GMarker.prototype.redraw.apply(this, arguments); | |
if (this.div_) { | |
// Calculate the DIV coordinates of two opposite corners of our bounds to | |
// get the size and position of our rectangle | |
var p = this.map_.fromLatLngToDivPixel(this.latlng_); | |
var z = GOverlay.getZIndex(this.latlng_.lat()); | |
// Now position our div based on the div coordinates of our bounds | |
this.div_.style.left = (p.x + this.labelOffset_.width) + "px"; | |
this.div_.style.top = (p.y + this.labelOffset_.height) + "px"; | |
this.div_.style.zIndex = z; // in front of the marker | |
} | |
} | |
/** | |
* Remove the text div from the map pane, destroy event passthrus, and calls the | |
* default remove() handler in GMarker. | |
*/ | |
LabeledMarker.prototype.remove = function() { | |
this.setText(null); | |
GMarker.prototype.remove.apply(this, arguments); | |
} | |
/** | |
* Return a copy of this overlay, for the parent Map to duplicate itself in full. This | |
* is part of the Overlay interface and is used, for example, to copy everything in the | |
* main view into the mini-map. | |
*/ | |
LabeledMarker.prototype.copy = function() { | |
return new LabeledMarker(this.latlng_, this.opts_); | |
} | |
Binary files a/origin-src/transitfeed-1.2.5/build/lib/gtfsscheduleviewer/files/mm_20_blue.png and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/build/lib/gtfsscheduleviewer/files/mm_20_blue_trans.png and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/build/lib/gtfsscheduleviewer/files/mm_20_red_trans.png and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/build/lib/gtfsscheduleviewer/files/mm_20_shadow.png and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/build/lib/gtfsscheduleviewer/files/mm_20_shadow_trans.png and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/build/lib/gtfsscheduleviewer/files/mm_20_yellow.png and /dev/null differ
html { overflow: hidden; } | |
html, body { | |
margin: 0; | |
padding: 0; | |
height: 100%; | |
} | |
body { margin: 5px; } | |
#content { | |
position: relative; | |
margin-top: 5px; | |
} | |
#map-wrapper { | |
position: relative; | |
height: 100%; | |
width: auto; | |
left: 0; | |
top: 0; | |
z-index: 100; | |
} | |
#map { | |
position: relative; | |
height: 100%; | |
width: auto; | |
border: 1px solid #aaa; | |
} | |
#sidebar-wrapper { | |
position: absolute; | |
height: 100%; | |
width: 220px; | |
top: 0; | |
border: 1px solid #aaa; | |
overflow: auto; | |
z-index: 300; | |
} | |
#sidebar { | |
position: relative; | |
width: auto; | |
padding: 4px; | |
overflow: hidden; | |
} | |
#topbar { | |
position: relative; | |
padding: 2px; | |
border: 1px solid #aaa; | |
margin: 0; | |
} | |
#topbar h1 { | |
white-space: nowrap; | |
overflow: hidden; | |
font-size: 14pt; | |
font-weight: bold; | |
margin: 0; | |
} | |
body.sidebar-right #map-wrapper { margin-right: 229px; } | |
body.sidebar-right #sidebar-wrapper { right: 0; } | |
body.sidebar-left #map { margin-left: 229px; } | |
body.sidebar-left #sidebar { left: 0; } | |
body.nosidebar #map { margin: 0; } | |
body.nosidebar #sidebar { display: none; } | |
#bottombar { | |
position: relative; | |
padding: 2px; | |
border: 1px solid #aaa; | |
margin-top: 5px; | |
display: none; | |
} | |
/* holly hack for IE to get position:bottom right | |
see: http://www.positioniseverything.net/abs_relbugs.html | |
\*/ | |
* html #topbar { height: 1px; } | |
/* */ | |
body { | |
font-family: helvetica, arial, sans-serif; | |
} | |
h1 { | |
margin-top: 0.5em; | |
margin-bottom: 0.5em; | |
} | |
h2 { | |
margin-top: 0.2em; | |
margin-bottom: 0.2em; | |
} | |
h3 { | |
margin-top: 0.2em; | |
margin-bottom: 0.2em; | |
} | |
.tooltip { | |
white-space: nowrap; | |
padding: 2px; | |
color: black; | |
font-size: 12px; | |
background-color: white; | |
border: 1px solid black; | |
cursor: pointer; | |
filter:alpha(opacity=60); | |
-moz-opacity: 0.6; | |
opacity: 0.6; | |
} | |
#routeList { | |
border: 1px solid black; | |
overflow: auto; | |
} | |
.shortName { | |
font-size: larger; | |
font-weight: bold; | |
} | |
.routeChoice,.tripChoice,.routeChoiceSelected,.tripChoiceSelected { | |
white-space: nowrap; | |
cursor: pointer; | |
padding: 0px 2px; | |
color: black; | |
line-height: 1.4em; | |
font-size: smaller; | |
overflow: hidden; | |
} | |
.tripChoice { | |
color: blue; | |
} | |
.routeChoiceSelected,.tripChoiceSelected { | |
background-color: blue; | |
color: white; | |
} | |
.tripSection { | |
padding-left: 0px; | |
font-size: 10pt; | |
background-color: lightblue; | |
} | |
.patternSection { | |
margin-left: 8px; | |
padding-left: 2px; | |
border-bottom: 1px solid grey; | |
} | |
.unusualPattern { | |
background-color: #aaa; | |
color: #444; | |
} | |
/* Following styles are used by location_editor.py */ | |
#edit { | |
visibility: hidden; | |
float: right; | |
font-size: 80%; | |
} | |
#edit form { | |
display: inline; | |
} |
' Copyright 1999-2000 Adobe Systems Inc. All rights reserved. Permission to redistribute | |
' granted provided that this file is not modified in any way. This file is provided with | |
' absolutely no warranties of any kind. | |
Function isSVGControlInstalled() | |
on error resume next | |
isSVGControlInstalled = IsObject(CreateObject("Adobe.SVGCtl")) | |
end Function | |
#!/usr/bin/python2.5 | |
# | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""Output svg/xml data for a marey graph | |
Marey graphs are a visualization form typically used for timetables. Time | |
is on the x-axis and position on the y-axis. This module reads data from a | |
transitfeed.Schedule and creates a marey graph in svg/xml format. The graph | |
shows the speed between stops for each trip of a route. | |
TODO: This module was taken from an internal Google tool. It works, but it is not | |
well integrated into transitfeed and schedule_viewer. It also has a number of | |
ugly hacks to compensate for canvas size and the like that could be cleaned up. | |
For a little more information see (I didn't make this URL ;-) | |
http://transliteracies.english.ucsb.edu/post/research-project/research-clearinghouse-individual/research-reports/the-indexical-imagination-marey%e2%80%99s-graphic-method-and-the-technological-transformation-of-writing-in-the-nineteenth-century | |
MareyGraph: Class, keeps cache of graph data and graph properties | |
and draws marey graphs in svg/xml format on request. | |
""" | |
import itertools | |
import transitfeed | |
class MareyGraph: | |
"""Produces and caches marey graph from transit feed data.""" | |
_MAX_ZOOM = 5.0 # change docstring of ChangeScaleFactor if this changes | |
_DUMMY_SEPARATOR = 10 #pixel | |
def __init__(self): | |
# Timetable-related state | |
self._cache = str() | |
self._stoplist = [] | |
self._tlist = [] | |
self._stations = [] | |
self._decorators = [] | |
# TODO: Initialize default values via constructor parameters | |
# or via a class constants | |
# Graph properties | |
self._tspan = 30 # number of hours to display | |
self._offset = 0 # starting hour | |
self._hour_grid = 60 # number of pixels for an hour | |
self._min_grid = 5 # number of pixels between subhour lines | |
# Canvas properties | |
self._zoomfactor = 0.9 # svg Scaling factor | |
self._xoffset = 0 # move graph horizontally | |
self._yoffset = 0 # move graph vertically | |
self._bgcolor = "lightgrey" | |
# height/width of graph canvas before transform | |
self._gwidth = self._tspan * self._hour_grid | |
def Draw(self, stoplist=None, triplist=None, height=520): | |
"""Main interface for drawing the marey graph. | |
If called without arguments, the data generated in the previous call | |
will be used. New decorators can be added between calls. | |
Args: | |
# Class Stop is defined in transitfeed.py | |
stoplist: [Stop, Stop, ...] | |
# Class Trip is defined in transitfeed.py | |
triplist: [Trip, Trip, ...] | |
Returns: | |
# A string that contains an svg/xml web page with a marey graph. | |
" <svg width="1440" height="520" version="1.1" ... " | |
""" | |
output = str() | |
if not triplist: | |
triplist = [] | |
if not stoplist: | |
stoplist = [] | |
if not self._cache or triplist or stoplist: | |
self._gheight = height | |
self._tlist=triplist | |
self._slist=stoplist | |
self._decorators = [] | |
self._stations = self._BuildStations(stoplist) | |
self._cache = "%s %s %s %s" % (self._DrawBox(), | |
self._DrawHours(), | |
self._DrawStations(), | |
self._DrawTrips(triplist)) | |
output = "%s %s %s %s" % (self._DrawHeader(), | |
self._cache, | |
self._DrawDecorators(), | |
self._DrawFooter()) | |
return output | |
def _DrawHeader(self): | |
svg_header = """ | |
<svg width="%s" height="%s" version="1.1" | |
xmlns="http://www.w3.org/2000/svg"> | |
<script type="text/ecmascript"><![CDATA[ | |
function init(evt) { | |
if ( window.svgDocument == null ) | |
svgDocument = evt.target.ownerDocument; | |
} | |
var oldLine = 0; | |
var oldStroke = 0; | |
var hoffset= %s; // Data from python | |
function parseLinePoints(pointnode){ | |
var wordlist = pointnode.split(" "); | |
var xlist = new Array(); | |
var h; | |
var m; | |
// TODO: add linebreaks as appropriate | |
var xstr = " Stop Times :"; | |
for (i=0;i<wordlist.length;i=i+2){ | |
var coord = wordlist[i].split(","); | |
h = Math.floor(parseInt((coord[0])-20)/60); | |
m = parseInt((coord[0]-20))%%60; | |
xstr = xstr +" "+ (hoffset+h) +":"+m; | |
} | |
return xstr; | |
} | |
function LineClick(tripid, x) { | |
var line = document.getElementById(tripid); | |
if (oldLine) | |
oldLine.setAttribute("stroke",oldStroke); | |
oldLine = line; | |
oldStroke = line.getAttribute("stroke"); | |
line.setAttribute("stroke","#fff"); | |
var dynTxt = document.getElementById("dynamicText"); | |
var tripIdTxt = document.createTextNode(x); | |
while (dynTxt.hasChildNodes()){ | |
dynTxt.removeChild(dynTxt.firstChild); | |
} | |
dynTxt.appendChild(tripIdTxt); | |
} | |
]]> </script> | |
<style type="text/css"><![CDATA[ | |
.T { fill:none; stroke-width:1.5 } | |
.TB { fill:none; stroke:#e20; stroke-width:2 } | |
.Station { fill:none; stroke-width:1 } | |
.Dec { fill:none; stroke-width:1.5 } | |
.FullHour { fill:none; stroke:#eee; stroke-width:1 } | |
.SubHour { fill:none; stroke:#ddd; stroke-width:1 } | |
.Label { fill:#aaa; font-family:Helvetica,Arial,sans; | |
text-anchor:middle } | |
.Info { fill:#111; font-family:Helvetica,Arial,sans; | |
text-anchor:start; } | |
]]></style> | |
<text class="Info" id="dynamicText" x="0" y="%d"></text> | |
<g id="mcanvas" transform="translate(%s,%s)"> | |
<g id="zcanvas" transform="scale(%s)"> | |
""" % (self._gwidth + self._xoffset + 20, self._gheight + 15, | |
self._offset, self._gheight + 10, | |
self._xoffset, self._yoffset, self._zoomfactor) | |
return svg_header | |
def _DrawFooter(self): | |
return "</g></g></svg>" | |
def _DrawDecorators(self): | |
"""Used to draw fancy overlays on trip graphs.""" | |
return " ".join(self._decorators) | |
def _DrawBox(self): | |
tmpstr = """<rect x="%s" y="%s" width="%s" height="%s" | |
fill="lightgrey" stroke="%s" stroke-width="2" /> | |
""" % (0, 0, self._gwidth + 20, self._gheight, self._bgcolor) | |
return tmpstr | |
def _BuildStations(self, stoplist): | |
"""Dispatches the best algorithm for calculating station line position. | |
Args: | |
# Class Stop is defined in transitfeed.py | |
stoplist: [Stop, Stop, ...] | |
Returns: | |
# One integer y-coordinate for each station normalized between | |
# 0 and X, where X is the height of the graph in pixels | |
[0, 33, 140, ... , X] | |
""" | |
stations = [] | |
dists = self._EuclidianDistances(stoplist) | |
stations = self._CalculateYLines(dists) | |
return stations | |
def _EuclidianDistances(self,slist): | |
"""Calculate euclidean distances between stops. | |
Uses the stop list's longitudes/latitudes to approximate distances | |
between stations and build a list with y-coordinates for the | |
horizontal lines in the graph. | |
Args: | |
# Class Stop is defined in transitfeed.py | |
slist: [Stop, Stop, ...] | |
Returns: | |
# One integer for each pair of stations | |
# indicating the approximate distance | |
[0,33,140, ... ,X] | |
""" | |
e_dists2 = [transitfeed.ApproximateDistanceBetweenStops(stop, tail) for | |
(stop,tail) in itertools.izip(slist, slist[1:])] | |
return e_dists2 | |
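# _EuclidianDistances pairs each stop with its successor and measures each
# gap. A standalone sketch of that pairwise pattern, using a stand-in
# Manhattan metric (the real code calls
# transitfeed.ApproximateDistanceBetweenStops, which needs Stop objects):

```python
# Pair each point with its successor, as _EuclidianDistances does with stops.
# The Manhattan metric below is only a stand-in for the real distance helper.
points = [(0.0, 0.0), (0.0, 3.0), (4.0, 3.0)]
dists = [abs(b[0] - a[0]) + abs(b[1] - a[1])
         for a, b in zip(points, points[1:])]
print(dists)  # [3.0, 4.0]
```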
def _CalculateYLines(self, dists): | |
"""Builds a list with y-coordinates for the horizontal lines in the graph. | |
Args: | |
# One integer for each pair of stations | |
# indicating the approximate distance | |
dists: [0,33,140, ... ,X] | |
Returns: | |
# One integer y-coordinate for each station normalized between | |
# 0 and X, where X is the height of the graph in pixels | |
[0, 33, 140, ... , X] | |
""" | |
tot_dist = sum(dists) | |
if tot_dist > 0: | |
pixel_dist = [float(d * (self._gheight-20))/tot_dist for d in dists] | |
pixel_grid = [0]+[int(pd + sum(pixel_dist[0:i])) for i,pd in | |
enumerate(pixel_dist)] | |
else: | |
pixel_grid = [] | |
return pixel_grid | |
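# The normalization above can be restated standalone. This sketch mirrors
# _CalculateYLines (same 20 px margin, same cumulative integer rounding) on
# toy distances:

```python
def calculate_y_lines(dists, gheight=520):
    # Scale the inter-station distances so they fill the graph height minus
    # a 20 px margin, then emit cumulative integer y-coordinates from 0.
    tot_dist = sum(dists)
    if tot_dist <= 0:
        return []
    pixel_dist = [float(d) * (gheight - 20) / tot_dist for d in dists]
    grid = [0]
    acc = 0.0
    for pd in pixel_dist:
        acc += pd
        grid.append(int(acc))
    return grid

print(calculate_y_lines([100, 300]))  # [0, 125, 500]
```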
def _TravelTimes(self,triplist,index=0): | |
""" Calculate distances and plot stops. | |
Uses a timetable to approximate distances | |
between stations | |
Args: | |
# Class Trip is defined in transitfeed.py | |
triplist: [Trip, Trip, ...] | |
# (Optional) Index into triplist preferred for timetable calculation | |
index: 3 | |
Returns: | |
# One integer for each pair of stations | |
# indicating the approximate distance | |
[0,33,140, ... ,X] | |
""" | |
def DistanceInTravelTime(dep_secs, arr_secs): | |
t_dist = arr_secs-dep_secs | |
if t_dist<0: | |
t_dist = self._DUMMY_SEPARATOR # min separation | |
return t_dist | |
if not triplist: | |
return [] | |
if 0 < index < len(triplist): | |
trip = triplist[index] | |
else: | |
trip = triplist[0] | |
t_dists2 = [DistanceInTravelTime(stop[3],tail[2]) for (stop,tail) | |
in itertools.izip(trip.GetTimeStops(),trip.GetTimeStops()[1:])] | |
return t_dists2 | |
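# The helper above treats travel time as a proxy for distance. A standalone
# sketch of the same idea (names and sample times are illustrative only):

```python
DUMMY_SEPARATOR = 10  # pixels, as in MareyGraph._DUMMY_SEPARATOR

def distance_in_travel_time(dep_secs, arr_secs):
    # Travel time between consecutive stops stands in for distance; a
    # negative gap (bad or missing data) collapses to a fixed separator so
    # adjacent stations never overlap on the y-axis.
    t_dist = arr_secs - dep_secs
    return DUMMY_SEPARATOR if t_dist < 0 else t_dist

# (arrival, departure) seconds for three consecutive stops
times = [(28800, 28860), (29100, 29160), (29400, 29460)]
dists = [distance_in_travel_time(dep, nxt_arr)
         for (_, dep), (nxt_arr, _) in zip(times, times[1:])]
print(dists)  # [240, 240]
```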
def _AddWarning(self, msg): | |
print msg | |
def _DrawTrips(self,triplist,colpar=""): | |
"""Generates svg polylines for each transit trip. | |
Args: | |
# Class Trip is defined in transitfeed.py | |
[Trip, Trip, ...] | |
Returns: | |
# A string containing a polyline tag for each trip | |
' <polyline class="T" stroke="#336633" points="433,0 ...' | |
""" | |
stations = [] | |
if not self._stations and triplist: | |
self._stations = self._CalculateYLines(self._TravelTimes(triplist)) | |
if not self._stations: | |
self._AddWarning("Failed to use traveltimes for graph") | |
self._stations = self._CalculateYLines(self._Uniform(triplist)) | |
if not self._stations: | |
self._AddWarning("Failed to calculate station distances") | |
return | |
stations = self._stations | |
tmpstrs = [] | |
servlist = [] | |
for t in triplist: | |
if not colpar: | |
if t.service_id not in servlist: | |
servlist.append(t.service_id) | |
shade = int(servlist.index(t.service_id) * (200/len(servlist))+55) | |
color = "#00%s00" % hex(shade)[2:4] | |
else: | |
color=colpar | |
start_offsets = [0] | |
first_stop = t.GetTimeStops()[0] | |
for j,freq_offset in enumerate(start_offsets): | |
if j>0 and not colpar: | |
color="purple" | |
scriptcall = 'onmouseover="LineClick(\'%s\',\'Trip %s starting %s\')"' % (t.trip_id, | |
t.trip_id, transitfeed.FormatSecondsSinceMidnight(t.GetStartTime())) | |
tmpstrhead = '<polyline class="T" id="%s" stroke="%s" %s points="' % \ | |
(str(t.trip_id),color, scriptcall) | |
tmpstrs.append(tmpstrhead) | |
for i, s in enumerate(t.GetTimeStops()): | |
arr_t = s[0] | |
dep_t = s[1] | |
if arr_t is None or dep_t is None: | |
continue | |
arr_x = int(arr_t/3600.0 * self._hour_grid) - self._hour_grid * self._offset | |
dep_x = int(dep_t/3600.0 * self._hour_grid) - self._hour_grid * self._offset | |
tmpstrs.append("%s,%s " % (int(arr_x+20), int(stations[i]+20))) | |
tmpstrs.append("%s,%s " % (int(dep_x+20), int(stations[i]+20))) | |
tmpstrs.append('" />') | |
return "".join(tmpstrs) | |
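# The x-coordinate arithmetic in _DrawTrips maps seconds-since-midnight to
# pixels. A standalone restatement (the +20 is the same margin added when
# each point is appended to the polyline):

```python
def time_to_x(secs, hour_grid=60, offset_hours=0):
    # One hour occupies hour_grid pixels; the graph origin is shifted so
    # offset_hours lands at the left edge, plus the 20 px canvas margin.
    return int(secs / 3600.0 * hour_grid) - hour_grid * offset_hours + 20

print(time_to_x(8 * 3600, hour_grid=60, offset_hours=6))  # 140
```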
def _Uniform(self, triplist): | |
"""Fallback to assuming uniform distance between stations""" | |
# This should not be necessary, but we are in fallback mode | |
longest = max([len(t.GetTimeStops()) for t in triplist]) | |
return [100] * longest | |
def _DrawStations(self, color="#aaa"): | |
"""Generates svg with a horizontal line for each station/stop. | |
Args: | |
# Class Stop is defined in transitfeed.py | |
stations: [Stop, Stop, ...] | |
Returns: | |
# A string containing a polyline tag for each stop | |
" <polyline class="Station" stroke="#336633" points="20,0 ..." | |
""" | |
stations=self._stations | |
tmpstrs = [] | |
for y in stations: | |
tmpstrs.append(' <polyline class="Station" stroke="%s" \ | |
points="%s,%s, %s,%s" />' %(color,20,20+y+.5,self._gwidth+20,20+y+.5)) | |
return "".join(tmpstrs) | |
def _DrawHours(self): | |
"""Generates svg to show a vertical hour and sub-hour grid | |
Returns: | |
# A string containing a polyline tag for each grid line | |
" <polyline class="FullHour" points="20,0 ..." | |
""" | |
tmpstrs = [] | |
for i in range(0, self._gwidth, self._min_grid): | |
if i % self._hour_grid == 0: | |
tmpstrs.append('<polyline class="FullHour" points="%d,%d, %d,%d" />' \ | |
% (i + .5 + 20, 20, i + .5 + 20, self._gheight)) | |
tmpstrs.append('<text class="Label" x="%d" y="%d">%d</text>' | |
% (i + 20, 20, | |
(i / self._hour_grid + self._offset) % 24)) | |
else: | |
tmpstrs.append('<polyline class="SubHour" points="%d,%d,%d,%d" />' \ | |
% (i + .5 + 20, 20, i + .5 + 20, self._gheight)) | |
return "".join(tmpstrs) | |
def AddStationDecoration(self, index, color="#f00"): | |
"""Appends a decorator that highlights the given station line. | |
Args: | |
# Integer, index of stop to be highlighted. | |
index: 4 | |
# An optional string with a html color code | |
color: "#fff" | |
""" | |
tmpstr = str() | |
num_stations = len(self._stations) | |
ind = int(index) | |
if self._stations: | |
if 0 < ind < num_stations: | |
y = self._stations[ind] | |
tmpstr = '<polyline class="Dec" stroke="%s" points="%s,%s,%s,%s" />' \ | |
% (color, 20, 20+y+.5, self._gwidth+20, 20+y+.5) | |
self._decorators.append(tmpstr) | |
def AddTripDecoration(self, triplist, color="#f00"): | |
"""Appends a decorator that highlights the given trips. | |
Args: | |
# Class Trip is defined in transitfeed.py | |
triplist: [Trip, Trip, ...] | |
# An optional string with a html color code | |
color: "#fff" | |
""" | |
tmpstr = self._DrawTrips(triplist,color) | |
self._decorators.append(tmpstr) | |
def ChangeScaleFactor(self, newfactor): | |
"""Changes the zoom of the graph manually. | |
1.0 is the original canvas size. | |
Args: | |
# float value between 0.0 and 5.0 | |
newfactor: 0.7 | |
""" | |
if float(newfactor) > 0 and float(newfactor) < self._MAX_ZOOM: | |
self._zoomfactor = newfactor | |
def ScaleLarger(self): | |
"""Increases the zoom of the graph one step (0.1 units).""" | |
newfactor = self._zoomfactor + 0.1 | |
if float(newfactor) > 0 and float(newfactor) < self._MAX_ZOOM: | |
self._zoomfactor = newfactor | |
def ScaleSmaller(self): | |
"""Decreases the zoom of the graph one step (0.1 units).""" | |
newfactor = self._zoomfactor - 0.1 | |
if float(newfactor) > 0 and float(newfactor) < self._MAX_ZOOM: | |
self._zoomfactor = newfactor | |
def ClearDecorators(self): | |
"""Removes all the current decorators. | |
""" | |
self._decorators = [] | |
def AddTextStripDecoration(self,txtstr): | |
tmpstr = '<text class="Info" x="%d" y="%d">%s</text>' % (0, | |
20 + self._gheight, txtstr) | |
self._decorators.append(tmpstr) | |
def SetSpan(self, first_arr, last_arr, mint=5, maxt=30): | |
s_hour = (first_arr / 3600) - 1 | |
e_hour = (last_arr / 3600) + 1 | |
self._offset = max(min(s_hour, 23), 0) | |
self._tspan = max(min(e_hour - s_hour, maxt), mint) | |
self._gwidth = self._tspan * self._hour_grid | |
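# SetSpan's clamping can be checked with a standalone restatement (// makes
# explicit the integer division Python 2's / performed on ints here):

```python
def set_span(first_arr, last_arr, mint=5, maxt=30):
    # Pad the observed service window by one hour on each side, clamp the
    # starting hour to [0, 23] and the span in hours to [mint, maxt].
    s_hour = (first_arr // 3600) - 1
    e_hour = (last_arr // 3600) + 1
    offset = max(min(s_hour, 23), 0)
    tspan = max(min(e_hour - s_hour, maxt), mint)
    return offset, tspan

print(set_span(6 * 3600 + 1800, 22 * 3600))  # first trip 06:30, last 22:00 -> (5, 18)
```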
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""Expose some modules in this package. | |
Before transitfeed version 1.2.4 all our library code was distributed in a | |
one-file module, transitfeed.py, and could be used as | |
import transitfeed | |
schedule = transitfeed.Schedule() | |
At that time the module (one file, transitfeed.py) was converted into a | |
package (a directory named transitfeed containing __init__.py and multiple .py | |
files). Classes and attributes exposed by the old module may still be imported | |
in the same way. Indeed, code that depends on the library <em>should</em> | |
continue to use import commands such as the above and ignore _transitfeed. | |
""" | |
from _transitfeed import * | |
__version__ = _transitfeed.__version__ | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""Easy interface for handling a Google Transit Feed file. | |
Do not import this module directly. Thanks to __init__.py you should do | |
something like: | |
import transitfeed | |
schedule = transitfeed.Schedule() | |
... | |
This module is a library to help you create, read and write Google | |
Transit Feed files. Refer to the feed specification, available at | |
http://code.google.com/transit/spec/transit_feed_specification.htm, for a | |
complete description how the transit feed represents a transit schedule. This | |
library supports all required parts of the specification but does not yet | |
support all optional parts. Patches welcome! | |
The specification describes several tables such as stops, routes and trips. | |
In a feed file these are stored as comma separeted value files. This library | |
represents each row of these tables with a single Python object. This object has | |
attributes for each value on the row. For example, schedule.AddStop returns a | |
Stop object which has attributes such as stop_lat and stop_name. | |
Schedule: Central object of the parser | |
GenericGTFSObject: A base class for each of the objects below | |
Route: Represents a single route | |
Trip: Represents a single trip | |
Stop: Represents a single stop | |
ServicePeriod: Represents a single service, a set of dates | |
Agency: Represents the agency in this feed | |
Transfer: Represents a single transfer rule | |
TimeToSecondsSinceMidnight(): Convert HH:MM:SS into seconds since midnight. | |
FormatSecondsSinceMidnight(s): Formats number of seconds past midnight into a string | |
""" | |
# TODO: Preserve arbitrary columns? | |
import bisect | |
import cStringIO as StringIO | |
import codecs | |
from transitfeed.util import defaultdict | |
import csv | |
import datetime | |
import logging | |
import math | |
import os | |
import random | |
try: | |
import sqlite3 as sqlite | |
except ImportError: | |
from pysqlite2 import dbapi2 as sqlite | |
import re | |
import tempfile | |
import time | |
import warnings | |
# Objects in a schedule (Route, Trip, etc) should not keep a strong reference | |
# to the Schedule object to avoid a reference cycle. Schedule needs to use | |
# __del__ to cleanup its temporary file. The garbage collector can't handle | |
# reference cycles containing objects with custom cleanup code. | |
import weakref | |
import zipfile | |
OUTPUT_ENCODING = 'utf-8' | |
MAX_DISTANCE_FROM_STOP_TO_SHAPE = 1000 | |
MAX_DISTANCE_BETWEEN_STOP_AND_PARENT_STATION_WARNING = 100.0 | |
MAX_DISTANCE_BETWEEN_STOP_AND_PARENT_STATION_ERROR = 1000.0 | |
__version__ = '1.2.5' | |
def EncodeUnicode(text): | |
""" | |
Optionally encode text and return it. The result should be safe to print. | |
""" | |
if type(text) == type(u''): | |
return text.encode(OUTPUT_ENCODING) | |
else: | |
return text | |
# These are used to distinguish between errors (not allowed by the spec) | |
# and warnings (not recommended) when reporting issues. | |
TYPE_ERROR = 0 | |
TYPE_WARNING = 1 | |
class ProblemReporterBase: | |
"""Base class for problem reporters. Tracks the current context and creates | |
an exception object for each problem. Subclasses must implement | |
_Report(self, e)""" | |
def __init__(self): | |
self.ClearContext() | |
def ClearContext(self): | |
"""Clear any previous context.""" | |
self._context = None | |
def SetFileContext(self, file_name, row_num, row, headers): | |
"""Save the current context to be output with any errors. | |
Args: | |
file_name: string | |
row_num: int | |
row: list of strings | |
headers: list of column headers, its order corresponding to row's | |
""" | |
self._context = (file_name, row_num, row, headers) | |
def FeedNotFound(self, feed_name, context=None): | |
e = FeedNotFound(feed_name=feed_name, context=context, | |
context2=self._context) | |
self._Report(e) | |
def UnknownFormat(self, feed_name, context=None): | |
e = UnknownFormat(feed_name=feed_name, context=context, | |
context2=self._context) | |
self._Report(e) | |
def FileFormat(self, problem, context=None): | |
e = FileFormat(problem=problem, context=context, | |
context2=self._context) | |
self._Report(e) | |
def MissingFile(self, file_name, context=None): | |
e = MissingFile(file_name=file_name, context=context, | |
context2=self._context) | |
self._Report(e) | |
def UnknownFile(self, file_name, context=None): | |
e = UnknownFile(file_name=file_name, context=context, | |
context2=self._context, type=TYPE_WARNING) | |
self._Report(e) | |
def EmptyFile(self, file_name, context=None): | |
e = EmptyFile(file_name=file_name, context=context, | |
context2=self._context) | |
self._Report(e) | |
def MissingColumn(self, file_name, column_name, context=None): | |
e = MissingColumn(file_name=file_name, column_name=column_name, | |
context=context, context2=self._context) | |
self._Report(e) | |
def UnrecognizedColumn(self, file_name, column_name, context=None): | |
e = UnrecognizedColumn(file_name=file_name, column_name=column_name, | |
context=context, context2=self._context, | |
type=TYPE_WARNING) | |
self._Report(e) | |
def CsvSyntax(self, description=None, context=None, type=TYPE_ERROR): | |
e = CsvSyntax(description=description, context=context, | |
context2=self._context, type=type) | |
self._Report(e) | |
def DuplicateColumn(self, file_name, header, count, type=TYPE_ERROR, | |
context=None): | |
e = DuplicateColumn(file_name=file_name, | |
header=header, | |
count=count, | |
type=type, | |
context=context, | |
context2=self._context) | |
self._Report(e) | |
def MissingValue(self, column_name, reason=None, context=None): | |
e = MissingValue(column_name=column_name, reason=reason, context=context, | |
context2=self._context) | |
self._Report(e) | |
def InvalidValue(self, column_name, value, reason=None, context=None, | |
type=TYPE_ERROR): | |
e = InvalidValue(column_name=column_name, value=value, reason=reason, | |
context=context, context2=self._context, type=type) | |
self._Report(e) | |
def DuplicateID(self, column_names, values, context=None, type=TYPE_ERROR): | |
if isinstance(column_names, tuple): | |
column_names = '(' + ', '.join(column_names) + ')' | |
if isinstance(values, tuple): | |
values = '(' + ', '.join(values) + ')' | |
e = DuplicateID(column_name=column_names, value=values, | |
context=context, context2=self._context, type=type) | |
self._Report(e) | |
def UnusedStop(self, stop_id, stop_name, context=None): | |
e = UnusedStop(stop_id=stop_id, stop_name=stop_name, | |
context=context, context2=self._context, type=TYPE_WARNING) | |
self._Report(e) | |
def UsedStation(self, stop_id, stop_name, context=None): | |
e = UsedStation(stop_id=stop_id, stop_name=stop_name, | |
context=context, context2=self._context, type=TYPE_ERROR) | |
self._Report(e) | |
def StopTooFarFromParentStation(self, stop_id, stop_name, parent_stop_id, | |
parent_stop_name, distance, | |
type=TYPE_WARNING, context=None): | |
e = StopTooFarFromParentStation( | |
stop_id=stop_id, stop_name=stop_name, | |
parent_stop_id=parent_stop_id, | |
parent_stop_name=parent_stop_name, distance=distance, | |
context=context, context2=self._context, type=type) | |
self._Report(e) | |
def StopsTooClose(self, stop_name_a, stop_id_a, stop_name_b, stop_id_b, | |
distance, type=TYPE_WARNING, context=None): | |
e = StopsTooClose( | |
stop_name_a=stop_name_a, stop_id_a=stop_id_a, stop_name_b=stop_name_b, | |
stop_id_b=stop_id_b, distance=distance, context=context, | |
context2=self._context, type=type) | |
self._Report(e) | |
def StationsTooClose(self, stop_name_a, stop_id_a, stop_name_b, stop_id_b, | |
distance, type=TYPE_WARNING, context=None): | |
e = StationsTooClose( | |
stop_name_a=stop_name_a, stop_id_a=stop_id_a, stop_name_b=stop_name_b, | |
stop_id_b=stop_id_b, distance=distance, context=context, | |
context2=self._context, type=type) | |
self._Report(e) | |
def DifferentStationTooClose(self, stop_name, stop_id, | |
station_stop_name, station_stop_id, | |
distance, type=TYPE_WARNING, context=None): | |
e = DifferentStationTooClose( | |
stop_name=stop_name, stop_id=stop_id, | |
station_stop_name=station_stop_name, station_stop_id=station_stop_id, | |
distance=distance, context=context, context2=self._context, type=type) | |
self._Report(e) | |
def StopTooFarFromShapeWithDistTraveled(self, trip_id, stop_name, stop_id, | |
shape_dist_traveled, shape_id, | |
distance, max_distance, | |
type=TYPE_WARNING): | |
e = StopTooFarFromShapeWithDistTraveled( | |
trip_id=trip_id, stop_name=stop_name, stop_id=stop_id, | |
shape_dist_traveled=shape_dist_traveled, shape_id=shape_id, | |
distance=distance, max_distance=max_distance, type=type) | |
self._Report(e) | |
def ExpirationDate(self, expiration, context=None): | |
e = ExpirationDate(expiration=expiration, context=context, | |
context2=self._context, type=TYPE_WARNING) | |
self._Report(e) | |
def FutureService(self, start_date, context=None): | |
e = FutureService(start_date=start_date, context=context, | |
context2=self._context, type=TYPE_WARNING) | |
self._Report(e) | |
def InvalidLineEnd(self, bad_line_end, context=None): | |
"""bad_line_end is a human readable string.""" | |
e = InvalidLineEnd(bad_line_end=bad_line_end, context=context, | |
context2=self._context, type=TYPE_WARNING) | |
self._Report(e) | |
def TooFastTravel(self, trip_id, prev_stop, next_stop, dist, time, speed, | |
type=TYPE_ERROR): | |
e = TooFastTravel(trip_id=trip_id, prev_stop=prev_stop, | |
next_stop=next_stop, time=time, dist=dist, speed=speed, | |
context=None, context2=self._context, type=type) | |
self._Report(e) | |
def StopWithMultipleRouteTypes(self, stop_name, stop_id, route_id1, route_id2, | |
context=None): | |
e = StopWithMultipleRouteTypes(stop_name=stop_name, stop_id=stop_id, | |
route_id1=route_id1, route_id2=route_id2, | |
context=context, context2=self._context, | |
type=TYPE_WARNING) | |
self._Report(e) | |
def DuplicateTrip(self, trip_id1, route_id1, trip_id2, route_id2, | |
context=None): | |
e = DuplicateTrip(trip_id1=trip_id1, route_id1=route_id1, trip_id2=trip_id2, | |
route_id2=route_id2, context=context, | |
context2=self._context, type=TYPE_WARNING) | |
self._Report(e) | |
def OtherProblem(self, description, context=None, type=TYPE_ERROR): | |
e = OtherProblem(description=description, | |
context=context, context2=self._context, type=type) | |
self._Report(e) | |
def TooManyDaysWithoutService(self, | |
first_day_without_service, | |
last_day_without_service, | |
consecutive_days_without_service, | |
context=None, | |
type=TYPE_WARNING): | |
e = TooManyDaysWithoutService( | |
first_day_without_service=first_day_without_service, | |
last_day_without_service=last_day_without_service, | |
consecutive_days_without_service=consecutive_days_without_service, | |
context=context, | |
context2=self._context, | |
type=type) | |
self._Report(e) | |
class ProblemReporter(ProblemReporterBase): | |
"""This is a basic problem reporter that just prints to console.""" | |
def _Report(self, e): | |
context = e.FormatContext() | |
if context: | |
print context | |
print EncodeUnicode(self._LineWrap(e.FormatProblem(), 78)) | |
@staticmethod | |
def _LineWrap(text, width): | |
""" | |
A word-wrap function that preserves existing line breaks | |
and most spaces in the text. Expects that existing line | |
breaks are posix newlines (\n). | |
Taken from: | |
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/148061 | |
""" | |
return reduce(lambda line, word, width=width: '%s%s%s' % | |
(line, | |
' \n'[(len(line) - line.rfind('\n') - 1 + | |
len(word.split('\n', 1)[0]) >= width)], | |
word), | |
text.split(' ') | |
) | |
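The reduce expression above is compact but hard to follow. A standalone, Python 3 compatible sketch of the same word-wrap recipe (the name line_wrap is ours; the Python 2 original uses the reduce builtin directly):

```python
from functools import reduce

def line_wrap(text, width):
    # Join words with either a space or a newline: pick '\n' when the
    # current line (the text after the last '\n') plus the next word
    # would reach the width limit; otherwise keep accumulating with ' '.
    return reduce(
        lambda line, word: '%s%s%s' % (
            line,
            ' \n'[(len(line) - line.rfind('\n') - 1 +
                   len(word.split('\n', 1)[0]) >= width)],
            word),
        text.split(' '))
```

Existing newlines in the input are preserved; only spaces between words are candidates for replacement.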
class ExceptionWithContext(Exception): | |
def __init__(self, context=None, context2=None, **kwargs): | |
"""Initialize an exception object, saving all keyword arguments in self. | |
context and context2, if present, must be a tuple of (file_name, row_num, | |
row, headers). context2 comes from ProblemReporter.SetFileContext. context | |
was passed in with the keyword arguments. context2 is ignored if context | |
is present.""" | |
Exception.__init__(self) | |
if context: | |
self.__dict__.update(self.ContextTupleToDict(context)) | |
elif context2: | |
self.__dict__.update(self.ContextTupleToDict(context2)) | |
self.__dict__.update(kwargs) | |
if ('type' in kwargs) and (kwargs['type'] == TYPE_WARNING): | |
self._type = TYPE_WARNING | |
else: | |
self._type = TYPE_ERROR | |
def GetType(self): | |
return self._type | |
def IsError(self): | |
return self._type == TYPE_ERROR | |
def IsWarning(self): | |
return self._type == TYPE_WARNING | |
CONTEXT_PARTS = ['file_name', 'row_num', 'row', 'headers'] | |
@staticmethod | |
def ContextTupleToDict(context): | |
"""Convert a tuple representing a context into a dict of (key, value) pairs""" | |
d = {} | |
if not context: | |
return d | |
for k, v in zip(ExceptionWithContext.CONTEXT_PARTS, context): | |
if v != '' and v != None: # Don't ignore int(0), a valid row_num | |
d[k] = v | |
return d | |
def __str__(self): | |
return self.FormatProblem() | |
def GetDictToFormat(self): | |
"""Return a copy of self as a dict, suitable for passing to FormatProblem""" | |
d = {} | |
for k, v in self.__dict__.items(): | |
# TODO: Better handling of unicode/utf-8 within Schedule objects. | |
# Concatenating a unicode and utf-8 str object causes an exception such
# as "UnicodeDecodeError: 'ascii' codec can't decode byte ..." as python | |
# tries to convert the str to a unicode. To avoid that happening within | |
# the problem reporter convert all unicode attributes to utf-8. | |
# Currently valid utf-8 fields are converted to unicode in _ReadCsvDict. | |
# Perhaps all fields should be left as utf-8. | |
d[k] = EncodeUnicode(v) | |
return d | |
def FormatProblem(self, d=None): | |
"""Return a text string describing the problem. | |
Args: | |
d: map returned by GetDictToFormat with formatting added
""" | |
if not d: | |
d = self.GetDictToFormat() | |
output_error_text = self.__class__.ERROR_TEXT % d | |
if ('reason' in d) and d['reason']: | |
return '%s\n%s' % (output_error_text, d['reason']) | |
else: | |
return output_error_text | |
def FormatContext(self): | |
"""Return a text string describing the context""" | |
text = '' | |
if hasattr(self, 'feed_name'): | |
text += "In feed '%s': " % self.feed_name | |
if hasattr(self, 'file_name'): | |
text += self.file_name | |
if hasattr(self, 'row_num'): | |
text += ":%i" % self.row_num | |
if hasattr(self, 'column_name'): | |
text += " column %s" % self.column_name | |
return text | |
def __cmp__(self, y): | |
"""Return an int <0/0/>0 when self is more/same/less significant than y. | |
Subclasses should define this if exceptions should be listed in something | |
other than the order they are reported. | |
Args: | |
y: object to compare to self | |
Returns: | |
An int which is negative if self is more significant than y, 0 if they | |
are similar significance and positive if self is less significant than | |
y. Returning a float won't work. | |
Raises: | |
TypeError by default, meaning objects of the type can not be compared. | |
""" | |
raise TypeError("__cmp__ not defined") | |
class MissingFile(ExceptionWithContext): | |
ERROR_TEXT = "File %(file_name)s is not found" | |
class EmptyFile(ExceptionWithContext): | |
ERROR_TEXT = "File %(file_name)s is empty" | |
class UnknownFile(ExceptionWithContext): | |
ERROR_TEXT = 'The file named %(file_name)s was not expected.\n' \ | |
'This may be a misspelled file name or the file may be ' \ | |
'included in a subdirectory. Please check spellings and ' \ | |
'make sure that there are no subdirectories within the feed' | |
class FeedNotFound(ExceptionWithContext): | |
ERROR_TEXT = 'Couldn\'t find a feed named %(feed_name)s' | |
class UnknownFormat(ExceptionWithContext): | |
ERROR_TEXT = 'The feed named %(feed_name)s had an unknown format:\n' \ | |
'feeds should be either .zip files or directories.' | |
class FileFormat(ExceptionWithContext): | |
ERROR_TEXT = 'Files must be encoded in utf-8 and may not contain ' \ | |
'any null bytes (0x00). %(file_name)s %(problem)s.' | |
class MissingColumn(ExceptionWithContext): | |
ERROR_TEXT = 'Missing column %(column_name)s in file %(file_name)s' | |
class UnrecognizedColumn(ExceptionWithContext): | |
ERROR_TEXT = 'Unrecognized column %(column_name)s in file %(file_name)s. ' \ | |
'This might be a misspelled column name (capitalization ' \ | |
'matters!). Or it could be extra information (such as a ' \ | |
'proposed feed extension) that the validator doesn\'t know ' \ | |
'about yet. Extra information is fine; this warning is here ' \ | |
'to catch misspelled optional column names.' | |
class CsvSyntax(ExceptionWithContext): | |
ERROR_TEXT = '%(description)s' | |
class DuplicateColumn(ExceptionWithContext): | |
ERROR_TEXT = 'Column %(header)s appears %(count)i times in file %(file_name)s' | |
class MissingValue(ExceptionWithContext): | |
ERROR_TEXT = 'Missing value for column %(column_name)s' | |
class InvalidValue(ExceptionWithContext): | |
ERROR_TEXT = 'Invalid value %(value)s in field %(column_name)s' | |
class DuplicateID(ExceptionWithContext): | |
ERROR_TEXT = 'Duplicate ID %(value)s in column %(column_name)s' | |
class UnusedStop(ExceptionWithContext): | |
ERROR_TEXT = "%(stop_name)s (ID %(stop_id)s) isn't used in any trips" | |
class UsedStation(ExceptionWithContext): | |
ERROR_TEXT = "%(stop_name)s (ID %(stop_id)s) has location_type=1 " \ | |
"(station) so it should not appear in stop_times" | |
class StopTooFarFromParentStation(ExceptionWithContext): | |
ERROR_TEXT = ( | |
"%(stop_name)s (ID %(stop_id)s) is too far from its parent station " | |
"%(parent_stop_name)s (ID %(parent_stop_id)s) : %(distance).2f meters.") | |
def __cmp__(self, y): | |
# Sort in decreasing order because more distance is more significant. | |
return cmp(y.distance, self.distance) | |
class StopsTooClose(ExceptionWithContext): | |
ERROR_TEXT = ( | |
"The stops \"%(stop_name_a)s\" (ID %(stop_id_a)s) and \"%(stop_name_b)s\"" | |
" (ID %(stop_id_b)s) are %(distance)0.2fm apart and probably represent " | |
"the same location.") | |
def __cmp__(self, y): | |
# Sort in increasing order because less distance is more significant. | |
return cmp(self.distance, y.distance) | |
class StationsTooClose(ExceptionWithContext): | |
ERROR_TEXT = ( | |
"The stations \"%(stop_name_a)s\" (ID %(stop_id_a)s) and " | |
"\"%(stop_name_b)s\" (ID %(stop_id_b)s) are %(distance)0.2fm apart and " | |
"probably represent the same location.") | |
def __cmp__(self, y): | |
# Sort in increasing order because less distance is more significant. | |
return cmp(self.distance, y.distance) | |
class DifferentStationTooClose(ExceptionWithContext): | |
ERROR_TEXT = ( | |
"The parent_station of stop \"%(stop_name)s\" (ID %(stop_id)s) is not " | |
"station \"%(station_stop_name)s\" (ID %(station_stop_id)s) but they are " | |
"only %(distance)0.2fm apart.") | |
def __cmp__(self, y): | |
# Sort in increasing order because less distance is more significant. | |
return cmp(self.distance, y.distance) | |
class StopTooFarFromShapeWithDistTraveled(ExceptionWithContext): | |
ERROR_TEXT = ( | |
"For trip %(trip_id)s the stop \"%(stop_name)s\" (ID %(stop_id)s) is " | |
"%(distance).0f meters away from the corresponding point " | |
"(shape_dist_traveled: %(shape_dist_traveled)f) on shape %(shape_id)s. " | |
"It should be closer than %(max_distance).0f meters.") | |
def __cmp__(self, y): | |
# Sort in decreasing order because more distance is more significant. | |
return cmp(y.distance, self.distance) | |
class TooManyDaysWithoutService(ExceptionWithContext): | |
ERROR_TEXT = "There are %(consecutive_days_without_service)i consecutive"\ | |
" days, from %(first_day_without_service)s to" \ | |
" %(last_day_without_service)s, without any scheduled service." \ | |
" Please ensure this is intentional." | |
class ExpirationDate(ExceptionWithContext): | |
def FormatProblem(self, d=None): | |
if not d: | |
d = self.GetDictToFormat() | |
expiration = d['expiration'] | |
formatted_date = time.strftime("%B %d, %Y", | |
time.localtime(expiration)) | |
if (expiration < time.mktime(time.localtime())): | |
return "This feed expired on %s" % formatted_date | |
else: | |
return "This feed will soon expire, on %s" % formatted_date | |
class FutureService(ExceptionWithContext): | |
def FormatProblem(self, d=None): | |
if not d: | |
d = self.GetDictToFormat() | |
formatted_date = time.strftime("%B %d, %Y", time.localtime(d['start_date'])) | |
return ("The earliest service date in this feed is in the future, on %s. " | |
"Published feeds must always include the current date." % | |
formatted_date) | |
class InvalidLineEnd(ExceptionWithContext): | |
ERROR_TEXT = "Each line must end with CR LF or LF except for the last line " \ | |
"of the file. This line ends with \"%(bad_line_end)s\"." | |
class StopWithMultipleRouteTypes(ExceptionWithContext): | |
ERROR_TEXT = "Stop %(stop_name)s (ID=%(stop_id)s) belongs to both " \ | |
"subway (ID=%(route_id1)s) and bus line (ID=%(route_id2)s)." | |
class TooFastTravel(ExceptionWithContext): | |
def FormatProblem(self, d=None): | |
if not d: | |
d = self.GetDictToFormat() | |
if not d['speed']: | |
return "High speed travel detected in trip %(trip_id)s: %(prev_stop)s" \ | |
" to %(next_stop)s. %(dist).0f meters in %(time)d seconds." % d | |
else: | |
return "High speed travel detected in trip %(trip_id)s: %(prev_stop)s" \ | |
" to %(next_stop)s. %(dist).0f meters in %(time)d seconds." \ | |
" (%(speed).0f km/h)." % d | |
def __cmp__(self, y): | |
# Sort in decreasing order because more distance is more significant. We | |
# can't sort by speed because not all TooFastTravel objects have a speed. | |
return cmp(y.dist, self.dist) | |
class DuplicateTrip(ExceptionWithContext): | |
ERROR_TEXT = "Trip %(trip_id1)s of route %(route_id1)s might be duplicated " \ | |
"with trip %(trip_id2)s of route %(route_id2)s. They go " \ | |
"through the same stops with same service." | |
class OtherProblem(ExceptionWithContext): | |
ERROR_TEXT = '%(description)s' | |
class ExceptionProblemReporter(ProblemReporter): | |
def __init__(self, raise_warnings=False): | |
ProblemReporterBase.__init__(self) | |
self.raise_warnings = raise_warnings | |
def _Report(self, e): | |
if self.raise_warnings or e.IsError(): | |
raise e | |
else: | |
ProblemReporter._Report(self, e) | |
default_problem_reporter = ExceptionProblemReporter() | |
# Add a default handler to send log messages to console | |
console = logging.StreamHandler() | |
console.setLevel(logging.WARNING) | |
log = logging.getLogger("schedule_builder") | |
log.addHandler(console) | |
class Error(Exception): | |
pass | |
def IsValidURL(url): | |
"""Checks the validity of a URL value.""" | |
# TODO: Add more thorough checking of URL | |
return url.startswith(u'http://') or url.startswith(u'https://') | |
def IsValidColor(color): | |
"""Checks the validity of a hex color value.""" | |
return re.match('^[0-9a-fA-F]{6}$', color) is not None
def ColorLuminance(color): | |
"""Compute the brightness of an sRGB color using the formula from | |
http://www.w3.org/TR/2000/WD-AERT-20000426#color-contrast. | |
Args: | |
color: a string of six hex digits in the format verified by IsValidColor(). | |
Returns: | |
A floating-point number between 0.0 (black) and 255.0 (white). """ | |
r = int(color[0:2], 16) | |
g = int(color[2:4], 16) | |
b = int(color[4:6], 16) | |
return (299*r + 587*g + 114*b) / 1000.0 | |
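IsValidColor and ColorLuminance are used together when the validator compares a route's background and text colors for contrast. A self-contained sketch of both (snake_case names are ours):

```python
import re

def is_valid_color(color):
    # Six hex digits, as used by route_color / route_text_color in routes.txt.
    return re.match('^[0-9a-fA-F]{6}$', color) is not None

def color_luminance(color):
    # W3C perceived-brightness formula: (299*R + 587*G + 114*B) / 1000,
    # yielding 0.0 for black and 255.0 for white.
    r, g, b = (int(color[i:i + 2], 16) for i in (0, 2, 4))
    return (299 * r + 587 * g + 114 * b) / 1000.0
```

A large luminance difference between the two colors suggests readable text; the exact threshold the validator uses is defined elsewhere in the library.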
def IsEmpty(value): | |
return value is None or (isinstance(value, basestring) and not value.strip()) | |
def FindUniqueId(dic): | |
"""Return a string not used as a key in the dictionary dic""" | |
name = str(len(dic)) | |
while name in dic: | |
name = str(random.randint(1, 999999999)) | |
return name | |
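FindUniqueId is used when the library must invent an identifier that is absent from the feed. A minimal standalone sketch of the same approach (snake_case name is ours):

```python
import random

def find_unique_id(dic):
    # First try the dict's size as a cheap deterministic guess, then fall
    # back to random integers until an unused key is found.
    name = str(len(dic))
    while name in dic:
        name = str(random.randint(1, 999999999))
    return name
```

For an empty dict this returns '0'; for a dict whose keys happen to be '0'..'n-1' it returns 'n' on the first try.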
def TimeToSecondsSinceMidnight(time_string): | |
"""Convert HHH:MM:SS into seconds since midnight. | |
For example "01:02:03" returns 3723. The leading zero of the hours may be | |
omitted. HH may be more than 23 if the time is on the following day.""" | |
m = re.match(r'(\d{1,3}):([0-5]\d):([0-5]\d)$', time_string) | |
# ignored: matching for leap seconds | |
if not m: | |
raise Error, 'Bad HH:MM:SS "%s"' % time_string | |
return int(m.group(1)) * 3600 + int(m.group(2)) * 60 + int(m.group(3)) | |
def FormatSecondsSinceMidnight(s): | |
"""Formats an int number of seconds past midnight into a string | |
as "HH:MM:SS".""" | |
return "%02d:%02d:%02d" % (s / 3600, (s / 60) % 60, s % 60) | |
def DateStringToDateObject(date_string): | |
"""Return a date object for a string "YYYYMMDD".""" | |
# If this becomes a bottleneck date objects could be cached | |
return datetime.date(int(date_string[0:4]), int(date_string[4:6]), | |
int(date_string[6:8])) | |
def FloatStringToFloat(float_string): | |
"""Convert a float as a string to a float or raise an exception""" | |
# Will raise TypeError unless a string | |
if not re.match(r"^[+-]?\d+(\.\d+)?$", float_string): | |
raise ValueError() | |
return float(float_string) | |
def NonNegIntStringToInt(int_string): | |
"""Convert an non-negative integer string to an int or raise an exception""" | |
# Will raise TypeError unless a string | |
if not re.match(r"^(?:0|[1-9]\d*)$", int_string): | |
raise ValueError() | |
return int(int_string) | |
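Both converters deliberately reject strings that Python's float() and int() would accept, so that feeds cannot smuggle in values like "1e3" or "007". A sketch of the same checks (snake_case names are ours):

```python
import re

def float_string_to_float(float_string):
    # GTFS numeric fields must be plain decimals; reject "1e3", "nan", "1.".
    if not re.match(r"^[+-]?\d+(\.\d+)?$", float_string):
        raise ValueError('not a plain decimal: %r' % (float_string,))
    return float(float_string)

def non_neg_int_string_to_int(int_string):
    # Reject signs and leading zeros, so "007", "+1" and "-1" are errors.
    if not re.match(r"^(?:0|[1-9]\d*)$", int_string):
        raise ValueError('not a non-negative integer: %r' % (int_string,))
    return int(int_string)
```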
EARTH_RADIUS = 6378135 # in meters | |
def ApproximateDistance(degree_lat1, degree_lng1, degree_lat2, degree_lng2): | |
"""Compute approximate distance between two points in meters. Assumes the | |
Earth is a sphere.""" | |
# TODO: change to ellipsoid approximation, such as | |
# http://www.codeguru.com/Cpp/Cpp/algorithms/article.php/c5115/ | |
lat1 = math.radians(degree_lat1) | |
lng1 = math.radians(degree_lng1) | |
lat2 = math.radians(degree_lat2) | |
lng2 = math.radians(degree_lng2) | |
dlat = math.sin(0.5 * (lat2 - lat1)) | |
dlng = math.sin(0.5 * (lng2 - lng1)) | |
x = dlat * dlat + dlng * dlng * math.cos(lat1) * math.cos(lat2) | |
return EARTH_RADIUS * (2 * math.atan2(math.sqrt(x), | |
math.sqrt(max(0.0, 1.0 - x)))) | |
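The formula above is the haversine great-circle distance on a sphere of radius EARTH_RADIUS. A self-contained sketch (snake_case names are ours):

```python
import math

EARTH_RADIUS = 6378135  # equatorial radius, in meters

def approximate_distance(lat1_deg, lng1_deg, lat2_deg, lng2_deg):
    # Haversine formula; the max(0.0, ...) guards against tiny negative
    # values from floating-point rounding for near-antipodal points.
    lat1, lng1, lat2, lng2 = map(
        math.radians, (lat1_deg, lng1_deg, lat2_deg, lng2_deg))
    dlat = math.sin(0.5 * (lat2 - lat1))
    dlng = math.sin(0.5 * (lng2 - lng1))
    x = dlat * dlat + dlng * dlng * math.cos(lat1) * math.cos(lat2)
    return EARTH_RADIUS * 2 * math.atan2(math.sqrt(x),
                                         math.sqrt(max(0.0, 1.0 - x)))
```

One degree of latitude comes out to roughly 111 km, which is a quick sanity check on the constant.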
def ApproximateDistanceBetweenStops(stop1, stop2): | |
"""Compute approximate distance between two stops in meters. Assumes the | |
Earth is a sphere.""" | |
return ApproximateDistance(stop1.stop_lat, stop1.stop_lon, | |
stop2.stop_lat, stop2.stop_lon) | |
class GenericGTFSObject(object): | |
"""Object with arbitrary attributes which may be added to a schedule. | |
This class should be used as the base class for GTFS objects which may | |
be stored in a Schedule. It defines some methods for reading and writing | |
attributes. If self._schedule is None then the object is not in a Schedule.
Subclasses must: | |
* define an __init__ method which sets the _schedule member to None or a | |
weakref to a Schedule | |
* Set the _TABLE_NAME class variable to a name such as 'stops', 'agency', ... | |
* define methods to validate objects of that type | |
""" | |
def __getitem__(self, name): | |
"""Return a unicode or str representation of name or "" if not set.""" | |
if name in self.__dict__ and self.__dict__[name] is not None: | |
return "%s" % self.__dict__[name] | |
else: | |
return "" | |
def __getattr__(self, name): | |
"""Return None or the default value if name is a known attribute. | |
This method is only called when name is not found in __dict__. | |
""" | |
if name in self.__class__._FIELD_NAMES: | |
return None | |
else: | |
raise AttributeError(name) | |
def iteritems(self): | |
"""Return a iterable for (name, value) pairs of public attributes.""" | |
for name, value in self.__dict__.iteritems(): | |
if (not name) or name[0] == "_": | |
continue | |
yield name, value | |
def __setattr__(self, name, value): | |
"""Set an attribute, adding name to the list of columns as needed.""" | |
object.__setattr__(self, name, value) | |
if name[0] != '_' and self._schedule: | |
self._schedule.AddTableColumn(self.__class__._TABLE_NAME, name) | |
def __eq__(self, other): | |
"""Return true iff self and other are equivalent""" | |
if not other: | |
return False | |
if id(self) == id(other): | |
return True | |
for k in self.keys().union(other.keys()): | |
# use __getitem__ which returns "" for missing columns values | |
if self[k] != other[k]: | |
return False | |
return True | |
def __ne__(self, other): | |
return not self.__eq__(other) | |
def __repr__(self): | |
return "<%s %s>" % (self.__class__.__name__, sorted(self.iteritems())) | |
def keys(self): | |
"""Return iterable of columns used by this object.""" | |
columns = set() | |
for name in vars(self): | |
if (not name) or name[0] == "_": | |
continue | |
columns.add(name) | |
return columns | |
def _ColumnNames(self): | |
return self.keys() | |
class Stop(GenericGTFSObject): | |
"""Represents a single stop. A stop must have a latitude, longitude and name. | |
Callers may assign arbitrary values to instance attributes. | |
Stop.ParseAttributes validates attributes according to GTFS and converts some | |
into native types. ParseAttributes may delete invalid attributes. | |
Accessing an attribute that is a column in GTFS will return None if this | |
object does not have a value or it is ''. | |
A Stop object acts like a dict with string values. | |
Attributes: | |
stop_lat: a float representing the latitude of the stop | |
stop_lon: a float representing the longitude of the stop | |
All other attributes are strings. | |
""" | |
_REQUIRED_FIELD_NAMES = ['stop_id', 'stop_name', 'stop_lat', 'stop_lon'] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + \ | |
['stop_desc', 'zone_id', 'stop_url', 'stop_code', | |
'location_type', 'parent_station'] | |
_TABLE_NAME = 'stops' | |
def __init__(self, lat=None, lng=None, name=None, stop_id=None, | |
field_dict=None, stop_code=None): | |
"""Initialize a new Stop object. | |
Args: | |
field_dict: A dictionary mapping attribute name to unicode string | |
lat: a float, ignored when field_dict is present | |
lng: a float, ignored when field_dict is present | |
name: a string, ignored when field_dict is present | |
stop_id: a string, ignored when field_dict is present | |
stop_code: a string, ignored when field_dict is present | |
""" | |
self._schedule = None | |
if field_dict: | |
if isinstance(field_dict, Stop): | |
# Special case so that we don't need to re-parse the attributes to
# native types. iteritems returns all attributes that don't start with "_".
for k, v in field_dict.iteritems(): | |
self.__dict__[k] = v | |
else: | |
self.__dict__.update(field_dict) | |
else: | |
if lat is not None: | |
self.stop_lat = lat | |
if lng is not None: | |
self.stop_lon = lng | |
if name is not None: | |
self.stop_name = name | |
if stop_id is not None: | |
self.stop_id = stop_id | |
if stop_code is not None: | |
self.stop_code = stop_code | |
def GetTrips(self, schedule=None): | |
"""Return iterable containing trips that visit this stop.""" | |
return [trip for trip, ss in self._GetTripSequence(schedule)] | |
def _GetTripSequence(self, schedule=None): | |
"""Return a list of (trip, stop_sequence) for all trips visiting this stop. | |
A trip may be in the list multiple times with different indices.
stop_sequence is an integer. | |
Args: | |
schedule: Deprecated, do not use. | |
""" | |
if schedule is None: | |
schedule = getattr(self, "_schedule", None) | |
if schedule is None: | |
warnings.warn("No longer supported. _schedule attribute is used to get " | |
"stop_times table", DeprecationWarning) | |
cursor = schedule._connection.cursor() | |
cursor.execute("SELECT trip_id,stop_sequence FROM stop_times " | |
"WHERE stop_id=?", | |
(self.stop_id, )) | |
return [(schedule.GetTrip(row[0]), row[1]) for row in cursor] | |
def _GetTripIndex(self, schedule=None): | |
"""Return a list of (trip, index). | |
trip: a Trip object | |
index: an offset in trip.GetStopTimes() | |
""" | |
trip_index = [] | |
for trip, sequence in self._GetTripSequence(schedule): | |
for index, st in enumerate(trip.GetStopTimes()): | |
if st.stop_sequence == sequence: | |
trip_index.append((trip, index)) | |
break | |
else: | |
raise RuntimeError("stop_sequence %d not found in trip_id %s" %
                   (sequence, trip.trip_id))
    return trip_index

  def GetStopTimeTrips(self, schedule=None):
    """Return a list of (time, (trip, index), is_timepoint).

    time: an integer. It might be interpolated.
    trip: a Trip object.
    index: the offset of this stop in trip.GetStopTimes(), which may be
      different from the stop_sequence.
    is_timepoint: a bool
    """
    time_trips = []
    for trip, index in self._GetTripIndex(schedule):
      secs, stoptime, is_timepoint = trip.GetTimeInterpolatedStops()[index]
      time_trips.append((secs, (trip, index), is_timepoint))
    return time_trips

  def ParseAttributes(self, problems):
    """Parse all attributes, calling problems as needed."""
    # Need to use items() instead of iteritems() because _CheckAndSetAttr may
    # modify self.__dict__
    for name, value in vars(self).items():
      if name[0] == "_":
        continue
      self._CheckAndSetAttr(name, value, problems)

  def _CheckAndSetAttr(self, name, value, problems):
    """If value is valid for attribute name store it.

    If value is not valid call problems. Return a new value of the correct
    type or None if value couldn't be converted.
    """
    if name == 'stop_lat':
      try:
        if isinstance(value, (float, int)):
          self.stop_lat = value
        else:
          self.stop_lat = FloatStringToFloat(value)
      except (ValueError, TypeError):
        problems.InvalidValue('stop_lat', value)
        del self.stop_lat
      else:
        if self.stop_lat > 90 or self.stop_lat < -90:
          problems.InvalidValue('stop_lat', value)
    elif name == 'stop_lon':
      try:
        if isinstance(value, (float, int)):
          self.stop_lon = value
        else:
          self.stop_lon = FloatStringToFloat(value)
      except (ValueError, TypeError):
        problems.InvalidValue('stop_lon', value)
        del self.stop_lon
      else:
        if self.stop_lon > 180 or self.stop_lon < -180:
          problems.InvalidValue('stop_lon', value)
    elif name == 'stop_url':
      if value and not IsValidURL(value):
        problems.InvalidValue('stop_url', value)
        del self.stop_url
    elif name == 'location_type':
      if value == '':
        self.location_type = 0
      else:
        try:
          self.location_type = int(value)
        except (ValueError, TypeError):
          problems.InvalidValue('location_type', value)
          del self.location_type
        else:
          if self.location_type not in (0, 1):
            problems.InvalidValue('location_type', value, type=TYPE_WARNING)

  def __getattr__(self, name):
    """Return None or the default value if name is a known attribute.

    This method is only called when name is not found in __dict__.
    """
    if name == "location_type":
      return 0
    elif name == "trip_index":
      return self._GetTripIndex()
    elif name in Stop._FIELD_NAMES:
      return None
    else:
      raise AttributeError(name)

  def Validate(self, problems=default_problem_reporter):
    # First check that all required fields are present because ParseAttributes
    # may remove invalid attributes.
    for required in Stop._REQUIRED_FIELD_NAMES:
      if IsEmpty(getattr(self, required, None)):
        # TODO: For now I'm keeping the API stable but it would be cleaner to
        # treat whitespace stop_id as invalid, instead of missing
        problems.MissingValue(required)

    # Check individual values and convert to native types
    self.ParseAttributes(problems)

    # Check that this object is consistent with itself
    if (self.stop_lat is not None and self.stop_lon is not None and
        abs(self.stop_lat) < 1.0 and abs(self.stop_lon) < 1.0):
      problems.InvalidValue('stop_lat', self.stop_lat,
                            'Stop location too close to 0, 0',
                            type=TYPE_WARNING)
    if (self.stop_desc is not None and self.stop_name is not None and
        self.stop_desc and self.stop_name and
        not IsEmpty(self.stop_desc) and
        self.stop_name.strip().lower() == self.stop_desc.strip().lower()):
      problems.InvalidValue('stop_desc', self.stop_desc,
                            'stop_desc should not be the same as stop_name')
    if self.parent_station and self.location_type == 1:
      problems.InvalidValue('parent_station', self.parent_station,
                            'Stop row with location_type=1 (a station) must '
                            'not have a parent_station')


class Route(GenericGTFSObject):
  """Represents a single route."""

  _REQUIRED_FIELD_NAMES = [
      'route_id', 'route_short_name', 'route_long_name', 'route_type'
  ]
  _FIELD_NAMES = _REQUIRED_FIELD_NAMES + [
      'agency_id', 'route_desc', 'route_url', 'route_color', 'route_text_color'
  ]
  _ROUTE_TYPES = {
      0: {'name': 'Tram', 'max_speed': 100},
      1: {'name': 'Subway', 'max_speed': 150},
      2: {'name': 'Rail', 'max_speed': 300},
      3: {'name': 'Bus', 'max_speed': 100},
      4: {'name': 'Ferry', 'max_speed': 80},
      5: {'name': 'Cable Car', 'max_speed': 50},
      6: {'name': 'Gondola', 'max_speed': 50},
      7: {'name': 'Funicular', 'max_speed': 50},
  }
  # Create a reverse lookup dict of route type names to route types.
  _ROUTE_TYPE_IDS = set(_ROUTE_TYPES.keys())
  _ROUTE_TYPE_NAMES = dict((v['name'], k) for k, v in _ROUTE_TYPES.items())
  _TABLE_NAME = 'routes'

  def __init__(self, short_name=None, long_name=None, route_type=None,
               route_id=None, agency_id=None, field_dict=None):
    self._schedule = None
    self._trips = []
    if not field_dict:
      field_dict = {}
      if short_name is not None:
        field_dict['route_short_name'] = short_name
      if long_name is not None:
        field_dict['route_long_name'] = long_name
      if route_type is not None:
        if route_type in Route._ROUTE_TYPE_NAMES:
          self.route_type = Route._ROUTE_TYPE_NAMES[route_type]
        else:
          field_dict['route_type'] = route_type
      if route_id is not None:
        field_dict['route_id'] = route_id
      if agency_id is not None:
        field_dict['agency_id'] = agency_id
    self.__dict__.update(field_dict)

  def AddTrip(self, schedule, headsign, service_period=None, trip_id=None):
    """Adds a trip to this route.

    Args:
      headsign: headsign of the trip as a string

    Returns:
      a new Trip object
    """
    if trip_id is None:
      trip_id = unicode(len(schedule.trips))
    if service_period is None:
      service_period = schedule.GetDefaultServicePeriod()
    trip = Trip(route=self, headsign=headsign, service_period=service_period,
                trip_id=trip_id)
    schedule.AddTripObject(trip)
    return trip

  def _AddTripObject(self, trip):
    # Only class Schedule may call this. Users of the API should call
    # Route.AddTrip or schedule.AddTripObject.
    self._trips.append(trip)

  def __getattr__(self, name):
    """Return None or the default value if name is a known attribute.

    This method overrides GenericGTFSObject.__getattr__ to provide backwards
    compatible access to trips.
    """
    if name == 'trips':
      return self._trips
    else:
      return GenericGTFSObject.__getattr__(self, name)

  def GetPatternIdTripDict(self):
    """Return a dictionary that maps pattern_id to a list of Trip objects."""
    d = {}
    for t in self._trips:
      d.setdefault(t.pattern_id, []).append(t)
    return d
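The grouping idea behind GetPatternIdTripDict can be shown in isolation. This is a minimal sketch, not the class above: trips are simplified to (trip_id, stop_pattern) pairs, and `group_by_pattern` is a hypothetical name, while the real method keys on `Trip.pattern_id`, a hash of the tuple of stops visited.

```python
# Sketch of pattern grouping: trips sharing an identical stop sequence are
# collected under the same key, mirroring d.setdefault(...).append(...) above.
def group_by_pattern(trips):
    """Map each stop pattern (as a tuple) to the trip_ids that share it."""
    d = {}
    for trip_id, pattern in trips:
        d.setdefault(tuple(pattern), []).append(trip_id)
    return d

runs = group_by_pattern([
    ("t1", ["A", "B", "C"]),
    ("t2", ["A", "B", "C"]),
    ("t3", ["A", "C"]),
])
```

Trips "t1" and "t2" land in one bucket because their stop tuples compare equal, which is exactly why the real code hashes the pattern tuple.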

  def Validate(self, problems=default_problem_reporter):
    if IsEmpty(self.route_id):
      problems.MissingValue('route_id')
    if IsEmpty(self.route_type):
      problems.MissingValue('route_type')
    if IsEmpty(self.route_short_name) and IsEmpty(self.route_long_name):
      problems.InvalidValue('route_short_name',
                            self.route_short_name,
                            'Both route_short_name and '
                            'route_long_name are blank.')
    if self.route_short_name and len(self.route_short_name) > 6:
      problems.InvalidValue('route_short_name',
                            self.route_short_name,
                            'This route_short_name is relatively long, which '
                            'probably means that it contains a place name. '
                            'You should only use this field to hold a short '
                            'code that riders use to identify a route. '
                            'If this route doesn\'t have such a code, it\'s '
                            'OK to leave this field empty.', type=TYPE_WARNING)
    if self.route_short_name and self.route_long_name:
      short_name = self.route_short_name.strip().lower()
      long_name = self.route_long_name.strip().lower()
      if (long_name.startswith(short_name + ' ') or
          long_name.startswith(short_name + '(') or
          long_name.startswith(short_name + '-')):
        problems.InvalidValue('route_long_name',
                              self.route_long_name,
                              'route_long_name shouldn\'t contain '
                              'the route_short_name value, as both '
                              'fields are often displayed '
                              'side-by-side.', type=TYPE_WARNING)
      if long_name == short_name:
        problems.InvalidValue('route_long_name',
                              self.route_long_name,
                              'route_long_name shouldn\'t be the same as '
                              'the route_short_name value, as both '
                              'fields are often displayed '
                              'side-by-side. It\'s OK to omit either the '
                              'short or long name (but not both).',
                              type=TYPE_WARNING)
    if (self.route_desc and
        ((self.route_desc == self.route_short_name) or
         (self.route_desc == self.route_long_name))):
      problems.InvalidValue('route_desc',
                            self.route_desc,
                            'route_desc shouldn\'t be the same as '
                            'route_short_name or route_long_name')
    if self.route_type is not None:
      try:
        if not isinstance(self.route_type, int):
          self.route_type = NonNegIntStringToInt(self.route_type)
      except (TypeError, ValueError):
        problems.InvalidValue('route_type', self.route_type)
      else:
        if self.route_type not in Route._ROUTE_TYPE_IDS:
          problems.InvalidValue('route_type',
                                self.route_type,
                                type=TYPE_WARNING)
    if self.route_url and not IsValidURL(self.route_url):
      problems.InvalidValue('route_url', self.route_url)

    txt_lum = ColorLuminance('000000')  # black (default)
    bg_lum = ColorLuminance('ffffff')   # white (default)
    if self.route_color:
      if IsValidColor(self.route_color):
        bg_lum = ColorLuminance(self.route_color)
      else:
        problems.InvalidValue('route_color', self.route_color,
                              'route_color should be a valid color '
                              'description, which consists of 6 hexadecimal '
                              'characters representing the RGB values. '
                              'Example: 44AA06')
    if self.route_text_color:
      if IsValidColor(self.route_text_color):
        txt_lum = ColorLuminance(self.route_text_color)
      else:
        problems.InvalidValue('route_text_color', self.route_text_color,
                              'route_text_color should be a valid color '
                              'description, which consists of 6 hexadecimal '
                              'characters representing the RGB values. '
                              'Example: 44AA06')
    if abs(txt_lum - bg_lum) < 510/7.:
      # http://www.w3.org/TR/2000/WD-AERT-20000426#color-contrast recommends
      # a threshold of 125, but that is for normal text and too harsh for
      # big colored logos like line names, so we keep the original threshold
      # from r541 (but note that weight has shifted between RGB components).
      problems.InvalidValue('route_color', self.route_color,
                            'The route_text_color and route_color should '
                            'be set to contrasting colors, as they are used '
                            'as the text and background color (respectively) '
                            'for displaying route names. When left blank, '
                            'route_text_color defaults to 000000 (black) and '
                            'route_color defaults to FFFFFF (white). A common '
                            'source of issues here is setting route_color to '
                            'a dark color, while leaving route_text_color set '
                            'to black. In this case, route_text_color should '
                            'be set to a lighter color like FFFFFF to ensure '
                            'a legible contrast between the two.',
                            type=TYPE_WARNING)
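The contrast test above can be sketched standalone. ColorLuminance is defined elsewhere in this module; here we assume it follows the W3C AERT brightness formula (299*R + 587*G + 114*B) / 1000, which yields values in 0..255, so the names and the exact formula below are assumptions for illustration.

```python
# Sketch of the route_color / route_text_color contrast check, assuming the
# W3C AERT brightness formula for ColorLuminance (an assumption; the real
# implementation lives elsewhere in transitfeed).
def color_luminance(color):
    """Brightness (0..255) of a 6-hex-digit RGB color string."""
    r = int(color[0:2], 16)
    g = int(color[2:4], 16)
    b = int(color[4:6], 16)
    return (299 * r + 587 * g + 114 * b) / 1000.0

def has_enough_contrast(route_color, route_text_color):
    # Same threshold as the validator: 510/7 (about 72.9), deliberately
    # looser than the 125 the W3C recommends for body text.
    return abs(color_luminance(route_color) -
               color_luminance(route_text_color)) >= 510 / 7.0
```

Black text on a white background has a luminance gap of 255 and passes easily; black text on a very dark gray background fails, which is the warning case described in the message above.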


def SortListOfTripByTime(trips):
  trips.sort(key=Trip.GetStartTime)


class StopTime(object):
  """Represents a single stop of a trip. StopTime contains most of the columns
  from the stop_times.txt file. It does not contain trip_id, which is implied
  by the Trip used to access it.

  See the Google Transit Feed Specification for the semantic details.

  stop: A Stop object
  arrival_time: str in the form HH:MM:SS; readonly after __init__
  departure_time: str in the form HH:MM:SS; readonly after __init__
  arrival_secs: int number of seconds since midnight
  departure_secs: int number of seconds since midnight
  stop_headsign: str
  pickup_type: int
  drop_off_type: int
  shape_dist_traveled: float
  stop_id: str; readonly
  stop_time: The only time given for this stop. If present, it is used
    for both arrival and departure time.
  stop_sequence: int
  """
  _REQUIRED_FIELD_NAMES = ['trip_id', 'arrival_time', 'departure_time',
                           'stop_id', 'stop_sequence']
  _OPTIONAL_FIELD_NAMES = ['stop_headsign', 'pickup_type',
                           'drop_off_type', 'shape_dist_traveled']
  _FIELD_NAMES = _REQUIRED_FIELD_NAMES + _OPTIONAL_FIELD_NAMES
  _SQL_FIELD_NAMES = ['trip_id', 'arrival_secs', 'departure_secs',
                      'stop_id', 'stop_sequence', 'stop_headsign',
                      'pickup_type', 'drop_off_type', 'shape_dist_traveled']
  __slots__ = ('arrival_secs', 'departure_secs', 'stop',
               'stop_headsign', 'pickup_type', 'drop_off_type',
               'shape_dist_traveled', 'stop_sequence')

  def __init__(self, problems, stop,
               arrival_time=None, departure_time=None,
               stop_headsign=None, pickup_type=None, drop_off_type=None,
               shape_dist_traveled=None, arrival_secs=None,
               departure_secs=None, stop_time=None, stop_sequence=None):
    if stop_time != None:
      arrival_time = departure_time = stop_time

    if arrival_secs != None:
      self.arrival_secs = arrival_secs
    elif arrival_time in (None, ""):
      self.arrival_secs = None  # Untimed
      arrival_time = None
    else:
      try:
        self.arrival_secs = TimeToSecondsSinceMidnight(arrival_time)
      except Error:
        problems.InvalidValue('arrival_time', arrival_time)
        self.arrival_secs = None

    if departure_secs != None:
      self.departure_secs = departure_secs
    elif departure_time in (None, ""):
      self.departure_secs = None
      departure_time = None
    else:
      try:
        self.departure_secs = TimeToSecondsSinceMidnight(departure_time)
      except Error:
        problems.InvalidValue('departure_time', departure_time)
        self.departure_secs = None

    if not isinstance(stop, Stop):
      # Not quite correct, but better than letting the problem propagate
      problems.InvalidValue('stop', stop)
    self.stop = stop
    self.stop_headsign = stop_headsign

    if pickup_type in (None, ""):
      self.pickup_type = None
    else:
      try:
        pickup_type = int(pickup_type)
      except ValueError:
        problems.InvalidValue('pickup_type', pickup_type)
      else:
        if pickup_type < 0 or pickup_type > 3:
          problems.InvalidValue('pickup_type', pickup_type)
      self.pickup_type = pickup_type

    if drop_off_type in (None, ""):
      self.drop_off_type = None
    else:
      try:
        drop_off_type = int(drop_off_type)
      except ValueError:
        problems.InvalidValue('drop_off_type', drop_off_type)
      else:
        if drop_off_type < 0 or drop_off_type > 3:
          problems.InvalidValue('drop_off_type', drop_off_type)
      self.drop_off_type = drop_off_type

    if (self.pickup_type == 1 and self.drop_off_type == 1 and
        self.arrival_secs == None and self.departure_secs == None):
      problems.OtherProblem('This stop time has a pickup_type and '
                            'drop_off_type of 1, indicating that riders '
                            'can\'t get on or off here. Since it doesn\'t '
                            'define a timepoint either, this entry serves no '
                            'purpose and should be excluded from the trip.',
                            type=TYPE_WARNING)
    if ((self.arrival_secs != None) and (self.departure_secs != None) and
        (self.departure_secs < self.arrival_secs)):
      problems.InvalidValue('departure_time', departure_time,
                            'The departure time at this stop (%s) is before '
                            'the arrival time (%s). This is often caused by '
                            'problems in the feed exporter\'s time conversion.'
                            % (FormatSecondsSinceMidnight(self.departure_secs),
                               FormatSecondsSinceMidnight(self.arrival_secs)))

    # If the caller passed a valid arrival time but didn't attempt to pass a
    # departure time, complain
    if (self.arrival_secs != None and
        self.departure_secs == None and departure_time == None):
      # self.departure_secs might be None because departure_time was invalid,
      # so we need to check both
      problems.MissingValue('departure_time',
                            'arrival_time and departure_time should either '
                            'both be provided or both be left blank. '
                            'It\'s OK to set them both to the same value.')
    # If the caller passed a valid departure time but didn't attempt to pass
    # an arrival time, complain
    if (self.departure_secs != None and
        self.arrival_secs == None and arrival_time == None):
      problems.MissingValue('arrival_time',
                            'arrival_time and departure_time should either '
                            'both be provided or both be left blank. '
                            'It\'s OK to set them both to the same value.')

    if shape_dist_traveled in (None, ""):
      self.shape_dist_traveled = None
    else:
      try:
        self.shape_dist_traveled = float(shape_dist_traveled)
      except ValueError:
        problems.InvalidValue('shape_dist_traveled', shape_dist_traveled)

    if stop_sequence is not None:
      self.stop_sequence = stop_sequence

  def GetFieldValuesTuple(self, trip_id):
    """Return a tuple that outputs a row of _FIELD_NAMES.

    trip_id must be provided because it is not stored in StopTime.
    """
    result = []
    for fn in StopTime._FIELD_NAMES:
      if fn == 'trip_id':
        result.append(trip_id)
      else:
        result.append(getattr(self, fn) or '')
    return tuple(result)

  def GetSqlValuesTuple(self, trip_id):
    result = []
    for fn in StopTime._SQL_FIELD_NAMES:
      if fn == 'trip_id':
        result.append(trip_id)
      else:
        # This might append None, which will be inserted into SQLite as NULL
        result.append(getattr(self, fn))
    return tuple(result)

  def GetTimeSecs(self):
    """Return the first of arrival_secs and departure_secs that is not None.

    If both are None return None."""
    if self.arrival_secs != None:
      return self.arrival_secs
    elif self.departure_secs != None:
      return self.departure_secs
    else:
      return None

  def __getattr__(self, name):
    if name == 'stop_id':
      return self.stop.stop_id
    elif name == 'arrival_time':
      return (self.arrival_secs != None and
              FormatSecondsSinceMidnight(self.arrival_secs) or '')
    elif name == 'departure_time':
      return (self.departure_secs != None and
              FormatSecondsSinceMidnight(self.departure_secs) or '')
    elif name == 'shape_dist_traveled':
      return ''
    raise AttributeError(name)
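StopTime.__getattr__ leans on FormatSecondsSinceMidnight and TimeToSecondsSinceMidnight, which are defined elsewhere in this module. The sketch below shows plausible minimal versions of both, under the assumption that they do plain arithmetic on seconds; the lowercase names are illustrative, not the module's. Note that GTFS times may exceed 24:00:00 for service that runs past midnight, so hours must not wrap.

```python
# Minimal sketch of the time helpers StopTime relies on (assumed behavior;
# the real implementations live elsewhere in transitfeed).
def format_seconds_since_midnight(secs):
    """Render seconds since midnight as HH:MM:SS; hours may exceed 23."""
    return "%02d:%02d:%02d" % (secs // 3600, (secs // 60) % 60, secs % 60)

def time_to_seconds_since_midnight(hhmmss):
    """Parse an HH:MM:SS string (hours may exceed 23) into seconds."""
    h, m, s = hhmmss.split(":")
    return int(h) * 3600 + int(m) * 60 + int(s)
```

A stop served at 2:05:01 the morning after the service day starts is written as "26:05:01", not "02:05:01", so that stop_times within one trip stay monotonically increasing.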


class Trip(GenericGTFSObject):
  _REQUIRED_FIELD_NAMES = ['route_id', 'service_id', 'trip_id']
  _FIELD_NAMES = _REQUIRED_FIELD_NAMES + [
      'trip_headsign', 'direction_id', 'block_id', 'shape_id'
  ]
  _FIELD_NAMES_HEADWAY = ['trip_id', 'start_time', 'end_time', 'headway_secs']
  _TABLE_NAME = "trips"

  def __init__(self, headsign=None, service_period=None,
               route=None, trip_id=None, field_dict=None):
    self._schedule = None
    self._headways = []  # [(start_time, end_time, headway_secs)]
    if not field_dict:
      field_dict = {}
      if headsign is not None:
        field_dict['trip_headsign'] = headsign
      if route:
        field_dict['route_id'] = route.route_id
      if trip_id is not None:
        field_dict['trip_id'] = trip_id
      if service_period is not None:
        field_dict['service_id'] = service_period.service_id
      # Earlier versions of transitfeed.py assigned self.service_period here
      # and allowed the caller to set self.service_id. Schedule.Validate
      # checked the service_id attribute if it was assigned and changed it to
      # a service_period attribute. Now only the service_id attribute is used
      # and it is validated by Trip.Validate.
      if service_period is not None:
        # For backwards compatibility
        self.service_id = service_period.service_id
    self.__dict__.update(field_dict)

  def GetFieldValuesTuple(self):
    return [getattr(self, fn) or '' for fn in Trip._FIELD_NAMES]

  def AddStopTime(self, stop, problems=None, schedule=None, **kwargs):
    """Add a stop to this trip. Stops must be added in the order visited.

    Args:
      stop: A Stop object
      kwargs: remaining keyword args passed to StopTime.__init__

    Returns:
      None
    """
    if problems is None:
      # TODO: delete this branch when StopTime.__init__ doesn't need a
      # ProblemReporter
      problems = default_problem_reporter
    stoptime = StopTime(problems=problems, stop=stop, **kwargs)
    self.AddStopTimeObject(stoptime, schedule)

  def _AddStopTimeObjectUnordered(self, stoptime, schedule):
    """Add StopTime object to this trip.

    The trip isn't checked for duplicate sequence numbers so it must be
    validated later."""
    cursor = schedule._connection.cursor()
    insert_query = "INSERT INTO stop_times (%s) VALUES (%s);" % (
        ','.join(StopTime._SQL_FIELD_NAMES),
        ','.join(['?'] * len(StopTime._SQL_FIELD_NAMES)))
    cursor.execute(
        insert_query, stoptime.GetSqlValuesTuple(self.trip_id))

  def ReplaceStopTimeObject(self, stoptime, schedule=None):
    """Replace a StopTime object from this trip with the given one.

    The StopTime object to be replaced is keyed by trip_id, stop_sequence
    and stop_id, all taken from 'stoptime'.
    """
    if schedule is None:
      schedule = self._schedule

    cursor = schedule._connection.cursor()
    cursor.execute("DELETE FROM stop_times WHERE trip_id=? and "
                   "stop_sequence=? and stop_id=?",
                   (self.trip_id, stoptime.stop_sequence, stoptime.stop_id))
    if cursor.rowcount == 0:
      raise Error('Attempted replacement of StopTime object which does not '
                  'exist')
    self._AddStopTimeObjectUnordered(stoptime, schedule)

  def AddStopTimeObject(self, stoptime, schedule=None, problems=None):
    """Add a StopTime object to the end of this trip.

    Args:
      stoptime: A StopTime object. Should not be reused in multiple trips.
      schedule: Schedule object containing this trip which must be
          passed to Trip.__init__ or here
      problems: ProblemReporter object for validating the StopTime in its new
          home

    Returns:
      None
    """
    if schedule is None:
      schedule = self._schedule
    if schedule is None:
      warnings.warn("No longer supported. _schedule attribute is used to get "
                    "stop_times table", DeprecationWarning)
    if problems is None:
      problems = schedule.problem_reporter

    new_secs = stoptime.GetTimeSecs()
    cursor = schedule._connection.cursor()
    cursor.execute("SELECT max(stop_sequence), max(arrival_secs), "
                   "max(departure_secs) FROM stop_times WHERE trip_id=?",
                   (self.trip_id,))
    row = cursor.fetchone()
    if row[0] is None:
      # This is the first stop_time of the trip
      stoptime.stop_sequence = 1
      if new_secs == None:
        problems.OtherProblem(
            'No time for first StopTime of trip_id "%s"' % (self.trip_id,))
    else:
      stoptime.stop_sequence = row[0] + 1
      prev_secs = max(row[1], row[2])
      if new_secs != None and new_secs < prev_secs:
        problems.OtherProblem(
            'out of order stop time for stop_id=%s trip_id=%s %s < %s' %
            (EncodeUnicode(stoptime.stop_id), EncodeUnicode(self.trip_id),
             FormatSecondsSinceMidnight(new_secs),
             FormatSecondsSinceMidnight(prev_secs)))
    self._AddStopTimeObjectUnordered(stoptime, schedule)

  def GetTimeStops(self):
    """Return a list of (arrival_secs, departure_secs, stop) tuples.

    Caution: arrival_secs and departure_secs may be 0, a false value meaning a
    stop at midnight, or None, a false value meaning the stop is untimed."""
    return [(st.arrival_secs, st.departure_secs, st.stop) for st in
            self.GetStopTimes()]

  def GetCountStopTimes(self):
    """Return the number of stops made by this trip."""
    cursor = self._schedule._connection.cursor()
    cursor.execute(
        'SELECT count(*) FROM stop_times WHERE trip_id=?', (self.trip_id,))
    return cursor.fetchone()[0]

  def GetTimeInterpolatedStops(self):
    """Return a list of (secs, stoptime, is_timepoint) tuples.

    secs will always be an int. If the StopTime object does not have explicit
    times this method guesses using distance. stoptime is a StopTime object
    and is_timepoint is a bool.

    Raises:
      ValueError if this trip does not have the times needed to interpolate
    """
    rv = []

    stoptimes = self.GetStopTimes()
    # If there are no stoptimes [] is the correct return value but if the
    # start or end are missing times there is no correct return value.
    if not stoptimes:
      return []
    if (stoptimes[0].GetTimeSecs() is None or
        stoptimes[-1].GetTimeSecs() is None):
      raise ValueError("%s must have time at first and last stop" % (self))

    cur_timepoint = None
    next_timepoint = None
    distance_between_timepoints = 0
    distance_traveled_between_timepoints = 0

    for i, st in enumerate(stoptimes):
      if st.GetTimeSecs() != None:
        cur_timepoint = st
        distance_between_timepoints = 0
        distance_traveled_between_timepoints = 0
        if i + 1 < len(stoptimes):
          k = i + 1
          distance_between_timepoints += ApproximateDistanceBetweenStops(
              stoptimes[k-1].stop, stoptimes[k].stop)
          while stoptimes[k].GetTimeSecs() == None:
            k += 1
            distance_between_timepoints += ApproximateDistanceBetweenStops(
                stoptimes[k-1].stop, stoptimes[k].stop)
          next_timepoint = stoptimes[k]
        rv.append((st.GetTimeSecs(), st, True))
      else:
        distance_traveled_between_timepoints += (
            ApproximateDistanceBetweenStops(stoptimes[i-1].stop, st.stop))
        distance_percent = (distance_traveled_between_timepoints /
                            distance_between_timepoints)
        total_time = (next_timepoint.GetTimeSecs() -
                      cur_timepoint.GetTimeSecs())
        time_estimate = (distance_percent * total_time +
                         cur_timepoint.GetTimeSecs())
        rv.append((int(round(time_estimate)), st, False))

    return rv
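The interpolation above can be demonstrated without Stop or StopTime objects. This is a simplified sketch, not the method itself: each stop is reduced to a pair of (seconds-since-midnight or None, distance from the previous stop), standing in for the StopTime objects and ApproximateDistanceBetweenStops; the function name is illustrative.

```python
# Sketch of distance-weighted time interpolation: stops without explicit
# times get an estimate proportional to distance travelled between the two
# surrounding timepoints, mirroring GetTimeInterpolatedStops above.
def interpolate_times(stops):
    """stops: list of (secs_or_None, distance_from_previous_stop).

    Returns a list of (secs, is_timepoint) with every time filled in.
    The first and last stop must be timed, as in the real method.
    """
    times = [t for t, _ in stops]
    assert times[0] is not None and times[-1] is not None
    result = []
    i = 0
    while i < len(times):
        if times[i] is not None:
            result.append((times[i], True))
            i += 1
            continue
        # Find the timepoint that closes this untimed run.
        j = i
        while times[j] is None:
            j += 1
        prev_t = result[-1][0]  # the timepoint just before the run
        span = sum(d for _, d in stops[i:j + 1])  # distance between timepoints
        travelled = 0.0
        for k in range(i, j):
            travelled += stops[k][1]
            frac = travelled / span
            result.append((int(round(prev_t + frac * (times[j] - prev_t))),
                           False))
        i = j
    return result
```

With a timepoint at 0 seconds, two untimed stops, and a timepoint at 300 seconds, equal inter-stop distances yield estimates at one third and two thirds of the interval.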

  def ClearStopTimes(self):
    """Remove all stop times from this trip.

    StopTime objects previously returned by GetStopTimes are unchanged but are
    no longer associated with this trip.
    """
    cursor = self._schedule._connection.cursor()
    cursor.execute('DELETE FROM stop_times WHERE trip_id=?', (self.trip_id,))

  def GetStopTimes(self, problems=None):
    """Return a sorted list of StopTime objects for this trip."""
    # In theory problems=None should be safe because data from database has
    # been validated. See comment in _LoadStopTimes for why this isn't always
    # true.
    cursor = self._schedule._connection.cursor()
    cursor.execute(
        'SELECT arrival_secs,departure_secs,stop_headsign,pickup_type,'
        'drop_off_type,shape_dist_traveled,stop_id,stop_sequence FROM '
        'stop_times WHERE '
        'trip_id=? ORDER BY stop_sequence', (self.trip_id,))
    stop_times = []
    for row in cursor.fetchall():
      stop = self._schedule.GetStop(row[6])
      stop_times.append(StopTime(problems=problems, stop=stop,
                                 arrival_secs=row[0],
                                 departure_secs=row[1],
                                 stop_headsign=row[2],
                                 pickup_type=row[3],
                                 drop_off_type=row[4],
                                 shape_dist_traveled=row[5],
                                 stop_sequence=row[7]))
    return stop_times

  def GetHeadwayStopTimes(self, problems=None):
    """Return a list of StopTime objects for each headway-based run.

    Returns:
      a list of lists of StopTime objects. Each list of StopTime objects
      represents one run. If this trip doesn't have headways returns an empty
      list.
    """
    stoptimes_list = []  # list of stoptime lists to be returned
    stoptime_pattern = self.GetStopTimes()
    first_secs = stoptime_pattern[0].arrival_secs  # first time of the trip
    # for each start time of a headway run
    for run_secs in self.GetHeadwayStartTimes():
      # stop time list for a headway run
      stoptimes = []
      # go through the pattern and generate stoptimes
      for st in stoptime_pattern:
        # default value if the stoptime is not a timepoint
        arrival_secs, departure_secs = None, None
        if st.arrival_secs != None:
          arrival_secs = st.arrival_secs - first_secs + run_secs
        if st.departure_secs != None:
          departure_secs = st.departure_secs - first_secs + run_secs
        # append stoptime
        stoptimes.append(StopTime(problems=problems, stop=st.stop,
                                  arrival_secs=arrival_secs,
                                  departure_secs=departure_secs,
                                  stop_headsign=st.stop_headsign,
                                  pickup_type=st.pickup_type,
                                  drop_off_type=st.drop_off_type,
                                  shape_dist_traveled=st.shape_dist_traveled,
                                  stop_sequence=st.stop_sequence))
      # add stoptimes to the stoptimes_list
      stoptimes_list.append(stoptimes)
    return stoptimes_list

  def GetStartTime(self, problems=default_problem_reporter):
    """Return the first time of the trip.

    TODO: For trips defined by frequency return the first time of the first
    trip."""
    cursor = self._schedule._connection.cursor()
    cursor.execute(
        'SELECT arrival_secs,departure_secs FROM stop_times WHERE '
        'trip_id=? ORDER BY stop_sequence LIMIT 1', (self.trip_id,))
    (arrival_secs, departure_secs) = cursor.fetchone()
    if arrival_secs != None:
      return arrival_secs
    elif departure_secs != None:
      return departure_secs
    else:
      problems.InvalidValue('departure_time', '',
                            'The first stop_time in trip %s is missing '
                            'times.' % self.trip_id)

  def GetHeadwayStartTimes(self):
    """Return a list of start times for each headway-based run.

    Returns:
      a sorted list of seconds since midnight, the start time of each run. If
      this trip doesn't have headways returns an empty list."""
    start_times = []
    # for each headway period of the trip
    for start_secs, end_secs, headway_secs in self.GetHeadwayPeriodTuples():
      # reset run secs to the start of the timeframe
      run_secs = start_secs
      while run_secs < end_secs:
        start_times.append(run_secs)
        # increment current run secs by headway secs
        run_secs += headway_secs
    return start_times
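The expansion loop in GetHeadwayStartTimes is self-contained enough to sketch directly. The standalone function below mirrors it exactly; only the name is illustrative, and note that the end bound is exclusive, so a run never starts at end_secs itself.

```python
# Sketch of headway expansion: each (start_secs, end_secs, headway_secs)
# period yields run start times spaced headway_secs apart, up to but not
# including end_secs, exactly as in GetHeadwayStartTimes above.
def headway_start_times(periods):
    start_times = []
    for start_secs, end_secs, headway_secs in periods:
        run_secs = start_secs
        while run_secs < end_secs:
            start_times.append(run_secs)
            run_secs += headway_secs
    return start_times
```

A period from 0 to 900 seconds with a 300-second headway produces runs at 0, 300, and 600; 900 is excluded because it equals the end bound.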
def GetEndTime(self, problems=default_problem_reporter): | |
"""Return the last time of the trip. TODO: For trips defined by frequency | |
return the last time of the last trip.""" | |
cursor = self._schedule._connection.cursor() | |
cursor.execute( | |
'SELECT arrival_secs,departure_secs FROM stop_times WHERE ' | |
'trip_id=? ORDER BY stop_sequence DESC LIMIT 1', (self.trip_id,)) | |
(arrival_secs, departure_secs) = cursor.fetchone() | |
if departure_secs != None: | |
return departure_secs | |
elif arrival_secs != None: | |
return arrival_secs | |
else: | |
problems.InvalidValue('arrival_time', '', | |
'The last stop_time in trip %s is missing ' | |
'times.' % self.trip_id) | |
def _GenerateStopTimesTuples(self): | |
"""Generator for rows of the stop_times file""" | |
stoptimes = self.GetStopTimes() | |
for i, st in enumerate(stoptimes): | |
yield st.GetFieldValuesTuple(self.trip_id) | |
def GetStopTimesTuples(self): | |
results = [] | |
for time_tuple in self._GenerateStopTimesTuples(): | |
results.append(time_tuple) | |
return results | |
def GetPattern(self): | |
"""Return a tuple of Stop objects, in the order visited""" | |
stoptimes = self.GetStopTimes() | |
return tuple(st.stop for st in stoptimes) | |
def AddHeadwayPeriod(self, start_time, end_time, headway_secs, | |
problem_reporter=default_problem_reporter): | |
"""Adds a period to this trip during which the vehicle travels | |
at regular intervals (rather than specifying exact times for each stop). | |
Args: | |
start_time: The time at which this headway period starts, either in | |
numerical seconds since midnight or as "HH:MM:SS" since midnight. | |
end_time: The time at which this headway period ends, either in | |
numerical seconds since midnight or as "HH:MM:SS" since midnight. | |
This value should be larger than start_time. | |
headway_secs: The amount of time, in seconds, between occurences of | |
this trip. | |
problem_reporter: Optional parameter that can be used to select | |
how any errors in the other input parameters will be reported. | |
Returns: | |
None | |
""" | |
if start_time == None or start_time == '': # 0 is OK | |
problem_reporter.MissingValue('start_time') | |
return | |
if isinstance(start_time, basestring): | |
try: | |
start_time = TimeToSecondsSinceMidnight(start_time) | |
except Error: | |
problem_reporter.InvalidValue('start_time', start_time) | |
return | |
    elif start_time < 0:
      problem_reporter.InvalidValue('start_time', start_time)
      return
    if end_time is None or end_time == '':
problem_reporter.MissingValue('end_time') | |
return | |
if isinstance(end_time, basestring): | |
try: | |
end_time = TimeToSecondsSinceMidnight(end_time) | |
except Error: | |
problem_reporter.InvalidValue('end_time', end_time) | |
return | |
elif end_time < 0: | |
problem_reporter.InvalidValue('end_time', end_time) | |
return | |
if not headway_secs: | |
problem_reporter.MissingValue('headway_secs') | |
return | |
try: | |
headway_secs = int(headway_secs) | |
except ValueError: | |
problem_reporter.InvalidValue('headway_secs', headway_secs) | |
return | |
if headway_secs <= 0: | |
problem_reporter.InvalidValue('headway_secs', headway_secs) | |
return | |
if end_time <= start_time: | |
problem_reporter.InvalidValue('end_time', end_time, | |
'should be greater than start_time') | |
self._headways.append((start_time, end_time, headway_secs)) | |
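# AddHeadwayPeriod accepts either seconds since midnight or "HH:MM:SS"
# strings and enforces ordering and positivity. A self-contained sketch of
# those checks, with a helper that is assumed to behave like
# TimeToSecondsSinceMidnight (hours past 23 are allowed for times after
# midnight):

```python
import re

def time_to_seconds_since_midnight(time_string):
    # Parse "HH:MM:SS" into seconds since midnight; hours may exceed 23
    # for trips running past midnight.
    match = re.match(r'^(\d{1,3}):([0-5]\d):([0-5]\d)$', time_string)
    if not match:
        raise ValueError('invalid time string: %r' % time_string)
    hours, minutes, seconds = (int(g) for g in match.groups())
    return hours * 3600 + minutes * 60 + seconds

def headway_period_is_valid(start_time, end_time, headway_secs):
    # Same ordering and positivity rules as AddHeadwayPeriod, minus the
    # problem-reporter plumbing.
    return start_time >= 0 and end_time > start_time and headway_secs > 0
```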
def ClearHeadwayPeriods(self): | |
self._headways = [] | |
def _HeadwayOutputTuple(self, headway): | |
return (self.trip_id, | |
FormatSecondsSinceMidnight(headway[0]), | |
FormatSecondsSinceMidnight(headway[1]), | |
unicode(headway[2])) | |
def GetHeadwayPeriodOutputTuples(self): | |
tuples = [] | |
for headway in self._headways: | |
tuples.append(self._HeadwayOutputTuple(headway)) | |
return tuples | |
def GetHeadwayPeriodTuples(self): | |
return self._headways | |
def __getattr__(self, name): | |
if name == 'service_period': | |
assert self._schedule, "Must be in a schedule to get service_period" | |
return self._schedule.GetServicePeriod(self.service_id) | |
elif name == 'pattern_id': | |
if '_pattern_id' not in self.__dict__: | |
self.__dict__['_pattern_id'] = hash(self.GetPattern()) | |
return self.__dict__['_pattern_id'] | |
else: | |
return GenericGTFSObject.__getattr__(self, name) | |
def Validate(self, problems, validate_children=True): | |
"""Validate attributes of this object. | |
Check that this object has all required values set to a valid value without | |
reference to the rest of the schedule. If the _schedule attribute is set | |
then check that references such as route_id and service_id are correct. | |
Args: | |
problems: A ProblemReporter object | |
      validate_children: if True and the _schedule attribute is set then call
ValidateChildren | |
""" | |
if IsEmpty(self.route_id): | |
problems.MissingValue('route_id') | |
if 'service_period' in self.__dict__: | |
# Some tests assign to the service_period attribute. Patch up self before | |
# proceeding with validation. See also comment in Trip.__init__. | |
self.service_id = self.__dict__['service_period'].service_id | |
del self.service_period | |
if IsEmpty(self.service_id): | |
problems.MissingValue('service_id') | |
if IsEmpty(self.trip_id): | |
problems.MissingValue('trip_id') | |
if hasattr(self, 'direction_id') and (not IsEmpty(self.direction_id)) and \ | |
(self.direction_id != '0') and (self.direction_id != '1'): | |
problems.InvalidValue('direction_id', self.direction_id, | |
'direction_id must be "0" or "1"') | |
if self._schedule: | |
if self.shape_id and self.shape_id not in self._schedule._shapes: | |
problems.InvalidValue('shape_id', self.shape_id) | |
if self.route_id and self.route_id not in self._schedule.routes: | |
problems.InvalidValue('route_id', self.route_id) | |
if (self.service_id and | |
self.service_id not in self._schedule.service_periods): | |
problems.InvalidValue('service_id', self.service_id) | |
if validate_children: | |
self.ValidateChildren(problems) | |
def ValidateChildren(self, problems): | |
"""Validate StopTimes and headways of this trip.""" | |
assert self._schedule, "Trip must be in a schedule to ValidateChildren" | |
# TODO: validate distance values in stop times (if applicable) | |
cursor = self._schedule._connection.cursor() | |
cursor.execute("SELECT COUNT(stop_sequence) AS a FROM stop_times " | |
"WHERE trip_id=? GROUP BY stop_sequence HAVING a > 1", | |
(self.trip_id,)) | |
for row in cursor: | |
problems.InvalidValue('stop_sequence', row[0], | |
'Duplicate stop_sequence in trip_id %s' % | |
self.trip_id) | |
stoptimes = self.GetStopTimes(problems) | |
if stoptimes: | |
if stoptimes[0].arrival_time is None and stoptimes[0].departure_time is None: | |
problems.OtherProblem( | |
          'No time for start of trip_id "%s"' % (self.trip_id))
if stoptimes[-1].arrival_time is None and stoptimes[-1].departure_time is None: | |
problems.OtherProblem( | |
          'No time for end of trip_id "%s"' % (self.trip_id))
# Sorts the stoptimes by sequence and then checks that the arrival time | |
# for each time point is after the departure time of the previous. | |
stoptimes.sort(key=lambda x: x.stop_sequence) | |
prev_departure = 0 | |
prev_stop = None | |
prev_distance = None | |
try: | |
route_type = self._schedule.GetRoute(self.route_id).route_type | |
max_speed = Route._ROUTE_TYPES[route_type]['max_speed'] | |
except KeyError, e: | |
# If route_type cannot be found, assume it is 0 (Tram) for checking | |
# speeds between stops. | |
max_speed = Route._ROUTE_TYPES[0]['max_speed'] | |
for timepoint in stoptimes: | |
        # shape_dist_traveled should be a nonnegative float, and in Python 2
        # any number compares greater than None, so the first valid value
        # passes the check against prev_distance below.
distance = timepoint.shape_dist_traveled | |
if distance is not None: | |
if distance > prev_distance and distance >= 0: | |
prev_distance = distance | |
else: | |
if distance == prev_distance: | |
type = TYPE_WARNING | |
else: | |
type = TYPE_ERROR | |
problems.InvalidValue('stoptimes.shape_dist_traveled', distance, | |
'For the trip %s the stop %s has shape_dist_traveled=%s, ' | |
'which should be larger than the previous ones. In this ' | |
'case, the previous distance was %s.' % | |
(self.trip_id, timepoint.stop_id, distance, prev_distance), | |
type=type) | |
if timepoint.arrival_secs is not None: | |
self._CheckSpeed(prev_stop, timepoint.stop, prev_departure, | |
timepoint.arrival_secs, max_speed, problems) | |
if timepoint.arrival_secs >= prev_departure: | |
prev_departure = timepoint.departure_secs | |
prev_stop = timepoint.stop | |
else: | |
            problems.OtherProblem('Time travel detected! Arrival time '
'is before previous departure ' | |
'at sequence number %s in trip %s' % | |
(timepoint.stop_sequence, self.trip_id)) | |
if self.shape_id and self.shape_id in self._schedule._shapes: | |
shape = self._schedule.GetShape(self.shape_id) | |
max_shape_dist = shape.max_distance | |
st = stoptimes[-1] | |
if (st.shape_dist_traveled and | |
st.shape_dist_traveled > max_shape_dist): | |
problems.OtherProblem( | |
'In stop_times.txt, the stop with trip_id=%s and ' | |
'stop_sequence=%d has shape_dist_traveled=%f, which is larger ' | |
'than the max shape_dist_traveled=%f of the corresponding ' | |
'shape (shape_id=%s)' % | |
(self.trip_id, st.stop_sequence, st.shape_dist_traveled, | |
max_shape_dist, self.shape_id), type=TYPE_WARNING) | |
        # shape_dist_traveled values in stop_times are meaningful only when
        # the shape's max_shape_dist is greater than 0.
if max_shape_dist > 0: | |
for st in stoptimes: | |
if st.shape_dist_traveled is None: | |
continue | |
pt = shape.GetPointWithDistanceTraveled(st.shape_dist_traveled) | |
if pt: | |
stop = self._schedule.GetStop(st.stop_id) | |
distance = ApproximateDistance(stop.stop_lat, stop.stop_lon, | |
pt[0], pt[1]) | |
if distance > MAX_DISTANCE_FROM_STOP_TO_SHAPE: | |
problems.StopTooFarFromShapeWithDistTraveled( | |
self.trip_id, stop.stop_name, stop.stop_id, pt[2], | |
self.shape_id, distance, MAX_DISTANCE_FROM_STOP_TO_SHAPE) | |
# O(n^2), but we don't anticipate many headway periods per trip | |
for headway_index, headway in enumerate(self._headways[0:-1]): | |
for other in self._headways[headway_index + 1:]: | |
if (other[0] < headway[1]) and (other[1] > headway[0]): | |
problems.OtherProblem('Trip contains overlapping headway periods ' | |
'%s and %s' % | |
(self._HeadwayOutputTuple(headway), | |
self._HeadwayOutputTuple(other))) | |
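# The overlap test above treats each (start, end, headway_secs) triple as an
# interval that overlaps another when each starts before the other ends. A
# self-contained sketch of the same O(n^2) pairwise scan (helper names are
# illustrative):

```python
def periods_overlap(a, b):
    # (start, end, headway_secs) tuples overlap when each starts before
    # the other ends -- the same test used in ValidateChildren.
    return a[0] < b[1] and b[0] < a[1]

def find_overlapping_periods(headways):
    # O(n^2) pairwise scan, as in the original; fine for the small number
    # of headway periods expected per trip.
    overlaps = []
    for i, headway in enumerate(headways[:-1]):
        for other in headways[i + 1:]:
            if periods_overlap(headway, other):
                overlaps.append((headway, other))
    return overlaps
```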
def _CheckSpeed(self, prev_stop, next_stop, depart_time, | |
arrive_time, max_speed, problems): | |
# Checks that the speed between two stops is not faster than max_speed | |
    if prev_stop is not None:
try: | |
time_between_stops = arrive_time - depart_time | |
except TypeError: | |
return | |
try: | |
dist_between_stops = \ | |
ApproximateDistanceBetweenStops(next_stop, prev_stop) | |
except TypeError, e: | |
return | |
if time_between_stops == 0: | |
        # HASTUS makes it hard to output GTFS with times to the nearest second;
        # it rounds times to the nearest minute. Therefore stop_times at the
        # same time ending in :00 are fairly common. Pairs of times that differ
        # by no more than 30 seconds have not caused a problem. See
        # http://code.google.com/p/googletransitdatafeed/issues/detail?id=193
        # Show a warning if times are not rounded to the nearest minute or the
        # distance implies more than max_speed sustained for one minute.
if depart_time % 60 != 0 or dist_between_stops / 1000 * 60 > max_speed: | |
problems.TooFastTravel(self.trip_id, | |
prev_stop.stop_name, | |
next_stop.stop_name, | |
dist_between_stops, | |
time_between_stops, | |
speed=None, | |
type=TYPE_WARNING) | |
return | |
# This needs floating point division for precision. | |
speed_between_stops = ((float(dist_between_stops) / 1000) / | |
(float(time_between_stops) / 3600)) | |
if speed_between_stops > max_speed: | |
problems.TooFastTravel(self.trip_id, | |
prev_stop.stop_name, | |
next_stop.stop_name, | |
dist_between_stops, | |
time_between_stops, | |
speed_between_stops, | |
type=TYPE_WARNING) | |
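# The speed formula in _CheckSpeed reduces to meters and seconds in, km/h
# out. A standalone sketch (the helper name is illustrative):

```python
def speed_kmh(dist_meters, time_secs):
    # Floating point division for precision, as in _CheckSpeed:
    # kilometers traveled divided by hours elapsed.
    return (float(dist_meters) / 1000) / (float(time_secs) / 3600)
```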
# TODO: move these into a separate file | |
class ISO4217(object): | |
"""Represents the set of currencies recognized by the ISO-4217 spec.""" | |
codes = { # map of alpha code to numerical code | |
'AED': 784, 'AFN': 971, 'ALL': 8, 'AMD': 51, 'ANG': 532, 'AOA': 973, | |
'ARS': 32, 'AUD': 36, 'AWG': 533, 'AZN': 944, 'BAM': 977, 'BBD': 52, | |
'BDT': 50, 'BGN': 975, 'BHD': 48, 'BIF': 108, 'BMD': 60, 'BND': 96, | |
'BOB': 68, 'BOV': 984, 'BRL': 986, 'BSD': 44, 'BTN': 64, 'BWP': 72, | |
'BYR': 974, 'BZD': 84, 'CAD': 124, 'CDF': 976, 'CHE': 947, 'CHF': 756, | |
'CHW': 948, 'CLF': 990, 'CLP': 152, 'CNY': 156, 'COP': 170, 'COU': 970, | |
'CRC': 188, 'CUP': 192, 'CVE': 132, 'CYP': 196, 'CZK': 203, 'DJF': 262, | |
'DKK': 208, 'DOP': 214, 'DZD': 12, 'EEK': 233, 'EGP': 818, 'ERN': 232, | |
'ETB': 230, 'EUR': 978, 'FJD': 242, 'FKP': 238, 'GBP': 826, 'GEL': 981, | |
'GHC': 288, 'GIP': 292, 'GMD': 270, 'GNF': 324, 'GTQ': 320, 'GYD': 328, | |
'HKD': 344, 'HNL': 340, 'HRK': 191, 'HTG': 332, 'HUF': 348, 'IDR': 360, | |
'ILS': 376, 'INR': 356, 'IQD': 368, 'IRR': 364, 'ISK': 352, 'JMD': 388, | |
'JOD': 400, 'JPY': 392, 'KES': 404, 'KGS': 417, 'KHR': 116, 'KMF': 174, | |
'KPW': 408, 'KRW': 410, 'KWD': 414, 'KYD': 136, 'KZT': 398, 'LAK': 418, | |
'LBP': 422, 'LKR': 144, 'LRD': 430, 'LSL': 426, 'LTL': 440, 'LVL': 428, | |
'LYD': 434, 'MAD': 504, 'MDL': 498, 'MGA': 969, 'MKD': 807, 'MMK': 104, | |
'MNT': 496, 'MOP': 446, 'MRO': 478, 'MTL': 470, 'MUR': 480, 'MVR': 462, | |
'MWK': 454, 'MXN': 484, 'MXV': 979, 'MYR': 458, 'MZN': 943, 'NAD': 516, | |
'NGN': 566, 'NIO': 558, 'NOK': 578, 'NPR': 524, 'NZD': 554, 'OMR': 512, | |
'PAB': 590, 'PEN': 604, 'PGK': 598, 'PHP': 608, 'PKR': 586, 'PLN': 985, | |
'PYG': 600, 'QAR': 634, 'ROL': 642, 'RON': 946, 'RSD': 941, 'RUB': 643, | |
'RWF': 646, 'SAR': 682, 'SBD': 90, 'SCR': 690, 'SDD': 736, 'SDG': 938, | |
'SEK': 752, 'SGD': 702, 'SHP': 654, 'SKK': 703, 'SLL': 694, 'SOS': 706, | |
'SRD': 968, 'STD': 678, 'SYP': 760, 'SZL': 748, 'THB': 764, 'TJS': 972, | |
'TMM': 795, 'TND': 788, 'TOP': 776, 'TRY': 949, 'TTD': 780, 'TWD': 901, | |
'TZS': 834, 'UAH': 980, 'UGX': 800, 'USD': 840, 'USN': 997, 'USS': 998, | |
'UYU': 858, 'UZS': 860, 'VEB': 862, 'VND': 704, 'VUV': 548, 'WST': 882, | |
'XAF': 950, 'XAG': 961, 'XAU': 959, 'XBA': 955, 'XBB': 956, 'XBC': 957, | |
'XBD': 958, 'XCD': 951, 'XDR': 960, 'XFO': None, 'XFU': None, 'XOF': 952, | |
'XPD': 964, 'XPF': 953, 'XPT': 962, 'XTS': 963, 'XXX': 999, 'YER': 886, | |
'ZAR': 710, 'ZMK': 894, 'ZWD': 716, | |
} | |
class Fare(object): | |
"""Represents a fare type.""" | |
_REQUIRED_FIELD_NAMES = ['fare_id', 'price', 'currency_type', | |
'payment_method', 'transfers'] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + ['transfer_duration'] | |
def __init__(self, | |
fare_id=None, price=None, currency_type=None, | |
payment_method=None, transfers=None, transfer_duration=None, | |
field_list=None): | |
self.rules = [] | |
(self.fare_id, self.price, self.currency_type, self.payment_method, | |
self.transfers, self.transfer_duration) = \ | |
(fare_id, price, currency_type, payment_method, | |
transfers, transfer_duration) | |
if field_list: | |
(self.fare_id, self.price, self.currency_type, self.payment_method, | |
self.transfers, self.transfer_duration) = field_list | |
try: | |
self.price = float(self.price) | |
except (TypeError, ValueError): | |
pass | |
try: | |
self.payment_method = int(self.payment_method) | |
except (TypeError, ValueError): | |
pass | |
    if self.transfers is None or self.transfers == "":
self.transfers = None | |
else: | |
try: | |
self.transfers = int(self.transfers) | |
except (TypeError, ValueError): | |
pass | |
    if self.transfer_duration is None or self.transfer_duration == "":
self.transfer_duration = None | |
else: | |
try: | |
self.transfer_duration = int(self.transfer_duration) | |
except (TypeError, ValueError): | |
pass | |
def GetFareRuleList(self): | |
return self.rules | |
def ClearFareRules(self): | |
self.rules = [] | |
def GetFieldValuesTuple(self): | |
return [getattr(self, fn) for fn in Fare._FIELD_NAMES] | |
def __getitem__(self, name): | |
return getattr(self, name) | |
def __eq__(self, other): | |
if not other: | |
return False | |
if id(self) == id(other): | |
return True | |
if self.GetFieldValuesTuple() != other.GetFieldValuesTuple(): | |
return False | |
self_rules = [r.GetFieldValuesTuple() for r in self.GetFareRuleList()] | |
self_rules.sort() | |
other_rules = [r.GetFieldValuesTuple() for r in other.GetFareRuleList()] | |
other_rules.sort() | |
return self_rules == other_rules | |
def __ne__(self, other): | |
return not self.__eq__(other) | |
def Validate(self, problems=default_problem_reporter): | |
if IsEmpty(self.fare_id): | |
problems.MissingValue("fare_id") | |
    if self.price is None:
problems.MissingValue("price") | |
    elif not isinstance(self.price, (float, int)):
problems.InvalidValue("price", self.price) | |
elif self.price < 0: | |
problems.InvalidValue("price", self.price) | |
if IsEmpty(self.currency_type): | |
problems.MissingValue("currency_type") | |
elif self.currency_type not in ISO4217.codes: | |
problems.InvalidValue("currency_type", self.currency_type) | |
    if self.payment_method == "" or self.payment_method is None:
problems.MissingValue("payment_method") | |
elif (not isinstance(self.payment_method, int) or | |
self.payment_method not in range(0, 2)): | |
problems.InvalidValue("payment_method", self.payment_method) | |
    if not ((self.transfers is None) or
(isinstance(self.transfers, int) and | |
self.transfers in range(0, 3))): | |
problems.InvalidValue("transfers", self.transfers) | |
    if ((self.transfer_duration is not None) and
not isinstance(self.transfer_duration, int)): | |
problems.InvalidValue("transfer_duration", self.transfer_duration) | |
if self.transfer_duration and (self.transfer_duration < 0): | |
problems.InvalidValue("transfer_duration", self.transfer_duration) | |
if (self.transfer_duration and (self.transfer_duration > 0) and | |
self.transfers == 0): | |
problems.InvalidValue("transfer_duration", self.transfer_duration, | |
"can't have a nonzero transfer_duration for " | |
"a fare that doesn't allow transfers!") | |
class FareRule(object): | |
"""This class represents a rule that determines which itineraries a | |
fare rule applies to.""" | |
_REQUIRED_FIELD_NAMES = ['fare_id'] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + ['route_id', | |
'origin_id', 'destination_id', | |
'contains_id'] | |
def __init__(self, fare_id=None, route_id=None, | |
origin_id=None, destination_id=None, contains_id=None, | |
field_list=None): | |
(self.fare_id, self.route_id, self.origin_id, self.destination_id, | |
self.contains_id) = \ | |
(fare_id, route_id, origin_id, destination_id, contains_id) | |
if field_list: | |
(self.fare_id, self.route_id, self.origin_id, self.destination_id, | |
self.contains_id) = field_list | |
# canonicalize non-content values as None | |
if not self.route_id: | |
self.route_id = None | |
if not self.origin_id: | |
self.origin_id = None | |
if not self.destination_id: | |
self.destination_id = None | |
if not self.contains_id: | |
self.contains_id = None | |
def GetFieldValuesTuple(self): | |
return [getattr(self, fn) for fn in FareRule._FIELD_NAMES] | |
def __getitem__(self, name): | |
return getattr(self, name) | |
def __eq__(self, other): | |
if not other: | |
return False | |
if id(self) == id(other): | |
return True | |
return self.GetFieldValuesTuple() == other.GetFieldValuesTuple() | |
def __ne__(self, other): | |
return not self.__eq__(other) | |
class Shape(object): | |
"""This class represents a geographic shape that corresponds to the route | |
taken by one or more Trips.""" | |
_REQUIRED_FIELD_NAMES = ['shape_id', 'shape_pt_lat', 'shape_pt_lon', | |
'shape_pt_sequence'] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + ['shape_dist_traveled'] | |
def __init__(self, shape_id): | |
    # List of shape point tuples (lat, lng, shape_dist_traveled), where lat
    # and lng give the location of the shape point and shape_dist_traveled is
    # an increasing metric representing the distance traveled along the shape.
self.points = [] | |
# An ID that uniquely identifies a shape in the dataset. | |
self.shape_id = shape_id | |
# The max shape_dist_traveled of shape points in this shape. | |
self.max_distance = 0 | |
# List of shape_dist_traveled of each shape point. | |
self.distance = [] | |
def AddPoint(self, lat, lon, distance=None, | |
problems=default_problem_reporter): | |
try: | |
lat = float(lat) | |
if abs(lat) > 90.0: | |
problems.InvalidValue('shape_pt_lat', lat) | |
return | |
except (TypeError, ValueError): | |
problems.InvalidValue('shape_pt_lat', lat) | |
return | |
try: | |
lon = float(lon) | |
if abs(lon) > 180.0: | |
problems.InvalidValue('shape_pt_lon', lon) | |
return | |
except (TypeError, ValueError): | |
problems.InvalidValue('shape_pt_lon', lon) | |
return | |
if (abs(lat) < 1.0) and (abs(lon) < 1.0): | |
problems.InvalidValue('shape_pt_lat', lat, | |
'Point location too close to 0, 0, which means ' | |
'that it\'s probably an incorrect location.', | |
type=TYPE_WARNING) | |
return | |
if distance == '': # canonicalizing empty string to None for comparison | |
distance = None | |
    if distance is not None:
try: | |
distance = float(distance) | |
if (distance < self.max_distance and not | |
(len(self.points) == 0 and distance == 0)): # first one can be 0 | |
problems.InvalidValue('shape_dist_traveled', distance, | |
'Each subsequent point in a shape should ' | |
'have a distance value that\'s at least as ' | |
'large as the previous ones. In this case, ' | |
'the previous distance was %f.' % | |
self.max_distance) | |
return | |
else: | |
self.max_distance = distance | |
self.distance.append(distance) | |
except (TypeError, ValueError): | |
problems.InvalidValue('shape_dist_traveled', distance, | |
'This value should be a positive number.') | |
return | |
self.points.append((lat, lon, distance)) | |
def ClearPoints(self): | |
self.points = [] | |
def __eq__(self, other): | |
if not other: | |
return False | |
if id(self) == id(other): | |
return True | |
return self.points == other.points | |
def __ne__(self, other): | |
return not self.__eq__(other) | |
def __repr__(self): | |
return "<Shape %s>" % self.__dict__ | |
def Validate(self, problems=default_problem_reporter): | |
if IsEmpty(self.shape_id): | |
problems.MissingValue('shape_id') | |
if not self.points: | |
problems.OtherProblem('The shape with shape_id "%s" contains no points.' % | |
self.shape_id, type=TYPE_WARNING) | |
def GetPointWithDistanceTraveled(self, shape_dist_traveled): | |
"""Returns a point on the shape polyline with the input shape_dist_traveled. | |
Args: | |
shape_dist_traveled: The input shape_dist_traveled. | |
Returns: | |
      The shape point as a tuple (lat, lng, shape_dist_traveled), where lat
      and lng give the location of the shape point and shape_dist_traveled is
      an increasing metric representing the distance traveled along the shape.
      Returns None if the shape data contains an error.
""" | |
if not self.distance: | |
return None | |
if shape_dist_traveled <= self.distance[0]: | |
return self.points[0] | |
if shape_dist_traveled >= self.distance[-1]: | |
return self.points[-1] | |
index = bisect.bisect(self.distance, shape_dist_traveled) | |
(lat0, lng0, dist0) = self.points[index - 1] | |
(lat1, lng1, dist1) = self.points[index] | |
    # Interpolate when shape_dist_traveled does not equal the distance of
    # any point in this shape segment.
# (lat0, lng0) (lat, lng) (lat1, lng1) | |
# -----|--------------------|---------------------|------ | |
# dist0 shape_dist_traveled dist1 | |
# \------- ca --------/ \-------- bc -------/ | |
# \----------------- ba ------------------/ | |
ca = shape_dist_traveled - dist0 | |
bc = dist1 - shape_dist_traveled | |
ba = bc + ca | |
if ba == 0: | |
      # This only happens when there is a data error in the shape that should
      # have been caught earlier. Check here to avoid a crash.
return None | |
    # This won't work crossing longitude 180 and is only an approximation
    # that works well over short distances.
lat = (lat1 * ca + lat0 * bc) / ba | |
lng = (lng1 * ca + lng0 * bc) / ba | |
return (lat, lng, shape_dist_traveled) | |
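# GetPointWithDistanceTraveled clamps to the shape's endpoints and otherwise
# interpolates linearly between the two surrounding points, weighting each by
# its distance along the segment. A standalone sketch over plain lists
# (function and parameter names are illustrative):

```python
import bisect

def point_at_distance(points, distances, d):
    # points: [(lat, lng, dist), ...]; distances: the same dist values,
    # sorted ascending. Mirrors the clamping and interpolation above.
    if not distances:
        return None
    if d <= distances[0]:
        return points[0]
    if d >= distances[-1]:
        return points[-1]
    index = bisect.bisect(distances, d)
    lat0, lng0, dist0 = points[index - 1]
    lat1, lng1, dist1 = points[index]
    ca, bc = d - dist0, dist1 - d  # distances to the segment endpoints
    ba = ca + bc
    if ba == 0:
        # Duplicate distance values: bad data; avoid dividing by zero.
        return None
    return ((lat1 * ca + lat0 * bc) / ba,
            (lng1 * ca + lng0 * bc) / ba,
            d)
```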
class ISO639(object): | |
# Set of all the 2-letter ISO 639-1 language codes. | |
codes_2letter = set([ | |
'aa', 'ab', 'ae', 'af', 'ak', 'am', 'an', 'ar', 'as', 'av', 'ay', 'az', | |
'ba', 'be', 'bg', 'bh', 'bi', 'bm', 'bn', 'bo', 'br', 'bs', 'ca', 'ce', | |
'ch', 'co', 'cr', 'cs', 'cu', 'cv', 'cy', 'da', 'de', 'dv', 'dz', 'ee', | |
'el', 'en', 'eo', 'es', 'et', 'eu', 'fa', 'ff', 'fi', 'fj', 'fo', 'fr', | |
'fy', 'ga', 'gd', 'gl', 'gn', 'gu', 'gv', 'ha', 'he', 'hi', 'ho', 'hr', | |
'ht', 'hu', 'hy', 'hz', 'ia', 'id', 'ie', 'ig', 'ii', 'ik', 'io', 'is', | |
'it', 'iu', 'ja', 'jv', 'ka', 'kg', 'ki', 'kj', 'kk', 'kl', 'km', 'kn', | |
'ko', 'kr', 'ks', 'ku', 'kv', 'kw', 'ky', 'la', 'lb', 'lg', 'li', 'ln', | |
'lo', 'lt', 'lu', 'lv', 'mg', 'mh', 'mi', 'mk', 'ml', 'mn', 'mo', 'mr', | |
'ms', 'mt', 'my', 'na', 'nb', 'nd', 'ne', 'ng', 'nl', 'nn', 'no', 'nr', | |
'nv', 'ny', 'oc', 'oj', 'om', 'or', 'os', 'pa', 'pi', 'pl', 'ps', 'pt', | |
'qu', 'rm', 'rn', 'ro', 'ru', 'rw', 'sa', 'sc', 'sd', 'se', 'sg', 'si', | |
'sk', 'sl', 'sm', 'sn', 'so', 'sq', 'sr', 'ss', 'st', 'su', 'sv', 'sw', | |
'ta', 'te', 'tg', 'th', 'ti', 'tk', 'tl', 'tn', 'to', 'tr', 'ts', 'tt', | |
'tw', 'ty', 'ug', 'uk', 'ur', 'uz', 've', 'vi', 'vo', 'wa', 'wo', 'xh', | |
'yi', 'yo', 'za', 'zh', 'zu', | |
]) | |
class Agency(GenericGTFSObject): | |
"""Represents an agency in a schedule. | |
Callers may assign arbitrary values to instance attributes. __init__ makes no | |
attempt at validating the attributes. Call Validate() to check that | |
attributes are valid and the agency object is consistent with itself. | |
Attributes: | |
All attributes are strings. | |
""" | |
_REQUIRED_FIELD_NAMES = ['agency_name', 'agency_url', 'agency_timezone'] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + ['agency_id', 'agency_lang', | |
'agency_phone'] | |
_TABLE_NAME = 'agency' | |
def __init__(self, name=None, url=None, timezone=None, id=None, | |
field_dict=None, lang=None, **kwargs): | |
"""Initialize a new Agency object. | |
Args: | |
field_dict: A dictionary mapping attribute name to unicode string | |
name: a string, ignored when field_dict is present | |
url: a string, ignored when field_dict is present | |
timezone: a string, ignored when field_dict is present | |
id: a string, ignored when field_dict is present | |
kwargs: arbitrary keyword arguments may be used to add attributes to the | |
new object, ignored when field_dict is present | |
""" | |
self._schedule = None | |
if not field_dict: | |
if name: | |
kwargs['agency_name'] = name | |
if url: | |
kwargs['agency_url'] = url | |
if timezone: | |
kwargs['agency_timezone'] = timezone | |
if id: | |
kwargs['agency_id'] = id | |
if lang: | |
kwargs['agency_lang'] = lang | |
field_dict = kwargs | |
self.__dict__.update(field_dict) | |
def Validate(self, problems=default_problem_reporter): | |
"""Validate attribute values and this object's internal consistency. | |
Returns: | |
True iff all validation checks passed. | |
""" | |
found_problem = False | |
for required in Agency._REQUIRED_FIELD_NAMES: | |
if IsEmpty(getattr(self, required, None)): | |
problems.MissingValue(required) | |
found_problem = True | |
if self.agency_url and not IsValidURL(self.agency_url): | |
problems.InvalidValue('agency_url', self.agency_url) | |
found_problem = True | |
if (not IsEmpty(self.agency_lang) and | |
self.agency_lang.lower() not in ISO639.codes_2letter): | |
problems.InvalidValue('agency_lang', self.agency_lang) | |
found_problem = True | |
try: | |
import pytz | |
if self.agency_timezone not in pytz.common_timezones: | |
problems.InvalidValue( | |
'agency_timezone', | |
self.agency_timezone, | |
'"%s" is not a common timezone name according to pytz version %s' % | |
(self.agency_timezone, pytz.VERSION)) | |
found_problem = True | |
except ImportError: # no pytz | |
print ("Timezone not checked " | |
"(install pytz package for timezone validation)") | |
return not found_problem | |
class Transfer(object): | |
"""Represents a transfer in a schedule""" | |
_REQUIRED_FIELD_NAMES = ['from_stop_id', 'to_stop_id', 'transfer_type'] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + ['min_transfer_time'] | |
def __init__(self, schedule=None, from_stop_id=None, to_stop_id=None, transfer_type=None, | |
min_transfer_time=None, field_dict=None): | |
if schedule is not None: | |
self._schedule = weakref.proxy(schedule) # See weakref comment at top | |
else: | |
self._schedule = None | |
if field_dict: | |
self.__dict__.update(field_dict) | |
else: | |
self.from_stop_id = from_stop_id | |
self.to_stop_id = to_stop_id | |
self.transfer_type = transfer_type | |
self.min_transfer_time = min_transfer_time | |
if getattr(self, 'transfer_type', None) in ("", None): | |
      # Use the default, recommended transfer type if the attribute is unset
      # or blank
self.transfer_type = 0 | |
else: | |
try: | |
self.transfer_type = NonNegIntStringToInt(self.transfer_type) | |
except (TypeError, ValueError): | |
pass | |
if hasattr(self, 'min_transfer_time'): | |
try: | |
self.min_transfer_time = NonNegIntStringToInt(self.min_transfer_time) | |
except (TypeError, ValueError): | |
pass | |
else: | |
self.min_transfer_time = None | |
def GetFieldValuesTuple(self): | |
return [getattr(self, fn) for fn in Transfer._FIELD_NAMES] | |
def __getitem__(self, name): | |
return getattr(self, name) | |
def __eq__(self, other): | |
if not other: | |
return False | |
if id(self) == id(other): | |
return True | |
return self.GetFieldValuesTuple() == other.GetFieldValuesTuple() | |
def __ne__(self, other): | |
return not self.__eq__(other) | |
def __repr__(self): | |
return "<Transfer %s>" % self.__dict__ | |
def Validate(self, problems=default_problem_reporter): | |
if IsEmpty(self.from_stop_id): | |
problems.MissingValue('from_stop_id') | |
elif self._schedule: | |
      if self.from_stop_id not in self._schedule.stops:
problems.InvalidValue('from_stop_id', self.from_stop_id) | |
if IsEmpty(self.to_stop_id): | |
problems.MissingValue('to_stop_id') | |
elif self._schedule: | |
      if self.to_stop_id not in self._schedule.stops:
problems.InvalidValue('to_stop_id', self.to_stop_id) | |
if not IsEmpty(self.transfer_type): | |
if (not isinstance(self.transfer_type, int)) or \ | |
(self.transfer_type not in range(0, 4)): | |
problems.InvalidValue('transfer_type', self.transfer_type) | |
if not IsEmpty(self.min_transfer_time): | |
if (not isinstance(self.min_transfer_time, int)) or \ | |
self.min_transfer_time < 0: | |
problems.InvalidValue('min_transfer_time', self.min_transfer_time) | |
class ServicePeriod(object): | |
"""Represents a service, which identifies a set of dates when one or more | |
trips operate.""" | |
_DAYS_OF_WEEK = [ | |
'monday', 'tuesday', 'wednesday', 'thursday', 'friday', | |
'saturday', 'sunday' | |
] | |
_FIELD_NAMES_REQUIRED = [ | |
'service_id', 'start_date', 'end_date' | |
] + _DAYS_OF_WEEK | |
_FIELD_NAMES = _FIELD_NAMES_REQUIRED # no optional fields in this one | |
_FIELD_NAMES_CALENDAR_DATES = ['service_id', 'date', 'exception_type'] | |
def __init__(self, id=None, field_list=None): | |
self.original_day_values = [] | |
if field_list: | |
self.service_id = field_list[self._FIELD_NAMES.index('service_id')] | |
self.day_of_week = [False] * len(self._DAYS_OF_WEEK) | |
for day in self._DAYS_OF_WEEK: | |
value = field_list[self._FIELD_NAMES.index(day)] or '' # can be None | |
self.original_day_values += [value.strip()] | |
self.day_of_week[self._DAYS_OF_WEEK.index(day)] = (value == u'1') | |
self.start_date = field_list[self._FIELD_NAMES.index('start_date')] | |
self.end_date = field_list[self._FIELD_NAMES.index('end_date')] | |
else: | |
self.service_id = id | |
self.day_of_week = [False] * 7 | |
self.start_date = None | |
self.end_date = None | |
self.date_exceptions = {} # Map from 'YYYYMMDD' to 1 (add) or 2 (remove) | |
def _IsValidDate(self, date): | |
    if re.match(r'^\d{8}$', date) is None:
return False | |
try: | |
time.strptime(date, "%Y%m%d") | |
return True | |
except ValueError: | |
return False | |
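# _IsValidDate combines a digits-only regex with time.strptime. A standalone
# equivalent (note that time.strptime may not reject every impossible
# day-of-month, so this is a sanity check rather than full calendar
# validation):

```python
import re
import time

def is_valid_gtfs_date(date):
    # Eight digits in YYYYMMDD form, and parseable by time.strptime.
    if re.match(r'^\d{8}$', date) is None:
        return False
    try:
        time.strptime(date, '%Y%m%d')
        return True
    except ValueError:
        return False
```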
def GetDateRange(self): | |
"""Return the range over which this ServicePeriod is valid. | |
The range includes exception dates that add service outside of | |
(start_date, end_date), but doesn't shrink the range if exception | |
dates take away service at the edges of the range. | |
Returns: | |
A tuple of "YYYYMMDD" strings, (start date, end date) or (None, None) if | |
no dates have been given. | |
""" | |
start = self.start_date | |
end = self.end_date | |
for date in self.date_exceptions: | |
if self.date_exceptions[date] == 2: | |
continue | |
if not start or (date < start): | |
start = date | |
if not end or (date > end): | |
end = date | |
if start is None: | |
start = end | |
elif end is None: | |
end = start | |
# If start and end are None we did a little harmless shuffling | |
return (start, end) | |
def GetCalendarFieldValuesTuple(self): | |
"""Return the tuple of calendar.txt values or None if this ServicePeriod | |
should not be in calendar.txt .""" | |
if self.start_date and self.end_date: | |
return [getattr(self, fn) for fn in ServicePeriod._FIELD_NAMES] | |
def GenerateCalendarDatesFieldValuesTuples(self): | |
"""Generates tuples of calendar_dates.txt values. Yield zero tuples if | |
this ServicePeriod should not be in calendar_dates.txt .""" | |
for date, exception_type in self.date_exceptions.items(): | |
yield (self.service_id, date, unicode(exception_type)) | |
def GetCalendarDatesFieldValuesTuples(self): | |
"""Return a list of date execeptions""" | |
result = [] | |
for date_tuple in self.GenerateCalendarDatesFieldValuesTuples(): | |
result.append(date_tuple) | |
result.sort() # helps with __eq__ | |
return result | |
def SetDateHasService(self, date, has_service=True, problems=None): | |
if date in self.date_exceptions and problems: | |
problems.DuplicateID(('service_id', 'date'), | |
(self.service_id, date), | |
type=TYPE_WARNING) | |
self.date_exceptions[date] = has_service and 1 or 2 | |
def ResetDateToNormalService(self, date): | |
if date in self.date_exceptions: | |
del self.date_exceptions[date] | |
def SetStartDate(self, start_date): | |
"""Set the first day of service as a string in YYYYMMDD format""" | |
self.start_date = start_date | |
def SetEndDate(self, end_date): | |
"""Set the last day of service as a string in YYYYMMDD format""" | |
self.end_date = end_date | |
def SetDayOfWeekHasService(self, dow, has_service=True): | |
"""Set service as running (or not) on a day of the week. By default the | |
service does not run on any days. | |
Args: | |
dow: 0 for Monday through 6 for Sunday | |
has_service: True if this service operates on dow, False if it does not. | |
Returns: | |
None | |
""" | |
assert 0 <= dow < 7 | |
self.day_of_week[dow] = has_service | |
def SetWeekdayService(self, has_service=True): | |
"""Set service as running (or not) on all of Monday through Friday.""" | |
for i in range(0, 5): | |
self.SetDayOfWeekHasService(i, has_service) | |
def SetWeekendService(self, has_service=True): | |
"""Set service as running (or not) on Saturday and Sunday.""" | |
self.SetDayOfWeekHasService(5, has_service) | |
self.SetDayOfWeekHasService(6, has_service) | |
def SetServiceId(self, service_id): | |
"""Set the service_id for this schedule. Generally the default will | |
suffice so you won't need to call this method.""" | |
self.service_id = service_id | |
def IsActiveOn(self, date, date_object=None): | |
"""Test if this service period is active on a date. | |
Args: | |
date: a string of form "YYYYMMDD" | |
date_object: a date object representing the same date as date. | |
This parameter is optional, and present only for performance | |
reasons. | |
If the caller constructs the date string from a date object | |
that date object can be passed directly, thus avoiding the | |
costly conversion from string to date object. | |
Returns: | |
True iff this service is active on date. | |
""" | |
if date in self.date_exceptions: | |
if self.date_exceptions[date] == 1: | |
return True | |
else: | |
return False | |
if (self.start_date and self.end_date and self.start_date <= date and | |
date <= self.end_date): | |
if date_object is None: | |
date_object = DateStringToDateObject(date) | |
return self.day_of_week[date_object.weekday()] | |
return False | |
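An illustrative sketch of the activity test above, outside the class: a date exception always wins; otherwise the date must fall inside [start, end] and on an enabled weekday. Names here are hypothetical stand-ins for the instance attributes used by IsActiveOn.

```python
import datetime

def is_active_on(date, start, end, day_of_week, date_exceptions):
    """date/start/end are "YYYYMMDD" strings; day_of_week is a list of
    seven booleans (Monday first); date_exceptions maps dates to 1 or 2."""
    if date in date_exceptions:
        # An explicit exception overrides the calendar entirely
        return date_exceptions[date] == 1
    if start and end and start <= date <= end:
        weekday = datetime.datetime.strptime(date, "%Y%m%d").weekday()
        return day_of_week[weekday]
    return False
```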
def ActiveDates(self): | |
"""Return dates this service period is active as a list of "YYYYMMDD".""" | |
(earliest, latest) = self.GetDateRange() | |
if earliest is None: | |
return [] | |
dates = [] | |
date_it = DateStringToDateObject(earliest) | |
date_end = DateStringToDateObject(latest) | |
delta = datetime.timedelta(days=1) | |
while date_it <= date_end: | |
date_it_string = date_it.strftime("%Y%m%d") | |
if self.IsActiveOn(date_it_string, date_it): | |
dates.append(date_it_string) | |
date_it = date_it + delta | |
return dates | |
def __getattr__(self, name): | |
try: | |
# Return 1 if value in day_of_week is True, 0 otherwise | |
return (self.day_of_week[ServicePeriod._DAYS_OF_WEEK.index(name)] | |
and 1 or 0) | |
except KeyError: | |
pass | |
except ValueError: # not a day of the week | |
pass | |
raise AttributeError(name) | |
def __getitem__(self, name): | |
return getattr(self, name) | |
def __eq__(self, other): | |
if not other: | |
return False | |
if id(self) == id(other): | |
return True | |
if (self.GetCalendarFieldValuesTuple() != | |
other.GetCalendarFieldValuesTuple()): | |
return False | |
if (self.GetCalendarDatesFieldValuesTuples() != | |
other.GetCalendarDatesFieldValuesTuples()): | |
return False | |
return True | |
def __ne__(self, other): | |
return not self.__eq__(other) | |
def Validate(self, problems=default_problem_reporter): | |
if IsEmpty(self.service_id): | |
problems.MissingValue('service_id') | |
# self.start_date/self.end_date is None in 3 cases: | |
# ServicePeriod created by loader and | |
# 1a) self.service_id wasn't in calendar.txt | |
# 1b) calendar.txt didn't have a start_date/end_date column | |
# ServicePeriod created directly and | |
# 2) start_date/end_date wasn't set | |
# In case 1a no problem is reported. In case 1b the missing required column | |
# generates an error in _ReadCSV so this method should not report another | |
# problem. There is no way to tell the difference between cases 1b and 2 | |
# so case 2 is ignored because making the feedvalidator pretty is more | |
important than perfect validation when an API user makes a mistake. | |
start_date = None | |
if self.start_date is not None: | |
if IsEmpty(self.start_date): | |
problems.MissingValue('start_date') | |
elif self._IsValidDate(self.start_date): | |
start_date = self.start_date | |
else: | |
problems.InvalidValue('start_date', self.start_date) | |
end_date = None | |
if self.end_date is not None: | |
if IsEmpty(self.end_date): | |
problems.MissingValue('end_date') | |
elif self._IsValidDate(self.end_date): | |
end_date = self.end_date | |
else: | |
problems.InvalidValue('end_date', self.end_date) | |
if start_date and end_date and end_date < start_date: | |
problems.InvalidValue('end_date', end_date, | |
'end_date of %s is earlier than ' | |
'start_date of "%s"' % | |
(end_date, start_date)) | |
if self.original_day_values: | |
index = 0 | |
for value in self.original_day_values: | |
column_name = self._DAYS_OF_WEEK[index] | |
if IsEmpty(value): | |
problems.MissingValue(column_name) | |
elif (value != u'0') and (value != u'1'): | |
problems.InvalidValue(column_name, value) | |
index += 1 | |
if (True not in self.day_of_week and | |
1 not in self.date_exceptions.values()): | |
problems.OtherProblem('Service period with service_id "%s" ' | |
'doesn\'t have service on any days ' | |
'of the week.' % self.service_id, | |
type=TYPE_WARNING) | |
for date in self.date_exceptions: | |
if not self._IsValidDate(date): | |
problems.InvalidValue('date', date) | |
class CsvUnicodeWriter: | |
""" | |
Create a wrapper around a csv writer object which can safely write unicode | |
values. Passes all arguments to csv.writer. | |
""" | |
def __init__(self, *args, **kwargs): | |
self.writer = csv.writer(*args, **kwargs) | |
def writerow(self, row): | |
"""Write row to the csv file. Any unicode strings in row are encoded as | |
utf-8.""" | |
encoded_row = [] | |
for s in row: | |
if isinstance(s, unicode): | |
encoded_row.append(s.encode("utf-8")) | |
else: | |
encoded_row.append(s) | |
try: | |
self.writer.writerow(encoded_row) | |
except Exception, e: | |
print 'error writing %s as %s' % (row, encoded_row) | |
raise e | |
def writerows(self, rows): | |
"""Write rows to the csv file. Any unicode strings in rows are encoded as | |
utf-8.""" | |
for row in rows: | |
self.writerow(row) | |
def __getattr__(self, name): | |
return getattr(self.writer, name) | |
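A small sketch of the delegation pattern CsvUnicodeWriter uses: wrap a csv writer, intercept the methods you care about, and let `__getattr__` forward everything else to the wrapped object. The LoggingWriter class is illustrative only; note that attributes resolved through `__getattr__` (like `writerows` here) bypass the wrapper's own logic.

```python
import csv
import io

class LoggingWriter:
    """Wraps a csv-writer-like object and counts rows written via writerow;
    unknown attributes fall through to the wrapped writer."""
    def __init__(self, writer):
        self._writer = writer
        self.rows_written = 0

    def writerow(self, row):
        self._writer.writerow(row)
        self.rows_written += 1

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails
        return getattr(self._writer, name)

buf = io.StringIO()
w = LoggingWriter(csv.writer(buf))
w.writerow(["a", "b"])        # counted by the wrapper
w.writerows([["c"], ["d"]])   # forwarded directly; not counted
```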
class Schedule: | |
"""Represents a Schedule, a collection of stops, routes, trips and | |
an agency. This is the main class for this module.""" | |
def __init__(self, problem_reporter=default_problem_reporter, | |
memory_db=True, check_duplicate_trips=False): | |
# Map from table name to list of columns present in this schedule | |
self._table_columns = {} | |
self._agencies = {} | |
self.stops = {} | |
self.routes = {} | |
self.trips = {} | |
self.service_periods = {} | |
self.fares = {} | |
self.fare_zones = {} # represents the set of all known fare zones | |
self._shapes = {} # shape_id to Shape | |
self._transfers = [] # list of transfers | |
self._default_service_period = None | |
self._default_agency = None | |
self.problem_reporter = problem_reporter | |
self._check_duplicate_trips = check_duplicate_trips | |
self.ConnectDb(memory_db) | |
def AddTableColumn(self, table, column): | |
"""Add column to table if it is not already there.""" | |
if column not in self._table_columns[table]: | |
self._table_columns[table].append(column) | |
def AddTableColumns(self, table, columns): | |
"""Add columns to table if they are not already there. | |
Args: | |
table: table name as a string | |
columns: an iterable of column names""" | |
table_columns = self._table_columns.setdefault(table, []) | |
for attr in columns: | |
if attr not in table_columns: | |
table_columns.append(attr) | |
def GetTableColumns(self, table): | |
"""Return list of columns in a table.""" | |
return self._table_columns[table] | |
def __del__(self): | |
if hasattr(self, '_temp_db_filename'): | |
os.remove(self._temp_db_filename) | |
def ConnectDb(self, memory_db): | |
if memory_db: | |
self._connection = sqlite.connect(":memory:") | |
else: | |
try: | |
self._temp_db_file = tempfile.NamedTemporaryFile() | |
self._connection = sqlite.connect(self._temp_db_file.name) | |
except sqlite.OperationalError: | |
# Windows won't let a file be opened twice. mkstemp does not remove the | |
# file when all handles to it are closed. | |
self._temp_db_file = None | |
(fd, self._temp_db_filename) = tempfile.mkstemp(".db") | |
os.close(fd) | |
self._connection = sqlite.connect(self._temp_db_filename) | |
cursor = self._connection.cursor() | |
cursor.execute("""CREATE TABLE stop_times ( | |
trip_id CHAR(50), | |
arrival_secs INTEGER, | |
departure_secs INTEGER, | |
stop_id CHAR(50), | |
stop_sequence INTEGER, | |
stop_headsign VARCHAR(100), | |
pickup_type INTEGER, | |
drop_off_type INTEGER, | |
shape_dist_traveled FLOAT);""") | |
cursor.execute("""CREATE INDEX trip_index ON stop_times (trip_id);""") | |
cursor.execute("""CREATE INDEX stop_index ON stop_times (stop_id);""") | |
def GetStopBoundingBox(self): | |
return (min(s.stop_lat for s in self.stops.values()), | |
min(s.stop_lon for s in self.stops.values()), | |
max(s.stop_lat for s in self.stops.values()), | |
max(s.stop_lon for s in self.stops.values()), | |
) | |
def AddAgency(self, name, url, timezone, agency_id=None): | |
"""Adds an agency to this schedule.""" | |
agency = Agency(name, url, timezone, agency_id) | |
self.AddAgencyObject(agency) | |
return agency | |
def AddAgencyObject(self, agency, problem_reporter=None, validate=True): | |
assert agency._schedule is None | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
if agency.agency_id in self._agencies: | |
problem_reporter.DuplicateID('agency_id', agency.agency_id) | |
return | |
self.AddTableColumns('agency', agency._ColumnNames()) | |
agency._schedule = weakref.proxy(self) | |
if validate: | |
agency.Validate(problem_reporter) | |
self._agencies[agency.agency_id] = agency | |
def GetAgency(self, agency_id): | |
"""Return Agency with agency_id or throw a KeyError""" | |
return self._agencies[agency_id] | |
def GetDefaultAgency(self): | |
"""Return the default Agency. If no default Agency has been set select the | |
default depending on how many Agency objects are in the Schedule. If there | |
are 0 make a new Agency the default, if there is 1 it becomes the default, | |
if there is more than 1 then return None. | |
""" | |
if not self._default_agency: | |
if len(self._agencies) == 0: | |
self.NewDefaultAgency() | |
elif len(self._agencies) == 1: | |
self._default_agency = self._agencies.values()[0] | |
return self._default_agency | |
def NewDefaultAgency(self, **kwargs): | |
"""Create a new Agency object and make it the default agency for this Schedule""" | |
agency = Agency(**kwargs) | |
if not agency.agency_id: | |
agency.agency_id = FindUniqueId(self._agencies) | |
self._default_agency = agency | |
self.SetDefaultAgency(agency, validate=False) # Blank agency won't validate | |
return agency | |
def SetDefaultAgency(self, agency, validate=True): | |
"""Make agency the default and add it to the schedule if not already added""" | |
assert isinstance(agency, Agency) | |
self._default_agency = agency | |
if agency.agency_id not in self._agencies: | |
self.AddAgencyObject(agency, validate=validate) | |
def GetAgencyList(self): | |
"""Returns the list of Agency objects known to this Schedule.""" | |
return self._agencies.values() | |
def GetServicePeriod(self, service_id): | |
"""Returns the ServicePeriod object with the given ID.""" | |
return self.service_periods[service_id] | |
def GetDefaultServicePeriod(self): | |
"""Return the default ServicePeriod. If no default ServicePeriod has been | |
set select the default depending on how many ServicePeriod objects are in | |
the Schedule. If there are 0 make a new ServicePeriod the default, if there | |
is 1 it becomes the default, if there is more than 1 then return None. | |
""" | |
if not self._default_service_period: | |
if len(self.service_periods) == 0: | |
self.NewDefaultServicePeriod() | |
elif len(self.service_periods) == 1: | |
self._default_service_period = self.service_periods.values()[0] | |
return self._default_service_period | |
def NewDefaultServicePeriod(self): | |
"""Create a new ServicePeriod object, make it the default service period and | |
return it. The default service period is used when you create a trip without | |
providing an explicit service period. """ | |
service_period = ServicePeriod() | |
service_period.service_id = FindUniqueId(self.service_periods) | |
# blank service won't validate in AddServicePeriodObject | |
self.SetDefaultServicePeriod(service_period, validate=False) | |
return service_period | |
def SetDefaultServicePeriod(self, service_period, validate=True): | |
assert isinstance(service_period, ServicePeriod) | |
self._default_service_period = service_period | |
if service_period.service_id not in self.service_periods: | |
self.AddServicePeriodObject(service_period, validate=validate) | |
def AddServicePeriodObject(self, service_period, problem_reporter=None, | |
validate=True): | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
if service_period.service_id in self.service_periods: | |
problem_reporter.DuplicateID('service_id', service_period.service_id) | |
return | |
if validate: | |
service_period.Validate(problem_reporter) | |
self.service_periods[service_period.service_id] = service_period | |
def GetServicePeriodList(self): | |
return self.service_periods.values() | |
def GetDateRange(self): | |
"""Returns a tuple of (earliest, latest) dates on which the service | |
periods in the schedule define service, in YYYYMMDD form.""" | |
ranges = [period.GetDateRange() for period in self.GetServicePeriodList()] | |
starts = filter(lambda x: x, [item[0] for item in ranges]) | |
ends = filter(lambda x: x, [item[1] for item in ranges]) | |
if not starts or not ends: | |
return (None, None) | |
return (min(starts), max(ends)) | |
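A standalone sketch of how GetDateRange combines the per-period ranges above: drop missing endpoints, then take the overall minimum start and maximum end, falling back to (None, None) when nothing remains. The helper name is illustrative.

```python
def combine_date_ranges(ranges):
    """ranges: iterable of (start, end) pairs of "YYYYMMDD" strings,
    where either endpoint may be None."""
    starts = [s for s, e in ranges if s]
    ends = [e for s, e in ranges if e]
    if not starts or not ends:
        return (None, None)
    return (min(starts), max(ends))
```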
def GetServicePeriodsActiveEachDate(self, date_start, date_end): | |
"""Return a list of tuples (date, [period1, period2, ...]). | |
For each date in the range [date_start, date_end) make list of each | |
ServicePeriod object which is active. | |
Args: | |
date_start: The first date in the list, a date object | |
date_end: The first date after the list, a date object | |
Returns: | |
A list of tuples. Each tuple contains a date object and a list of zero or | |
more ServicePeriod objects. | |
""" | |
date_it = date_start | |
one_day = datetime.timedelta(days=1) | |
date_service_period_list = [] | |
while date_it < date_end: | |
periods_today = [] | |
date_it_string = date_it.strftime("%Y%m%d") | |
for service in self.GetServicePeriodList(): | |
if service.IsActiveOn(date_it_string, date_it): | |
periods_today.append(service) | |
date_service_period_list.append((date_it, periods_today)) | |
date_it += one_day | |
return date_service_period_list | |
def AddStop(self, lat, lng, name): | |
"""Add a stop to this schedule. | |
A new stop_id is created for this stop. Do not use this method unless all | |
stops in this Schedule are created with it. See source for details. | |
Args: | |
lat: Latitude of the stop as a float or string | |
lng: Longitude of the stop as a float or string | |
name: Name of the stop, which will appear in the feed | |
Returns: | |
A new Stop object | |
""" | |
# TODO: stop_id isn't guaranteed to be unique and conflicts are not | |
# handled. Please fix. | |
stop_id = unicode(len(self.stops)) | |
stop = Stop(stop_id=stop_id, lat=lat, lng=lng, name=name) | |
self.AddStopObject(stop) | |
return stop | |
def AddStopObject(self, stop, problem_reporter=None): | |
"""Add Stop object to this schedule if stop_id is non-blank.""" | |
assert stop._schedule is None | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
if not stop.stop_id: | |
return | |
if stop.stop_id in self.stops: | |
problem_reporter.DuplicateID('stop_id', stop.stop_id) | |
return | |
stop._schedule = weakref.proxy(self) | |
self.AddTableColumns('stops', stop._ColumnNames()) | |
self.stops[stop.stop_id] = stop | |
if hasattr(stop, 'zone_id') and stop.zone_id: | |
self.fare_zones[stop.zone_id] = True | |
def GetStopList(self): | |
return self.stops.values() | |
def AddRoute(self, short_name, long_name, route_type): | |
"""Add a route to this schedule. | |
Args: | |
short_name: Short name of the route, such as "71L" | |
long_name: Full name of the route, such as "NW 21st Ave/St Helens Rd" | |
route_type: A type such as "Tram", "Subway" or "Bus" | |
Returns: | |
A new Route object | |
""" | |
route_id = unicode(len(self.routes)) | |
route = Route(short_name=short_name, long_name=long_name, | |
route_type=route_type, route_id=route_id) | |
route.agency_id = self.GetDefaultAgency().agency_id | |
self.AddRouteObject(route) | |
return route | |
def AddRouteObject(self, route, problem_reporter=None): | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
route.Validate(problem_reporter) | |
if route.route_id in self.routes: | |
problem_reporter.DuplicateID('route_id', route.route_id) | |
return | |
if route.agency_id not in self._agencies: | |
if not route.agency_id and len(self._agencies) == 1: | |
# we'll just assume that the route applies to the only agency | |
pass | |
else: | |
problem_reporter.InvalidValue('agency_id', route.agency_id, | |
'Route uses an unknown agency_id.') | |
return | |
self.AddTableColumns('routes', route._ColumnNames()) | |
route._schedule = weakref.proxy(self) | |
self.routes[route.route_id] = route | |
def GetRouteList(self): | |
return self.routes.values() | |
def GetRoute(self, route_id): | |
return self.routes[route_id] | |
def AddShapeObject(self, shape, problem_reporter=None): | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
shape.Validate(problem_reporter) | |
if shape.shape_id in self._shapes: | |
problem_reporter.DuplicateID('shape_id', shape.shape_id) | |
return | |
self._shapes[shape.shape_id] = shape | |
def GetShapeList(self): | |
return self._shapes.values() | |
def GetShape(self, shape_id): | |
return self._shapes[shape_id] | |
def AddTripObject(self, trip, problem_reporter=None, validate=True): | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
if trip.trip_id in self.trips: | |
problem_reporter.DuplicateID('trip_id', trip.trip_id) | |
return | |
self.AddTableColumns('trips', trip._ColumnNames()) | |
trip._schedule = weakref.proxy(self) | |
self.trips[trip.trip_id] = trip | |
# Call Trip.Validate after setting trip._schedule so that references | |
# are checked. trip.ValidateChildren will be called directly by | |
# schedule.Validate, after stop_times has been loaded. | |
if validate: | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
trip.Validate(problem_reporter, validate_children=False) | |
try: | |
self.routes[trip.route_id]._AddTripObject(trip) | |
except KeyError: | |
# Invalid route_id was reported in the Trip.Validate call above | |
pass | |
def GetTripList(self): | |
return self.trips.values() | |
def GetTrip(self, trip_id): | |
return self.trips[trip_id] | |
def AddFareObject(self, fare, problem_reporter=None): | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
fare.Validate(problem_reporter) | |
if fare.fare_id in self.fares: | |
problem_reporter.DuplicateID('fare_id', fare.fare_id) | |
return | |
self.fares[fare.fare_id] = fare | |
def GetFareList(self): | |
return self.fares.values() | |
def GetFare(self, fare_id): | |
return self.fares[fare_id] | |
def AddFareRuleObject(self, rule, problem_reporter=None): | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
if IsEmpty(rule.fare_id): | |
problem_reporter.MissingValue('fare_id') | |
return | |
if rule.route_id and rule.route_id not in self.routes: | |
problem_reporter.InvalidValue('route_id', rule.route_id) | |
if rule.origin_id and rule.origin_id not in self.fare_zones: | |
problem_reporter.InvalidValue('origin_id', rule.origin_id) | |
if rule.destination_id and rule.destination_id not in self.fare_zones: | |
problem_reporter.InvalidValue('destination_id', rule.destination_id) | |
if rule.contains_id and rule.contains_id not in self.fare_zones: | |
problem_reporter.InvalidValue('contains_id', rule.contains_id) | |
if rule.fare_id in self.fares: | |
self.GetFare(rule.fare_id).rules.append(rule) | |
else: | |
problem_reporter.InvalidValue('fare_id', rule.fare_id, | |
'(This fare_id doesn\'t correspond to any ' | |
'of the IDs defined in the ' | |
'fare attributes.)') | |
def AddTransferObject(self, transfer, problem_reporter=None): | |
assert transfer._schedule is None, "only add Transfer to a schedule once" | |
transfer._schedule = weakref.proxy(self) # See weakref comment at top | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
transfer.Validate(problem_reporter) | |
self._transfers.append(transfer) | |
def GetTransferList(self): | |
return self._transfers | |
def GetStop(self, id): | |
return self.stops[id] | |
def GetFareZones(self): | |
"""Returns the list of all fare zones that have been identified by | |
the stops that have been added.""" | |
return self.fare_zones.keys() | |
def GetNearestStops(self, lat, lon, n=1): | |
"""Return the n nearest stops to lat,lon""" | |
dist_stop_list = [] | |
for s in self.stops.values(): | |
# TODO: Use ApproximateDistanceBetweenStops? | |
dist = (s.stop_lat - lat)**2 + (s.stop_lon - lon)**2 | |
if len(dist_stop_list) < n: | |
bisect.insort(dist_stop_list, (dist, s)) | |
elif dist < dist_stop_list[-1][0]: | |
bisect.insort(dist_stop_list, (dist, s)) | |
dist_stop_list.pop() # Remove stop with greatest distance | |
return [stop for dist, stop in dist_stop_list] | |
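A standalone sketch of the bounded nearest-n selection above: keep a sorted list of at most n (distance, item) pairs with `bisect.insort`, evicting the farthest candidate on overflow. Points are plain (lat, lon)-style tuples here, so squared Euclidean distance stands in for the real geographic distance.

```python
import bisect

def nearest_n(points, target, n=1):
    """Return the n points closest to target, nearest first."""
    tx, ty = target
    best = []  # sorted list of (squared_distance, point), at most n entries
    for p in points:
        d = (p[0] - tx) ** 2 + (p[1] - ty) ** 2
        if len(best) < n or d < best[-1][0]:
            bisect.insort(best, (d, p))
            if len(best) > n:
                best.pop()  # drop the farthest candidate
    return [p for d, p in best]
```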
def GetStopsInBoundingBox(self, north, east, south, west, n): | |
"""Return a sample of up to n stops in a bounding box""" | |
stop_list = [] | |
for s in self.stops.values(): | |
if (s.stop_lat <= north and s.stop_lat >= south and | |
s.stop_lon <= east and s.stop_lon >= west): | |
stop_list.append(s) | |
if len(stop_list) == n: | |
break | |
return stop_list | |
def Load(self, feed_path, extra_validation=False): | |
loader = Loader(feed_path, self, problems=self.problem_reporter, | |
extra_validation=extra_validation) | |
loader.Load() | |
def _WriteArchiveString(self, archive, filename, stringio): | |
zi = zipfile.ZipInfo(filename) | |
# See | |
# http://stackoverflow.com/questions/434641/how-do-i-set-permissions-attributes-on-a-file-in-a-zip-file-using-pythons-zipf | |
zi.external_attr = 0666 << 16L # Set unix permissions to -rw-rw-rw | |
# ZIP_DEFLATED requires zlib. zlib comes with Python 2.4 and 2.5 | |
zi.compress_type = zipfile.ZIP_DEFLATED | |
archive.writestr(zi, stringio.getvalue()) | |
def WriteGoogleTransitFeed(self, file): | |
"""Output this schedule as a Google Transit Feed in file_name. | |
Args: | |
file: path of new feed file (a string) or a file-like object | |
Returns: | |
None | |
""" | |
# Compression type given when adding each file | |
archive = zipfile.ZipFile(file, 'w') | |
if 'agency' in self._table_columns: | |
agency_string = StringIO.StringIO() | |
writer = CsvUnicodeWriter(agency_string) | |
columns = self.GetTableColumns('agency') | |
writer.writerow(columns) | |
for a in self._agencies.values(): | |
writer.writerow([EncodeUnicode(a[c]) for c in columns]) | |
self._WriteArchiveString(archive, 'agency.txt', agency_string) | |
calendar_dates_string = StringIO.StringIO() | |
writer = CsvUnicodeWriter(calendar_dates_string) | |
writer.writerow(ServicePeriod._FIELD_NAMES_CALENDAR_DATES) | |
has_data = False | |
for period in self.service_periods.values(): | |
for row in period.GenerateCalendarDatesFieldValuesTuples(): | |
has_data = True | |
writer.writerow(row) | |
wrote_calendar_dates = False | |
if has_data: | |
wrote_calendar_dates = True | |
self._WriteArchiveString(archive, 'calendar_dates.txt', | |
calendar_dates_string) | |
calendar_string = StringIO.StringIO() | |
writer = CsvUnicodeWriter(calendar_string) | |
writer.writerow(ServicePeriod._FIELD_NAMES) | |
has_data = False | |
for s in self.service_periods.values(): | |
row = s.GetCalendarFieldValuesTuple() | |
if row: | |
has_data = True | |
writer.writerow(row) | |
if has_data or not wrote_calendar_dates: | |
self._WriteArchiveString(archive, 'calendar.txt', calendar_string) | |
if 'stops' in self._table_columns: | |
stop_string = StringIO.StringIO() | |
writer = CsvUnicodeWriter(stop_string) | |
columns = self.GetTableColumns('stops') | |
writer.writerow(columns) | |
for s in self.stops.values(): | |
writer.writerow([EncodeUnicode(s[c]) for c in columns]) | |
self._WriteArchiveString(archive, 'stops.txt', stop_string) | |
if 'routes' in self._table_columns: | |
route_string = StringIO.StringIO() | |
writer = CsvUnicodeWriter(route_string) | |
columns = self.GetTableColumns('routes') | |
writer.writerow(columns) | |
for r in self.routes.values(): | |
writer.writerow([EncodeUnicode(r[c]) for c in columns]) | |
self._WriteArchiveString(archive, 'routes.txt', route_string) | |
if 'trips' in self._table_columns: | |
trips_string = StringIO.StringIO() | |
writer = CsvUnicodeWriter(trips_string) | |
columns = self.GetTableColumns('trips') | |
writer.writerow(columns) | |
for t in self.trips.values(): | |
writer.writerow([EncodeUnicode(t[c]) for c in columns]) | |
self._WriteArchiveString(archive, 'trips.txt', trips_string) | |
# write frequencies.txt (if applicable) | |
headway_rows = [] | |
for trip in self.GetTripList(): | |
headway_rows += trip.GetHeadwayPeriodOutputTuples() | |
if headway_rows: | |
headway_string = StringIO.StringIO() | |
writer = CsvUnicodeWriter(headway_string) | |
writer.writerow(Trip._FIELD_NAMES_HEADWAY) | |
writer.writerows(headway_rows) | |
self._WriteArchiveString(archive, 'frequencies.txt', headway_string) | |
# write fares (if applicable) | |
if self.GetFareList(): | |
fare_string = StringIO.StringIO() | |
writer = CsvUnicodeWriter(fare_string) | |
writer.writerow(Fare._FIELD_NAMES) | |
writer.writerows(f.GetFieldValuesTuple() for f in self.GetFareList()) | |
self._WriteArchiveString(archive, 'fare_attributes.txt', fare_string) | |
# write fare rules (if applicable) | |
rule_rows = [] | |
for fare in self.GetFareList(): | |
for rule in fare.GetFareRuleList(): | |
rule_rows.append(rule.GetFieldValuesTuple()) | |
if rule_rows: | |
rule_string = StringIO.StringIO() | |
writer = CsvUnicodeWriter(rule_string) | |
writer.writerow(FareRule._FIELD_NAMES) | |
writer.writerows(rule_rows) | |
self._WriteArchiveString(archive, 'fare_rules.txt', rule_string) | |
stop_times_string = StringIO.StringIO() | |
writer = CsvUnicodeWriter(stop_times_string) | |
writer.writerow(StopTime._FIELD_NAMES) | |
for t in self.trips.values(): | |
writer.writerows(t._GenerateStopTimesTuples()) | |
self._WriteArchiveString(archive, 'stop_times.txt', stop_times_string) | |
# write shapes (if applicable) | |
shape_rows = [] | |
for shape in self.GetShapeList(): | |
seq = 1 | |
for (lat, lon, dist) in shape.points: | |
shape_rows.append((shape.shape_id, lat, lon, seq, dist)) | |
seq += 1 | |
if shape_rows: | |
shape_string = StringIO.StringIO() | |
writer = CsvUnicodeWriter(shape_string) | |
writer.writerow(Shape._FIELD_NAMES) | |
writer.writerows(shape_rows) | |
self._WriteArchiveString(archive, 'shapes.txt', shape_string) | |
# write transfers (if applicable) | |
if self.GetTransferList(): | |
transfer_string = StringIO.StringIO() | |
writer = CsvUnicodeWriter(transfer_string) | |
writer.writerow(Transfer._FIELD_NAMES) | |
writer.writerows(f.GetFieldValuesTuple() for f in self.GetTransferList()) | |
self._WriteArchiveString(archive, 'transfers.txt', transfer_string) | |
archive.close() | |
def GenerateDateTripsDeparturesList(self, date_start, date_end): | |
"""Return a list of (date object, number of trips, number of departures). | |
The list is generated for dates in the range [date_start, date_end). | |
Args: | |
date_start: The first date in the list, a date object | |
date_end: The first date after the list, a date object | |
Returns: | |
a list of (date object, number of trips, number of departures) tuples | |
""" | |
service_id_to_trips = defaultdict(lambda: 0) | |
service_id_to_departures = defaultdict(lambda: 0) | |
for trip in self.GetTripList(): | |
headway_start_times = trip.GetHeadwayStartTimes() | |
if headway_start_times: | |
trip_runs = len(headway_start_times) | |
else: | |
trip_runs = 1 | |
service_id_to_trips[trip.service_id] += trip_runs | |
service_id_to_departures[trip.service_id] += ( | |
(trip.GetCountStopTimes() - 1) * trip_runs) | |
date_services = self.GetServicePeriodsActiveEachDate(date_start, date_end) | |
date_trips = [] | |
for date, services in date_services: | |
day_trips = sum(service_id_to_trips[s.service_id] for s in services) | |
day_departures = sum( | |
service_id_to_departures[s.service_id] for s in services) | |
date_trips.append((date, day_trips, day_departures)) | |
return date_trips | |
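A sketch of the per-service tallies built above: each trip contributes trip_runs trips and (stop_count - 1) * trip_runs departures to its service_id. The flat tuple input is an illustrative simplification of the Trip objects used by the real method.

```python
from collections import defaultdict

def tally_by_service(trips):
    """trips: iterable of (service_id, trip_runs, stop_count) tuples."""
    trips_by_service = defaultdict(int)
    departures_by_service = defaultdict(int)
    for service_id, trip_runs, stop_count in trips:
        trips_by_service[service_id] += trip_runs
        # A trip with k stops makes k - 1 departures per run
        departures_by_service[service_id] += (stop_count - 1) * trip_runs
    return trips_by_service, departures_by_service
```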
def ValidateFeedStartAndExpirationDates(self, | |
problems, | |
first_date, | |
last_date, | |
today): | |
"""Validate the start and expiration dates of the feed. | |
Issue a warning if the feed does not start until a future date, or if | |
it expires within the next 60 days. | |
Args: | |
problems: The problem reporter object | |
first_date: A date object representing the first day the feed is active | |
last_date: A date object representing the last day the feed is active | |
today: A date object representing the date the validation is being run on | |
Returns: | |
None | |
""" | |
warning_cutoff = today + datetime.timedelta(days=60) | |
if last_date < warning_cutoff: | |
problems.ExpirationDate(time.mktime(last_date.timetuple())) | |
if first_date > today: | |
problems.FutureService(time.mktime(first_date.timetuple())) | |
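A minimal sketch of the two date checks above, returning labels instead of calling a problem reporter. The function and label names are illustrative only.

```python
import datetime

def feed_date_warnings(first_date, last_date, today):
    """All arguments are datetime.date objects."""
    warnings = []
    # Warn if the feed expires within 60 days of the reference date
    if last_date < today + datetime.timedelta(days=60):
        warnings.append("expires soon")
    # Warn if the feed has not started yet
    if first_date > today:
        warnings.append("future service")
    return warnings
```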
def ValidateServiceGaps(self, | |
problems, | |
validation_start_date, | |
validation_end_date, | |
service_gap_interval): | |
"""Validate consecutive dates without service in the feed. | |
Issue a warning if it finds service gaps of at least | |
"service_gap_interval" consecutive days in the date range | |
[validation_start_date, validation_end_date) | |
Args: | |
problems: The problem reporter object | |
validation_start_date: A date object representing the date from which the | |
validation should take place | |
validation_end_date: A date object representing the first day after the | |
date range being validated | |
service_gap_interval: An integer giving the minimum number of consecutive | |
days without service that triggers a warning | |
Returns: | |
None | |
""" | |
if service_gap_interval is None: | |
return | |
departures = self.GenerateDateTripsDeparturesList(validation_start_date, | |
validation_end_date) | |
# The first day without service of the _current_ gap | |
first_day_without_service = validation_start_date | |
# The last day without service of the _current_ gap | |
last_day_without_service = validation_start_date | |
consecutive_days_without_service = 0 | |
for day_date, day_trips, _ in departures: | |
if day_trips == 0: | |
if consecutive_days_without_service == 0: | |
first_day_without_service = day_date | |
consecutive_days_without_service += 1 | |
last_day_without_service = day_date | |
else: | |
if consecutive_days_without_service >= service_gap_interval: | |
problems.TooManyDaysWithoutService(first_day_without_service, | |
last_day_without_service, | |
consecutive_days_without_service) | |
consecutive_days_without_service = 0 | |
# We have to check if there is a gap at the end of the specified date range | |
if consecutive_days_without_service >= service_gap_interval: | |
problems.TooManyDaysWithoutService(first_day_without_service, | |
last_day_without_service, | |
consecutive_days_without_service) | |
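The counting loop above can be exercised in isolation. A minimal Python 3 sketch (the `find_service_gaps` helper and its inputs are hypothetical stand-ins; the real method reads (date, trip_count, _) tuples from GenerateDateTripsDeparturesList and reports via the problem reporter):

```python
import datetime

def find_service_gaps(daily_trip_counts, gap_interval):
    """Return (first_day, last_day, length) for each run of at least
    gap_interval consecutive days with zero trips."""
    gaps = []
    first = last = None
    run = 0  # length of the current run of days without service
    for day, trips in daily_trip_counts:
        if trips == 0:
            if run == 0:
                first = day
            run += 1
            last = day
        else:
            if run >= gap_interval:
                gaps.append((first, last, run))
            run = 0
    if run >= gap_interval:  # a gap touching the end of the range
        gaps.append((first, last, run))
    return gaps

start = datetime.date(2010, 1, 1)
counts = [(start + datetime.timedelta(days=i), 0 if 3 <= i <= 9 else 5)
          for i in range(14)]
print(find_service_gaps(counts, 7))
```

Note the trailing check after the loop, mirroring the one in ValidateServiceGaps: without it a gap that runs up to the end of the range would never be reported.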
def Validate(self, | |
problems=None, | |
validate_children=True, | |
today=None, | |
service_gap_interval=None): | |
"""Validates various holistic aspects of the schedule | |
(mostly interrelationships between the various data sets).""" | |
if today is None: | |
today = datetime.date.today() | |
if not problems: | |
problems = self.problem_reporter | |
(start_date, end_date) = self.GetDateRange() | |
if not end_date or not start_date: | |
problems.OtherProblem('This feed has no effective service dates!', | |
type=TYPE_WARNING) | |
else: | |
try: | |
last_service_day = datetime.datetime( | |
*(time.strptime(end_date, "%Y%m%d")[0:6])).date() | |
first_service_day = datetime.datetime( | |
*(time.strptime(start_date, "%Y%m%d")[0:6])).date() | |
except ValueError: | |
# Format of start_date and end_date checked in class ServicePeriod | |
pass | |
else: | |
self.ValidateFeedStartAndExpirationDates(problems, | |
first_service_day, | |
last_service_day, | |
today) | |
# We start checking for service gaps a bit in the past if the | |
# feed was active then. See | |
# http://code.google.com/p/googletransitdatafeed/issues/detail?id=188 | |
# | |
# We subtract 1 from service_gap_interval so that if today has | |
# service no warning is issued. | |
# | |
# Service gaps are searched for only up to one year from today | |
if service_gap_interval is not None: | |
service_gap_timedelta = datetime.timedelta( | |
days=service_gap_interval - 1) | |
one_year = datetime.timedelta(days=365) | |
self.ValidateServiceGaps( | |
problems, | |
max(first_service_day, | |
today - service_gap_timedelta), | |
min(last_service_day, | |
today + one_year), | |
service_gap_interval) | |
# TODO: Check Trip fields against valid values | |
# Check for stops that aren't referenced by any trips and broken | |
# parent_station references. Also check that the parent station isn't too | |
# far from its child stops. | |
for stop in self.stops.values(): | |
if validate_children: | |
stop.Validate(problems) | |
cursor = self._connection.cursor() | |
cursor.execute("SELECT count(*) FROM stop_times WHERE stop_id=? LIMIT 1", | |
(stop.stop_id,)) | |
count = cursor.fetchone()[0] | |
if stop.location_type == 0 and count == 0: | |
problems.UnusedStop(stop.stop_id, stop.stop_name) | |
elif stop.location_type == 1 and count != 0: | |
problems.UsedStation(stop.stop_id, stop.stop_name) | |
if stop.location_type != 1 and stop.parent_station: | |
if stop.parent_station not in self.stops: | |
problems.InvalidValue("parent_station", | |
EncodeUnicode(stop.parent_station), | |
"parent_station '%s' not found for stop_id " | |
"'%s' in stops.txt" % | |
(EncodeUnicode(stop.parent_station), | |
EncodeUnicode(stop.stop_id))) | |
elif self.stops[stop.parent_station].location_type != 1: | |
problems.InvalidValue("parent_station", | |
EncodeUnicode(stop.parent_station), | |
"parent_station '%s' of stop_id '%s' must " | |
"have location_type=1 in stops.txt" % | |
(EncodeUnicode(stop.parent_station), | |
EncodeUnicode(stop.stop_id))) | |
else: | |
parent_station = self.stops[stop.parent_station] | |
distance = ApproximateDistanceBetweenStops(stop, parent_station) | |
if distance > MAX_DISTANCE_BETWEEN_STOP_AND_PARENT_STATION_ERROR: | |
problems.StopTooFarFromParentStation( | |
stop.stop_id, stop.stop_name, parent_station.stop_id, | |
parent_station.stop_name, distance, TYPE_ERROR) | |
elif distance > MAX_DISTANCE_BETWEEN_STOP_AND_PARENT_STATION_WARNING: | |
problems.StopTooFarFromParentStation( | |
stop.stop_id, stop.stop_name, parent_station.stop_id, | |
parent_station.stop_name, distance, TYPE_WARNING) | |
#TODO: check that every station is used. | |
# Then uncomment testStationWithoutReference. | |
    # Check for stops that might represent the same location (specifically,
    # stops that are less than 2 meters apart). First filter out stops without a
    # valid lat and lon. Then sort by latitude, then find the distance between
    # each pair of stations within 2 meters latitude of each other. This avoids
    # doing n^2 comparisons in the average case and doesn't need a spatial
    # index.
sorted_stops = filter(lambda s: s.stop_lat and s.stop_lon, | |
self.GetStopList()) | |
sorted_stops.sort(key=(lambda x: x.stop_lat)) | |
TWO_METERS_LAT = 0.000018 | |
for index, stop in enumerate(sorted_stops[:-1]): | |
index += 1 | |
while ((index < len(sorted_stops)) and | |
((sorted_stops[index].stop_lat - stop.stop_lat) < TWO_METERS_LAT)): | |
distance = ApproximateDistanceBetweenStops(stop, sorted_stops[index]) | |
if distance < 2: | |
other_stop = sorted_stops[index] | |
if stop.location_type == 0 and other_stop.location_type == 0: | |
problems.StopsTooClose( | |
EncodeUnicode(stop.stop_name), | |
EncodeUnicode(stop.stop_id), | |
EncodeUnicode(other_stop.stop_name), | |
EncodeUnicode(other_stop.stop_id), distance) | |
elif stop.location_type == 1 and other_stop.location_type == 1: | |
problems.StationsTooClose( | |
EncodeUnicode(stop.stop_name), EncodeUnicode(stop.stop_id), | |
EncodeUnicode(other_stop.stop_name), | |
EncodeUnicode(other_stop.stop_id), distance) | |
elif (stop.location_type in (0, 1) and | |
other_stop.location_type in (0, 1)): | |
if stop.location_type == 0 and other_stop.location_type == 1: | |
this_stop = stop | |
this_station = other_stop | |
elif stop.location_type == 1 and other_stop.location_type == 0: | |
this_stop = other_stop | |
this_station = stop | |
if this_stop.parent_station != this_station.stop_id: | |
problems.DifferentStationTooClose( | |
EncodeUnicode(this_stop.stop_name), | |
EncodeUnicode(this_stop.stop_id), | |
EncodeUnicode(this_station.stop_name), | |
EncodeUnicode(this_station.stop_id), distance) | |
index += 1 | |
# Check for multiple routes using same short + long name | |
route_names = {} | |
for route in self.routes.values(): | |
if validate_children: | |
route.Validate(problems) | |
short_name = '' | |
if not IsEmpty(route.route_short_name): | |
short_name = route.route_short_name.lower().strip() | |
long_name = '' | |
if not IsEmpty(route.route_long_name): | |
long_name = route.route_long_name.lower().strip() | |
name = (short_name, long_name) | |
if name in route_names: | |
problems.InvalidValue('route_long_name', | |
long_name, | |
'The same combination of ' | |
'route_short_name and route_long_name ' | |
'shouldn\'t be used for more than one ' | |
                            'route, as it is for the two routes '
'with IDs "%s" and "%s".' % | |
(route.route_id, route_names[name].route_id), | |
type=TYPE_WARNING) | |
else: | |
route_names[name] = route | |
stop_types = {} # a dict mapping stop_id to [route_id, route_type, is_match] | |
trips = {} # a dict mapping tuple to (route_id, trip_id) | |
for trip in sorted(self.trips.values()): | |
if trip.route_id not in self.routes: | |
continue | |
route_type = self.GetRoute(trip.route_id).route_type | |
arrival_times = [] | |
stop_ids = [] | |
for index, st in enumerate(trip.GetStopTimes(problems)): | |
stop_id = st.stop.stop_id | |
arrival_times.append(st.arrival_time) | |
stop_ids.append(stop_id) | |
        # Check for stops that belong to both subway and bus routes.
if (route_type == Route._ROUTE_TYPE_NAMES['Subway'] or | |
route_type == Route._ROUTE_TYPE_NAMES['Bus']): | |
if stop_id not in stop_types: | |
stop_types[stop_id] = [trip.route_id, route_type, 0] | |
elif (stop_types[stop_id][1] != route_type and | |
stop_types[stop_id][2] == 0): | |
stop_types[stop_id][2] = 1 | |
if stop_types[stop_id][1] == Route._ROUTE_TYPE_NAMES['Subway']: | |
subway_route_id = stop_types[stop_id][0] | |
bus_route_id = trip.route_id | |
else: | |
subway_route_id = trip.route_id | |
bus_route_id = stop_types[stop_id][0] | |
problems.StopWithMultipleRouteTypes(st.stop.stop_name, stop_id, | |
subway_route_id, bus_route_id) | |
      # Check for duplicate trips that go through the same stops with the
      # same service and start times.
if self._check_duplicate_trips: | |
if not stop_ids or not arrival_times: | |
continue | |
key = (trip.service_id, min(arrival_times), str(stop_ids)) | |
if key not in trips: | |
trips[key] = (trip.route_id, trip.trip_id) | |
else: | |
problems.DuplicateTrip(trips[key][1], trips[key][0], trip.trip_id, | |
trip.route_id) | |
# Check that routes' agency IDs are valid, if set | |
for route in self.routes.values(): | |
if (not IsEmpty(route.agency_id) and | |
not route.agency_id in self._agencies): | |
problems.InvalidValue('agency_id', | |
route.agency_id, | |
'The route with ID "%s" specifies agency_id ' | |
'"%s", which doesn\'t exist.' % | |
(route.route_id, route.agency_id)) | |
# Make sure all trips have stop_times | |
# We're doing this here instead of in Trip.Validate() so that | |
# Trips can be validated without error during the reading of trips.txt | |
for trip in self.trips.values(): | |
trip.ValidateChildren(problems) | |
count_stop_times = trip.GetCountStopTimes() | |
if not count_stop_times: | |
problems.OtherProblem('The trip with the trip_id "%s" doesn\'t have ' | |
'any stop times defined.' % trip.trip_id, | |
type=TYPE_WARNING) | |
if len(trip._headways) > 0: # no stoptimes, but there are headways | |
problems.OtherProblem('Frequencies defined, but no stop times given ' | |
'in trip %s' % trip.trip_id, type=TYPE_ERROR) | |
elif count_stop_times == 1: | |
problems.OtherProblem('The trip with the trip_id "%s" only has one ' | |
'stop on it; it should have at least one more ' | |
'stop so that the riders can leave!' % | |
trip.trip_id, type=TYPE_WARNING) | |
else: | |
# These methods report InvalidValue if there's no first or last time | |
trip.GetStartTime(problems=problems) | |
trip.GetEndTime(problems=problems) | |
# Check for unused shapes | |
known_shape_ids = set(self._shapes.keys()) | |
used_shape_ids = set() | |
for trip in self.GetTripList(): | |
used_shape_ids.add(trip.shape_id) | |
unused_shape_ids = known_shape_ids - used_shape_ids | |
if unused_shape_ids: | |
problems.OtherProblem('The shapes with the following shape_ids aren\'t ' | |
'used by any trips: %s' % | |
', '.join(unused_shape_ids), | |
type=TYPE_WARNING) | |
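The near-duplicate-stop scan in Validate() sorts stops by latitude and only compares pairs within ~2 meters of latitude, avoiding n^2 comparisons without a spatial index. A Python 3 sketch under simplified assumptions (plain (lat, lon) tuples instead of Stop objects, and a crude equirectangular `distance_m` in place of ApproximateDistanceBetweenStops):

```python
import math

TWO_METERS_LAT = 0.000018  # roughly 2 m expressed in degrees of latitude

def distance_m(a, b):
    # Equirectangular approximation; adequate at metre scales.
    mid_lat = math.radians((a[0] + b[0]) / 2)
    dx = math.radians(b[1] - a[1]) * math.cos(mid_lat) * 6371000
    dy = math.radians(b[0] - a[0]) * 6371000
    return math.hypot(dx, dy)

def close_pairs(stops):
    """Return pairs of stops less than 2 m apart, comparing only stops
    whose latitudes differ by less than TWO_METERS_LAT."""
    stops = sorted(stops, key=lambda s: s[0])  # sort by latitude
    pairs = []
    for i, stop in enumerate(stops[:-1]):
        j = i + 1
        while j < len(stops) and stops[j][0] - stop[0] < TWO_METERS_LAT:
            if distance_m(stop, stops[j]) < 2:
                pairs.append((stop, stops[j]))
            j += 1
    return pairs

print(close_pairs([(47.0, -122.0), (47.000001, -122.0), (47.1, -122.0)]))
```

Because the list is sorted, the inner while loop stops as soon as the latitude gap exceeds the threshold, so for typical feeds each stop is compared against only a handful of neighbors.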
# Map from literal string that should never be found in the csv data to a human | |
# readable description | |
INVALID_LINE_SEPARATOR_UTF8 = { | |
"\x0c": "ASCII Form Feed 0x0C", | |
# May be part of end of line, but not found elsewhere | |
"\x0d": "ASCII Carriage Return 0x0D, \\r", | |
"\xe2\x80\xa8": "Unicode LINE SEPARATOR U+2028", | |
"\xe2\x80\xa9": "Unicode PARAGRAPH SEPARATOR U+2029", | |
"\xc2\x85": "Unicode NEXT LINE SEPARATOR U+0085", | |
} | |
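The table is consulted per line: each forbidden byte sequence is searched for in the line body and its human-readable name reported on a hit. A Python 3 sketch (the module stores UTF-8 byte sequences as Python 2 strings; this illustration uses Python 3 text and a trimmed, hypothetical table):

```python
# Trimmed stand-in for INVALID_LINE_SEPARATOR_UTF8, using text escapes.
INVALID_SEPARATORS = {
    "\x0c": "ASCII Form Feed 0x0C",
    "\u2028": "Unicode LINE SEPARATOR U+2028",
}

def find_invalid_separators(line):
    """Return the descriptions of all forbidden separators found in line."""
    return [name for seq, name in INVALID_SEPARATORS.items() if seq in line]

print(find_invalid_separators("stop_id,stop\x0cname"))
```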
class EndOfLineChecker: | |
"""Wrapper for a file-like object that checks for consistent line ends. | |
The check for consistent end of lines (all CR LF or all LF) only happens if | |
next() is called until it raises StopIteration. | |
""" | |
def __init__(self, f, name, problems): | |
"""Create new object. | |
Args: | |
f: file-like object to wrap | |
name: name to use for f. StringIO objects don't have a name attribute. | |
problems: a ProblemReporterBase object | |
""" | |
self._f = f | |
self._name = name | |
self._crlf = 0 | |
self._crlf_examples = [] | |
self._lf = 0 | |
self._lf_examples = [] | |
self._line_number = 0 # first line will be number 1 | |
self._problems = problems | |
def __iter__(self): | |
return self | |
def next(self): | |
"""Return next line without end of line marker or raise StopIteration.""" | |
try: | |
next_line = self._f.next() | |
except StopIteration: | |
self._FinalCheck() | |
raise | |
self._line_number += 1 | |
m_eol = re.search(r"[\x0a\x0d]*$", next_line) | |
if m_eol.group() == "\x0d\x0a": | |
self._crlf += 1 | |
if self._crlf <= 5: | |
self._crlf_examples.append(self._line_number) | |
elif m_eol.group() == "\x0a": | |
self._lf += 1 | |
if self._lf <= 5: | |
self._lf_examples.append(self._line_number) | |
elif m_eol.group() == "": | |
# Should only happen at the end of the file | |
try: | |
self._f.next() | |
raise RuntimeError("Unexpected row without new line sequence") | |
except StopIteration: | |
# Will be raised again when EndOfLineChecker.next() is next called | |
pass | |
else: | |
self._problems.InvalidLineEnd( | |
codecs.getencoder('string_escape')(m_eol.group())[0], | |
(self._name, self._line_number)) | |
next_line_contents = next_line[0:m_eol.start()] | |
for seq, name in INVALID_LINE_SEPARATOR_UTF8.items(): | |
if next_line_contents.find(seq) != -1: | |
self._problems.OtherProblem( | |
"Line contains %s" % name, | |
context=(self._name, self._line_number)) | |
return next_line_contents | |
def _FinalCheck(self): | |
if self._crlf > 0 and self._lf > 0: | |
crlf_plural = self._crlf > 1 and "s" or "" | |
crlf_lines = ", ".join(["%s" % e for e in self._crlf_examples]) | |
if self._crlf > len(self._crlf_examples): | |
crlf_lines += ", ..." | |
lf_plural = self._lf > 1 and "s" or "" | |
lf_lines = ", ".join(["%s" % e for e in self._lf_examples]) | |
if self._lf > len(self._lf_examples): | |
lf_lines += ", ..." | |
self._problems.OtherProblem( | |
"Found %d CR LF \"\\r\\n\" line end%s (line%s %s) and " | |
"%d LF \"\\n\" line end%s (line%s %s). A file must use a " | |
"consistent line end." % (self._crlf, crlf_plural, crlf_plural, | |
crlf_lines, self._lf, lf_plural, | |
lf_plural, lf_lines), | |
(self._name,)) | |
# Prevent _FinalCheck() from reporting the problem twice, in the unlikely | |
# case that it is run twice | |
self._crlf = 0 | |
self._lf = 0 | |
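The heart of EndOfLineChecker is the regex that captures the trailing CR/LF bytes of each line so "\r\n", "\n", no terminator, and anything else can be counted separately. A Python 3 sketch of that classification (the `classify_eol` name is illustrative only):

```python
import re

def classify_eol(line):
    """Classify a line's end-of-line marker, as EndOfLineChecker.next() does."""
    eol = re.search(r"[\x0a\x0d]*$", line).group()
    return {"\r\n": "crlf", "\n": "lf", "": "none"}.get(eol, "invalid")

print(classify_eol("stop_id,stop_name\r\n"))
print(classify_eol("stop_id,stop_name\n"))
```

A bare "\r", for example, falls through to "invalid", which is the case that triggers InvalidLineEnd in the real checker.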
# Filenames specified in GTFS spec | |
KNOWN_FILENAMES = [ | |
'agency.txt', | |
'stops.txt', | |
'routes.txt', | |
'trips.txt', | |
'stop_times.txt', | |
'calendar.txt', | |
'calendar_dates.txt', | |
'fare_attributes.txt', | |
'fare_rules.txt', | |
'shapes.txt', | |
'frequencies.txt', | |
'transfers.txt', | |
] | |
class Loader: | |
def __init__(self, | |
feed_path=None, | |
schedule=None, | |
problems=default_problem_reporter, | |
extra_validation=False, | |
load_stop_times=True, | |
memory_db=True, | |
zip=None, | |
check_duplicate_trips=False): | |
"""Initialize a new Loader object. | |
Args: | |
feed_path: string path to a zip file or directory | |
schedule: a Schedule object or None to have one created | |
problems: a ProblemReporter object, the default reporter raises an | |
exception for each problem | |
extra_validation: True if you would like extra validation | |
load_stop_times: load the stop_times table, used to speed load time when | |
times are not needed. The default is True. | |
memory_db: if creating a new Schedule object use an in-memory sqlite | |
database instead of creating one in a temporary file | |
      zip: a zipfile.ZipFile object, optionally used instead of path
      check_duplicate_trips: if True, check for trips that use the same
        stops, service period and start time as another trip
    """
if not schedule: | |
schedule = Schedule(problem_reporter=problems, memory_db=memory_db, | |
check_duplicate_trips=check_duplicate_trips) | |
self._extra_validation = extra_validation | |
self._schedule = schedule | |
self._problems = problems | |
self._path = feed_path | |
self._zip = zip | |
self._load_stop_times = load_stop_times | |
def _DetermineFormat(self): | |
"""Determines whether the feed is in a form that we understand, and | |
if so, returns True.""" | |
if self._zip: | |
# If zip was passed to __init__ then path isn't used | |
assert not self._path | |
return True | |
if not isinstance(self._path, basestring) and hasattr(self._path, 'read'): | |
# A file-like object, used for testing with a StringIO file | |
self._zip = zipfile.ZipFile(self._path, mode='r') | |
return True | |
if not os.path.exists(self._path): | |
self._problems.FeedNotFound(self._path) | |
return False | |
if self._path.endswith('.zip'): | |
try: | |
self._zip = zipfile.ZipFile(self._path, mode='r') | |
except IOError: # self._path is a directory | |
pass | |
except zipfile.BadZipfile: | |
self._problems.UnknownFormat(self._path) | |
return False | |
if not self._zip and not os.path.isdir(self._path): | |
self._problems.UnknownFormat(self._path) | |
return False | |
return True | |
def _GetFileNames(self): | |
"""Returns a list of file names in the feed.""" | |
if self._zip: | |
return self._zip.namelist() | |
else: | |
return os.listdir(self._path) | |
def _CheckFileNames(self): | |
filenames = self._GetFileNames() | |
for feed_file in filenames: | |
if feed_file not in KNOWN_FILENAMES: | |
if not feed_file.startswith('.'): | |
# Don't worry about .svn files and other hidden files | |
# as this will break the tests. | |
self._problems.UnknownFile(feed_file) | |
def _GetUtf8Contents(self, file_name): | |
"""Check for errors in file_name and return a string for csv reader.""" | |
contents = self._FileContents(file_name) | |
if not contents: # Missing file | |
return | |
# Check for errors that will prevent csv.reader from working | |
if len(contents) >= 2 and contents[0:2] in (codecs.BOM_UTF16_BE, | |
codecs.BOM_UTF16_LE): | |
self._problems.FileFormat("appears to be encoded in utf-16", (file_name, )) | |
# Convert and continue, so we can find more errors | |
contents = codecs.getdecoder('utf-16')(contents)[0].encode('utf-8') | |
null_index = contents.find('\0') | |
if null_index != -1: | |
# It is easier to get some surrounding text than calculate the exact | |
# row_num | |
m = re.search(r'.{,20}\0.{,20}', contents, re.DOTALL) | |
self._problems.FileFormat( | |
"contains a null in text \"%s\" at byte %d" % | |
(codecs.getencoder('string_escape')(m.group()), null_index + 1), | |
(file_name, )) | |
return | |
# strip out any UTF-8 Byte Order Marker (otherwise it'll be | |
# treated as part of the first column name, causing a mis-parse) | |
contents = contents.lstrip(codecs.BOM_UTF8) | |
return contents | |
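The encoding checks above can be summarized in a small Python 3 sketch: a UTF-16 byte-order mark triggers a transcode to UTF-8, and a leading UTF-8 BOM is removed so it cannot be parsed as part of the first column name. (The `to_utf8` name is hypothetical; the sketch removes the BOM as a prefix, whereas the module uses `lstrip`, and it omits the null-byte check and problem reporting.)

```python
import codecs

def to_utf8(contents):
    """Normalize raw feed-file bytes to BOM-free UTF-8."""
    if contents[:2] in (codecs.BOM_UTF16_BE, codecs.BOM_UTF16_LE):
        contents = contents.decode("utf-16").encode("utf-8")
    if contents.startswith(codecs.BOM_UTF8):
        contents = contents[len(codecs.BOM_UTF8):]
    return contents

print(to_utf8("stop_id,stop_name".encode("utf-16")))
```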
def _ReadCsvDict(self, file_name, all_cols, required): | |
"""Reads lines from file_name, yielding a dict of unicode values.""" | |
assert file_name.endswith(".txt") | |
table_name = file_name[0:-4] | |
contents = self._GetUtf8Contents(file_name) | |
if not contents: | |
return | |
eol_checker = EndOfLineChecker(StringIO.StringIO(contents), | |
file_name, self._problems) | |
    # The csv module doesn't provide a way to skip trailing space, but when I
    # checked 15/675 feeds had trailing space in a header row and 120 had spaces
    # after fields. Space after header fields can cause a serious parsing
    # problem, so warn. Space after body fields can cause a problem in time,
    # integer and id fields; these are validated at higher levels.
reader = csv.reader(eol_checker, skipinitialspace=True) | |
raw_header = reader.next() | |
header_occurrences = defaultdict(lambda: 0) | |
header = [] | |
valid_columns = [] # Index into raw_header and raw_row | |
for i, h in enumerate(raw_header): | |
h_stripped = h.strip() | |
if not h_stripped: | |
self._problems.CsvSyntax( | |
description="The header row should not contain any blank values. " | |
"The corresponding column will be skipped for the " | |
"entire file.", | |
context=(file_name, 1, [''] * len(raw_header), raw_header), | |
type=TYPE_ERROR) | |
continue | |
elif h != h_stripped: | |
self._problems.CsvSyntax( | |
description="The header row should not contain any " | |
"space characters.", | |
context=(file_name, 1, [''] * len(raw_header), raw_header), | |
type=TYPE_WARNING) | |
header.append(h_stripped) | |
valid_columns.append(i) | |
header_occurrences[h_stripped] += 1 | |
for name, count in header_occurrences.items(): | |
if count > 1: | |
self._problems.DuplicateColumn( | |
header=name, | |
file_name=file_name, | |
count=count) | |
self._schedule._table_columns[table_name] = header | |
# check for unrecognized columns, which are often misspellings | |
unknown_cols = set(header) - set(all_cols) | |
if len(unknown_cols) == len(header): | |
self._problems.CsvSyntax( | |
description="The header row did not contain any known column " | |
"names. The file is most likely missing the header row " | |
"or not in the expected CSV format.", | |
context=(file_name, 1, [''] * len(raw_header), raw_header), | |
type=TYPE_ERROR) | |
else: | |
for col in unknown_cols: | |
# this is provided in order to create a nice colored list of | |
# columns in the validator output | |
context = (file_name, 1, [''] * len(header), header) | |
self._problems.UnrecognizedColumn(file_name, col, context) | |
missing_cols = set(required) - set(header) | |
for col in missing_cols: | |
# this is provided in order to create a nice colored list of | |
# columns in the validator output | |
context = (file_name, 1, [''] * len(header), header) | |
self._problems.MissingColumn(file_name, col, context) | |
line_num = 1 # First line read by reader.next() above | |
for raw_row in reader: | |
line_num += 1 | |
if len(raw_row) == 0: # skip extra empty lines in file | |
continue | |
if len(raw_row) > len(raw_header): | |
self._problems.OtherProblem('Found too many cells (commas) in line ' | |
'%d of file "%s". Every row in the file ' | |
'should have the same number of cells as ' | |
'the header (first line) does.' % | |
(line_num, file_name), | |
(file_name, line_num), | |
type=TYPE_WARNING) | |
if len(raw_row) < len(raw_header): | |
self._problems.OtherProblem('Found missing cells (commas) in line ' | |
'%d of file "%s". Every row in the file ' | |
'should have the same number of cells as ' | |
'the header (first line) does.' % | |
(line_num, file_name), | |
(file_name, line_num), | |
type=TYPE_WARNING) | |
      # raw_row is a list of raw bytes which should be valid utf-8. Convert the
      # valid_columns entries of raw_row into Unicode.
valid_values = [] | |
unicode_error_columns = [] # index of valid_values elements with an error | |
for i in valid_columns: | |
try: | |
valid_values.append(raw_row[i].decode('utf-8')) | |
except UnicodeDecodeError: | |
# Replace all invalid characters with REPLACEMENT CHARACTER (U+FFFD) | |
valid_values.append(codecs.getdecoder("utf8") | |
(raw_row[i], errors="replace")[0]) | |
unicode_error_columns.append(len(valid_values) - 1) | |
except IndexError: | |
break | |
# The error report may contain a dump of all values in valid_values so | |
# problems can not be reported until after converting all of raw_row to | |
# Unicode. | |
for i in unicode_error_columns: | |
self._problems.InvalidValue(header[i], valid_values[i], | |
'Unicode error', | |
(file_name, line_num, | |
valid_values, header)) | |
d = dict(zip(header, valid_values)) | |
yield (d, line_num, header, valid_values) | |
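The header handling at the top of _ReadCsvDict can be isolated into a Python 3 sketch: read the first row with `skipinitialspace=True`, strip each name, drop blank columns, and count duplicates the way the loader does before reporting DuplicateColumn (the `read_header` helper is illustrative only):

```python
import csv
import io
from collections import defaultdict

def read_header(text):
    """Return (cleaned header list, {name: count} for duplicated names)."""
    reader = csv.reader(io.StringIO(text), skipinitialspace=True)
    raw_header = next(reader)
    occurrences = defaultdict(int)
    header = []
    for h in raw_header:
        h = h.strip()
        if not h:
            continue  # blank header cell: the column would be skipped
        header.append(h)
        occurrences[h] += 1
    duplicates = {name: n for name, n in occurrences.items() if n > 1}
    return header, duplicates

header, dups = read_header("stop_id, stop_name ,stop_id\n1,A,1\n")
print(header, dups)
```

`skipinitialspace` only removes space after a delimiter, which is why the explicit `strip()` is still needed for trailing space in header cells.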
# TODO: Add testing for this specific function | |
def _ReadCSV(self, file_name, cols, required): | |
"""Reads lines from file_name, yielding a list of unicode values | |
corresponding to the column names in cols.""" | |
contents = self._GetUtf8Contents(file_name) | |
if not contents: | |
return | |
eol_checker = EndOfLineChecker(StringIO.StringIO(contents), | |
file_name, self._problems) | |
reader = csv.reader(eol_checker) # Use excel dialect | |
header = reader.next() | |
header = map(lambda x: x.strip(), header) # trim any whitespace | |
header_occurrences = defaultdict(lambda: 0) | |
for column_header in header: | |
header_occurrences[column_header] += 1 | |
for name, count in header_occurrences.items(): | |
if count > 1: | |
self._problems.DuplicateColumn( | |
header=name, | |
file_name=file_name, | |
count=count) | |
# check for unrecognized columns, which are often misspellings | |
unknown_cols = set(header).difference(set(cols)) | |
for col in unknown_cols: | |
# this is provided in order to create a nice colored list of | |
# columns in the validator output | |
context = (file_name, 1, [''] * len(header), header) | |
self._problems.UnrecognizedColumn(file_name, col, context) | |
col_index = [-1] * len(cols) | |
for i in range(len(cols)): | |
if cols[i] in header: | |
col_index[i] = header.index(cols[i]) | |
elif cols[i] in required: | |
self._problems.MissingColumn(file_name, cols[i]) | |
row_num = 1 | |
for row in reader: | |
row_num += 1 | |
if len(row) == 0: # skip extra empty lines in file | |
continue | |
if len(row) > len(header): | |
self._problems.OtherProblem('Found too many cells (commas) in line ' | |
'%d of file "%s". Every row in the file ' | |
'should have the same number of cells as ' | |
'the header (first line) does.' % | |
(row_num, file_name), (file_name, row_num), | |
type=TYPE_WARNING) | |
if len(row) < len(header): | |
self._problems.OtherProblem('Found missing cells (commas) in line ' | |
'%d of file "%s". Every row in the file ' | |
'should have the same number of cells as ' | |
'the header (first line) does.' % | |
(row_num, file_name), (file_name, row_num), | |
type=TYPE_WARNING) | |
result = [None] * len(cols) | |
unicode_error_columns = [] # A list of column numbers with an error | |
for i in range(len(cols)): | |
ci = col_index[i] | |
if ci >= 0: | |
if len(row) <= ci: # handle short CSV rows | |
result[i] = u'' | |
else: | |
try: | |
result[i] = row[ci].decode('utf-8').strip() | |
except UnicodeDecodeError: | |
# Replace all invalid characters with | |
# REPLACEMENT CHARACTER (U+FFFD) | |
result[i] = codecs.getdecoder("utf8")(row[ci], | |
errors="replace")[0].strip() | |
unicode_error_columns.append(i) | |
for i in unicode_error_columns: | |
self._problems.InvalidValue(cols[i], result[i], | |
'Unicode error', | |
(file_name, row_num, result, cols)) | |
yield (result, row_num, cols) | |
def _HasFile(self, file_name): | |
"""Returns True if there's a file in the current feed with the | |
given file_name in the current feed.""" | |
if self._zip: | |
return file_name in self._zip.namelist() | |
else: | |
file_path = os.path.join(self._path, file_name) | |
return os.path.exists(file_path) and os.path.isfile(file_path) | |
def _FileContents(self, file_name): | |
results = None | |
if self._zip: | |
try: | |
results = self._zip.read(file_name) | |
      except KeyError: # file not found in archive
self._problems.MissingFile(file_name) | |
return None | |
else: | |
try: | |
data_file = open(os.path.join(self._path, file_name), 'rb') | |
results = data_file.read() | |
except IOError: # file not found | |
self._problems.MissingFile(file_name) | |
return None | |
if not results: | |
self._problems.EmptyFile(file_name) | |
return results | |
def _LoadAgencies(self): | |
for (d, row_num, header, row) in self._ReadCsvDict('agency.txt', | |
Agency._FIELD_NAMES, | |
Agency._REQUIRED_FIELD_NAMES): | |
self._problems.SetFileContext('agency.txt', row_num, row, header) | |
agency = Agency(field_dict=d) | |
self._schedule.AddAgencyObject(agency, self._problems) | |
self._problems.ClearContext() | |
def _LoadStops(self): | |
for (d, row_num, header, row) in self._ReadCsvDict( | |
'stops.txt', | |
Stop._FIELD_NAMES, | |
Stop._REQUIRED_FIELD_NAMES): | |
self._problems.SetFileContext('stops.txt', row_num, row, header) | |
stop = Stop(field_dict=d) | |
stop.Validate(self._problems) | |
self._schedule.AddStopObject(stop, self._problems) | |
self._problems.ClearContext() | |
def _LoadRoutes(self): | |
for (d, row_num, header, row) in self._ReadCsvDict( | |
'routes.txt', | |
Route._FIELD_NAMES, | |
Route._REQUIRED_FIELD_NAMES): | |
self._problems.SetFileContext('routes.txt', row_num, row, header) | |
route = Route(field_dict=d) | |
self._schedule.AddRouteObject(route, self._problems) | |
self._problems.ClearContext() | |
def _LoadCalendar(self): | |
file_name = 'calendar.txt' | |
file_name_dates = 'calendar_dates.txt' | |
if not self._HasFile(file_name) and not self._HasFile(file_name_dates): | |
self._problems.MissingFile(file_name) | |
return | |
# map period IDs to (period object, (file_name, row_num, row, cols)) | |
periods = {} | |
# process calendar.txt | |
if self._HasFile(file_name): | |
has_useful_contents = False | |
for (row, row_num, cols) in \ | |
self._ReadCSV(file_name, | |
ServicePeriod._FIELD_NAMES, | |
ServicePeriod._FIELD_NAMES_REQUIRED): | |
context = (file_name, row_num, row, cols) | |
self._problems.SetFileContext(*context) | |
period = ServicePeriod(field_list=row) | |
if period.service_id in periods: | |
self._problems.DuplicateID('service_id', period.service_id) | |
else: | |
periods[period.service_id] = (period, context) | |
self._problems.ClearContext() | |
# process calendar_dates.txt | |
if self._HasFile(file_name_dates): | |
# ['service_id', 'date', 'exception_type'] | |
fields = ServicePeriod._FIELD_NAMES_CALENDAR_DATES | |
for (row, row_num, cols) in self._ReadCSV(file_name_dates, | |
fields, fields): | |
context = (file_name_dates, row_num, row, cols) | |
self._problems.SetFileContext(*context) | |
service_id = row[0] | |
period = None | |
if service_id in periods: | |
period = periods[service_id][0] | |
else: | |
period = ServicePeriod(service_id) | |
periods[period.service_id] = (period, context) | |
exception_type = row[2] | |
if exception_type == u'1': | |
period.SetDateHasService(row[1], True, self._problems) | |
elif exception_type == u'2': | |
period.SetDateHasService(row[1], False, self._problems) | |
else: | |
self._problems.InvalidValue('exception_type', exception_type) | |
self._problems.ClearContext() | |
# Now insert the periods into the schedule object, so that they're | |
# validated with both calendar and calendar_dates info present | |
for period, context in periods.values(): | |
self._problems.SetFileContext(*context) | |
self._schedule.AddServicePeriodObject(period, self._problems) | |
self._problems.ClearContext() | |
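The merge of calendar.txt and calendar_dates.txt above can be sketched with a dict-of-sets model in Python 3 (illustrative only; the real code builds ServicePeriod objects, and exception_type "1" adds service on a date while "2" removes it):

```python
def apply_calendar_dates(periods, rows):
    """Apply (service_id, date, exception_type) rows to a dict mapping
    service_id to a set of active dates, creating periods as needed."""
    for service_id, date, exception_type in rows:
        dates = periods.setdefault(service_id, set())
        if exception_type == "1":
            dates.add(date)
        elif exception_type == "2":
            dates.discard(date)
        else:
            raise ValueError("bad exception_type: %r" % exception_type)
    return periods

periods = {"WEEK": {"20100104", "20100105"}}
apply_calendar_dates(periods, [("WEEK", "20100101", "1"),
                               ("WEEK", "20100104", "2"),
                               ("SAT", "20100102", "1")])
print(periods)
```

As in _LoadCalendar, a service_id seen only in calendar_dates.txt still yields a usable period, which is why validation is deferred until both files have been read.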
def _LoadShapes(self): | |
if not self._HasFile('shapes.txt'): | |
return | |
shapes = {} # shape_id to tuple | |
for (row, row_num, cols) in self._ReadCSV('shapes.txt', | |
Shape._FIELD_NAMES, | |
Shape._REQUIRED_FIELD_NAMES): | |
file_context = ('shapes.txt', row_num, row, cols) | |
self._problems.SetFileContext(*file_context) | |
(shape_id, lat, lon, seq, dist) = row | |
if IsEmpty(shape_id): | |
self._problems.MissingValue('shape_id') | |
continue | |
try: | |
seq = int(seq) | |
except (TypeError, ValueError): | |
self._problems.InvalidValue('shape_pt_sequence', seq, | |
'Value should be a number (0 or higher)') | |
continue | |
shapes.setdefault(shape_id, []).append((seq, lat, lon, dist, file_context)) | |
self._problems.ClearContext() | |
for shape_id, points in shapes.items(): | |
shape = Shape(shape_id) | |
points.sort() | |
if points and points[0][0] < 0: | |
self._problems.InvalidValue('shape_pt_sequence', points[0][0], | |
'In shape %s, a negative sequence number ' | |
'%d was found; sequence numbers should be ' | |
'0 or higher.' % (shape_id, points[0][0])) | |
last_seq = None | |
for (seq, lat, lon, dist, file_context) in points: | |
if (seq == last_seq): | |
self._problems.SetFileContext(*file_context) | |
self._problems.InvalidValue('shape_pt_sequence', seq, | |
'The sequence number %d occurs more ' | |
'than once in shape %s.' % | |
(seq, shape_id)) | |
last_seq = seq | |
shape.AddPoint(lat, lon, dist, self._problems) | |
self._problems.ClearContext() | |
self._schedule.AddShapeObject(shape, self._problems) | |
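The sequence checks in _LoadShapes reduce to: sort the collected points by shape_pt_sequence, flag a negative first sequence, and flag repeats. A Python 3 sketch with plain (seq, lat, lon) tuples standing in for the real Shape plumbing (`check_shape_points` is a hypothetical name):

```python
def check_shape_points(points):
    """Sort shape points by sequence number and collect problems."""
    problems = []
    points = sorted(points)
    if points and points[0][0] < 0:
        problems.append("negative sequence %d" % points[0][0])
    last_seq = None
    for seq, lat, lon in points:
        if seq == last_seq:
            problems.append("sequence %d occurs more than once" % seq)
        last_seq = seq
    return points, problems

pts, probs = check_shape_points([(2, 36.9, -116.8), (1, 36.9, -116.7),
                                 (2, 36.9, -116.9)])
print(pts, probs)
```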
def _LoadTrips(self): | |
for (d, row_num, header, row) in self._ReadCsvDict( | |
'trips.txt', | |
Trip._FIELD_NAMES, | |
Trip._REQUIRED_FIELD_NAMES): | |
self._problems.SetFileContext('trips.txt', row_num, row, header) | |
trip = Trip(field_dict=d) | |
self._schedule.AddTripObject(trip, self._problems) | |
self._problems.ClearContext() | |
def _LoadFares(self): | |
if not self._HasFile('fare_attributes.txt'): | |
return | |
for (row, row_num, cols) in self._ReadCSV('fare_attributes.txt', | |
Fare._FIELD_NAMES, | |
Fare._REQUIRED_FIELD_NAMES): | |
self._problems.SetFileContext('fare_attributes.txt', row_num, row, cols) | |
fare = Fare(field_list=row) | |
self._schedule.AddFareObject(fare, self._problems) | |
self._problems.ClearContext() | |
def _LoadFareRules(self): | |
if not self._HasFile('fare_rules.txt'): | |
return | |
for (row, row_num, cols) in self._ReadCSV('fare_rules.txt', | |
FareRule._FIELD_NAMES, | |
FareRule._REQUIRED_FIELD_NAMES): | |
self._problems.SetFileContext('fare_rules.txt', row_num, row, cols) | |
rule = FareRule(field_list=row) | |
self._schedule.AddFareRuleObject(rule, self._problems) | |
self._problems.ClearContext() | |
def _LoadHeadways(self): | |
file_name = 'frequencies.txt' | |
if not self._HasFile(file_name): # headways are an optional feature | |
return | |
# ['trip_id', 'start_time', 'end_time', 'headway_secs'] | |
fields = Trip._FIELD_NAMES_HEADWAY | |
modified_trips = {} | |
for (row, row_num, cols) in self._ReadCSV(file_name, fields, fields): | |
self._problems.SetFileContext(file_name, row_num, row, cols) | |
(trip_id, start_time, end_time, headway_secs) = row | |
try: | |
trip = self._schedule.GetTrip(trip_id) | |
trip.AddHeadwayPeriod(start_time, end_time, headway_secs, | |
self._problems) | |
modified_trips[trip_id] = trip | |
except KeyError: | |
self._problems.InvalidValue('trip_id', trip_id) | |
self._problems.ClearContext() | |
for trip in modified_trips.values(): | |
trip.Validate(self._problems) | |
def _LoadStopTimes(self): | |
for (row, row_num, cols) in self._ReadCSV('stop_times.txt', | |
StopTime._FIELD_NAMES, | |
StopTime._REQUIRED_FIELD_NAMES): | |
file_context = ('stop_times.txt', row_num, row, cols) | |
self._problems.SetFileContext(*file_context) | |
(trip_id, arrival_time, departure_time, stop_id, stop_sequence, | |
stop_headsign, pickup_type, drop_off_type, shape_dist_traveled) = row | |
try: | |
sequence = int(stop_sequence) | |
except (TypeError, ValueError): | |
self._problems.InvalidValue('stop_sequence', stop_sequence, | |
'This should be a number.') | |
continue | |
if sequence < 0: | |
self._problems.InvalidValue('stop_sequence', sequence, | |
'Sequence numbers should be 0 or higher.') | |
if stop_id not in self._schedule.stops: | |
self._problems.InvalidValue('stop_id', stop_id, | |
'This value wasn\'t defined in stops.txt') | |
continue | |
stop = self._schedule.stops[stop_id] | |
if trip_id not in self._schedule.trips: | |
self._problems.InvalidValue('trip_id', trip_id, | |
'This value wasn\'t defined in trips.txt') | |
continue | |
trip = self._schedule.trips[trip_id] | |
      # If self._problems.Report returns (rather than raising) then
      # StopTime.__init__ will return even when the StopTime data is invalid,
      # so this code may add a StopTime that failed validation to the
      # database. Trip.GetStopTimes then tries to make a StopTime from the
      # invalid data and calls the problem reporter for errors. An ugly
      # solution is to wrap problems; a better solution is to move all
      # validation out of __init__. For now, make sure Trip.GetStopTimes gets
      # a problem reporter when called from Trip.Validate.
stop_time = StopTime(self._problems, stop, arrival_time, | |
departure_time, stop_headsign, | |
pickup_type, drop_off_type, | |
shape_dist_traveled, stop_sequence=sequence) | |
trip._AddStopTimeObjectUnordered(stop_time, self._schedule) | |
self._problems.ClearContext() | |
# stop_times are validated in Trip.ValidateChildren, called by | |
# Schedule.Validate | |
def _LoadTransfers(self): | |
file_name = 'transfers.txt' | |
if not self._HasFile(file_name): # transfers are an optional feature | |
return | |
for (d, row_num, header, row) in self._ReadCsvDict(file_name, | |
Transfer._FIELD_NAMES, | |
Transfer._REQUIRED_FIELD_NAMES): | |
self._problems.SetFileContext(file_name, row_num, row, header) | |
transfer = Transfer(field_dict=d) | |
self._schedule.AddTransferObject(transfer, self._problems) | |
self._problems.ClearContext() | |
def Load(self): | |
self._problems.ClearContext() | |
if not self._DetermineFormat(): | |
return self._schedule | |
self._CheckFileNames() | |
self._LoadAgencies() | |
self._LoadStops() | |
self._LoadRoutes() | |
self._LoadCalendar() | |
self._LoadShapes() | |
self._LoadTrips() | |
self._LoadHeadways() | |
if self._load_stop_times: | |
self._LoadStopTimes() | |
self._LoadFares() | |
self._LoadFareRules() | |
self._LoadTransfers() | |
if self._zip: | |
self._zip.close() | |
self._zip = None | |
if self._extra_validation: | |
self._schedule.Validate(self._problems, validate_children=False) | |
return self._schedule | |
class ShapeLoader(Loader): | |
"""A subclass of Loader that only loads the shapes from a GTFS file.""" | |
def __init__(self, *args, **kwargs): | |
"""Initialize a new ShapeLoader object. | |
See Loader.__init__ for argument documentation. | |
""" | |
Loader.__init__(self, *args, **kwargs) | |
def Load(self): | |
self._LoadShapes() | |
return self._schedule | |
#!/usr/bin/python2.4 | |
# | |
# Copyright 2007 Google Inc. All Rights Reserved. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""A library for manipulating points and polylines. | |
This is a library for creating and manipulating points on the unit | |
sphere, as an approximate model of Earth. The primary use of this | |
library is to make manipulation and matching of polylines easy in the | |
transitfeed library. | |
NOTE: in this library, Earth is modelled as a sphere, whereas | |
GTFS specifies that latitudes and longitudes are in WGS84. For the | |
purpose of comparing and matching latitudes and longitudes that | |
are relatively close together on the surface of the earth, this | |
is adequate; for other purposes, this library may not be accurate | |
enough. | |
""" | |
__author__ = 'chris.harrelson.code@gmail.com (Chris Harrelson)' | |
import copy | |
import decimal | |
import heapq | |
import math | |
class ShapeError(Exception): | |
"""Thrown whenever there is a shape parsing error.""" | |
pass | |
EARTH_RADIUS_METERS = 6371010.0 | |
class Point(object): | |
""" | |
A class representing a point on the unit sphere in three dimensions. | |
""" | |
def __init__(self, x, y, z): | |
self.x = x | |
self.y = y | |
self.z = z | |
def __hash__(self): | |
return hash((self.x, self.y, self.z)) | |
def __cmp__(self, other): | |
if not isinstance(other, Point): | |
raise TypeError('Point.__cmp__(x,y) requires y to be a "Point", ' | |
'not a "%s"' % type(other).__name__) | |
return cmp((self.x, self.y, self.z), (other.x, other.y, other.z)) | |
def __str__(self): | |
return "(%.15f, %.15f, %.15f) " % (self.x, self.y, self.z) | |
  def Norm2(self):
    """
    Returns the L_2 (Euclidean) norm of self.
    """
    sum_sq = self.x * self.x + self.y * self.y + self.z * self.z
    return math.sqrt(float(sum_sq))
def IsUnitLength(self): | |
return abs(self.Norm2() - 1.0) < 1e-14 | |
def Plus(self, other): | |
""" | |
Returns a new point which is the pointwise sum of self and other. | |
""" | |
return Point(self.x + other.x, | |
self.y + other.y, | |
self.z + other.z) | |
def Minus(self, other): | |
""" | |
Returns a new point which is the pointwise subtraction of other from | |
self. | |
""" | |
return Point(self.x - other.x, | |
self.y - other.y, | |
self.z - other.z) | |
def DotProd(self, other): | |
""" | |
Returns the (scalar) dot product of self with other. | |
""" | |
return self.x * other.x + self.y * other.y + self.z * other.z | |
def Times(self, val): | |
""" | |
Returns a new point which is pointwise multiplied by val. | |
""" | |
return Point(self.x * val, self.y * val, self.z * val) | |
def Normalize(self): | |
""" | |
Returns a unit point in the same direction as self. | |
""" | |
return self.Times(1 / self.Norm2()) | |
def RobustCrossProd(self, other): | |
""" | |
A robust version of cross product. If self and other | |
are not nearly the same point, returns the same value | |
as CrossProd() modulo normalization. Otherwise returns | |
an arbitrary unit point orthogonal to self. | |
""" | |
assert(self.IsUnitLength() and other.IsUnitLength()) | |
x = self.Plus(other).CrossProd(other.Minus(self)) | |
if abs(x.x) > 1e-15 or abs(x.y) > 1e-15 or abs(x.z) > 1e-15: | |
return x.Normalize() | |
else: | |
return self.Ortho() | |
def LargestComponent(self): | |
""" | |
Returns (i, val) where i is the component index (0 - 2) | |
which has largest absolute value and val is the value | |
of the component. | |
""" | |
if abs(self.x) > abs(self.y): | |
if abs(self.x) > abs(self.z): | |
return (0, self.x) | |
else: | |
return (2, self.z) | |
else: | |
if abs(self.y) > abs(self.z): | |
return (1, self.y) | |
else: | |
return (2, self.z) | |
def Ortho(self): | |
"""Returns a unit-length point orthogonal to this point""" | |
(index, val) = self.LargestComponent() | |
index = index - 1 | |
if index < 0: | |
index = 2 | |
temp = Point(0.012, 0.053, 0.00457) | |
if index == 0: | |
temp.x = 1 | |
elif index == 1: | |
temp.y = 1 | |
elif index == 2: | |
temp.z = 1 | |
return self.CrossProd(temp).Normalize() | |
def CrossProd(self, other): | |
""" | |
Returns the cross product of self and other. | |
""" | |
return Point( | |
self.y * other.z - self.z * other.y, | |
self.z * other.x - self.x * other.z, | |
self.x * other.y - self.y * other.x) | |
@staticmethod | |
def _approxEq(a, b): | |
return abs(a - b) < 1e-11 | |
def Equals(self, other): | |
""" | |
    Returns True if self and other are approximately equal.
""" | |
return (self._approxEq(self.x, other.x) | |
and self._approxEq(self.y, other.y) | |
and self._approxEq(self.z, other.z)) | |
def Angle(self, other): | |
""" | |
Returns the angle in radians between self and other. | |
""" | |
return math.atan2(self.CrossProd(other).Norm2(), | |
self.DotProd(other)) | |
def ToLatLng(self): | |
""" | |
    Returns the latitude and longitude that this point represents
under a spherical Earth model. | |
""" | |
rad_lat = math.atan2(self.z, math.sqrt(self.x * self.x + self.y * self.y)) | |
rad_lng = math.atan2(self.y, self.x) | |
return (rad_lat * 180.0 / math.pi, rad_lng * 180.0 / math.pi) | |
@staticmethod | |
def FromLatLng(lat, lng): | |
""" | |
Returns a new point representing this latitude and longitude under | |
a spherical Earth model. | |
""" | |
phi = lat * (math.pi / 180.0) | |
theta = lng * (math.pi / 180.0) | |
cosphi = math.cos(phi) | |
return Point(math.cos(theta) * cosphi, | |
math.sin(theta) * cosphi, | |
math.sin(phi)) | |
def GetDistanceMeters(self, other): | |
assert(self.IsUnitLength() and other.IsUnitLength()) | |
return self.Angle(other) * EARTH_RADIUS_METERS | |
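# The math in FromLatLng, Angle and GetDistanceMeters above can be sketched
# standalone. This is an illustrative round trip under the same spherical
# model; the function names below are hypothetical, not part of this module.

```python
import math

EARTH_RADIUS_METERS = 6371010.0  # mean Earth radius used by this module

def latlng_to_xyz(lat, lng):
    """Convert degrees latitude/longitude to a unit vector, as FromLatLng does."""
    phi = math.radians(lat)
    theta = math.radians(lng)
    return (math.cos(theta) * math.cos(phi),
            math.sin(theta) * math.cos(phi),
            math.sin(phi))

def great_circle_meters(p, q):
    """Angle between unit vectors via atan2(|p x q|, p . q), scaled to meters."""
    cross = (p[1] * q[2] - p[2] * q[1],
             p[2] * q[0] - p[0] * q[2],
             p[0] * q[1] - p[1] * q[0])
    sin_angle = math.sqrt(sum(c * c for c in cross))
    cos_angle = sum(a * b for a, b in zip(p, q))
    return math.atan2(sin_angle, cos_angle) * EARTH_RADIUS_METERS
```

# Using atan2 rather than acos of the dot product keeps the angle accurate
# for nearly identical points, which is the common case when matching stops.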
def SimpleCCW(a, b, c): | |
""" | |
Returns true if the triangle abc is oriented counterclockwise. | |
""" | |
return c.CrossProd(a).DotProd(b) > 0 | |
def GetClosestPoint(x, a, b): | |
""" | |
Returns the point on the great circle segment ab closest to x. | |
""" | |
assert(x.IsUnitLength()) | |
assert(a.IsUnitLength()) | |
assert(b.IsUnitLength()) | |
a_cross_b = a.RobustCrossProd(b) | |
# project to the great circle going through a and b | |
p = x.Minus( | |
a_cross_b.Times( | |
x.DotProd(a_cross_b) / a_cross_b.Norm2())) | |
# if p lies between a and b, return it | |
if SimpleCCW(a_cross_b, a, p) and SimpleCCW(p, b, a_cross_b): | |
return p.Normalize() | |
# otherwise return the closer of a or b | |
if x.Minus(a).Norm2() <= x.Minus(b).Norm2(): | |
return a | |
else: | |
return b | |
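# The projection step in GetClosestPoint above can be sketched on bare
# 3-tuples: subtracting from x its component along the circle's unit normal
# n = a x b yields the nearest point on the great circle through a and b.
# The helper name below is hypothetical.

```python
import math

def project_to_great_circle(x, n):
    """Remove x's component along unit normal n, then renormalize to the sphere."""
    d = sum(xi * ni for xi, ni in zip(x, n))    # x . n
    p = [xi - d * ni for xi, ni in zip(x, n)]   # projection onto the plane
    norm = math.sqrt(sum(pi * pi for pi in p))
    return tuple(pi / norm for pi in p)
```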
class Poly(object): | |
""" | |
A class representing a polyline. | |
""" | |
  def __init__(self, points=(), name=None):
self._points = list(points) | |
self._name = name | |
def AddPoint(self, p): | |
""" | |
Adds a new point to the end of the polyline. | |
""" | |
assert(p.IsUnitLength()) | |
self._points.append(p) | |
def GetName(self): | |
return self._name | |
def GetPoint(self, i): | |
return self._points[i] | |
def GetPoints(self): | |
return self._points | |
def GetNumPoints(self): | |
return len(self._points) | |
def _GetPointSafe(self, i): | |
try: | |
return self.GetPoint(i) | |
except IndexError: | |
return None | |
def GetClosestPoint(self, p): | |
""" | |
Returns (closest_p, closest_i), where closest_p is the closest point | |
to p on the piecewise linear curve represented by the polyline, | |
and closest_i is the index of the point on the polyline just before | |
the polyline segment that contains closest_p. | |
""" | |
assert(len(self._points) > 0) | |
closest_point = self._points[0] | |
closest_i = 0 | |
for i in range(0, len(self._points) - 1): | |
(a, b) = (self._points[i], self._points[i+1]) | |
cur_closest_point = GetClosestPoint(p, a, b) | |
if p.Angle(cur_closest_point) < p.Angle(closest_point): | |
closest_point = cur_closest_point.Normalize() | |
closest_i = i | |
return (closest_point, closest_i) | |
def LengthMeters(self): | |
"""Return length of this polyline in meters.""" | |
assert(len(self._points) > 0) | |
length = 0 | |
for i in range(0, len(self._points) - 1): | |
length += self._points[i].GetDistanceMeters(self._points[i+1]) | |
return length | |
def Reversed(self): | |
"""Return a polyline that is the reverse of this polyline.""" | |
return Poly(reversed(self.GetPoints()), self.GetName()) | |
def CutAtClosestPoint(self, p): | |
""" | |
Let x be the point on the polyline closest to p. Then | |
CutAtClosestPoint returns two new polylines, one representing | |
the polyline from the beginning up to x, and one representing | |
x onwards to the end of the polyline. x is the first point | |
returned in the second polyline. | |
""" | |
(closest, i) = self.GetClosestPoint(p) | |
tmp = [closest] | |
tmp.extend(self._points[i+1:]) | |
return (Poly(self._points[0:i+1]), | |
Poly(tmp)) | |
def GreedyPolyMatchDist(self, shape): | |
""" | |
Tries a greedy matching algorithm to match self to the | |
given shape. Returns the maximum distance in meters of | |
any point in self to its matched point in shape under the | |
algorithm. | |
Args: shape, a Poly object. | |
""" | |
tmp_shape = Poly(shape.GetPoints()) | |
max_radius = 0 | |
for (i, point) in enumerate(self._points): | |
tmp_shape = tmp_shape.CutAtClosestPoint(point)[1] | |
dist = tmp_shape.GetPoint(0).GetDistanceMeters(point) | |
max_radius = max(max_radius, dist) | |
return max_radius | |
@staticmethod | |
def MergePolys(polys, merge_point_threshold=10): | |
""" | |
    Merge multiple polylines, in the order in which they were passed.
    The merged polyline is named with the names of its component parts,
    joined by ';'.
    Example: merging [a,b], [c,d] and [e,f] will result in [a,b,c,d,e,f].
    However, if the endpoints of two adjacent polylines are less than
    merge_point_threshold meters apart, only the first endpoint is kept in
    the merged polyline.
""" | |
name = ";".join((p.GetName(), '')[p.GetName() is None] for p in polys) | |
merged = Poly([], name) | |
if polys: | |
first_poly = polys[0] | |
for p in first_poly.GetPoints(): | |
merged.AddPoint(p) | |
last_point = merged._GetPointSafe(-1) | |
for poly in polys[1:]: | |
first_point = poly._GetPointSafe(0) | |
if (last_point and first_point and | |
last_point.GetDistanceMeters(first_point) <= merge_point_threshold): | |
points = poly.GetPoints()[1:] | |
else: | |
points = poly.GetPoints() | |
for p in points: | |
merged.AddPoint(p) | |
last_point = merged._GetPointSafe(-1) | |
return merged | |
def __str__(self): | |
return self._ToString(str) | |
def ToLatLngString(self): | |
return self._ToString(lambda p: str(p.ToLatLng())) | |
def _ToString(self, pointToStringFn): | |
return "%s: %s" % (self.GetName() or "", | |
", ".join([pointToStringFn(p) for p in self._points])) | |
class PolyCollection(object): | |
""" | |
A class representing a collection of polylines. | |
""" | |
  def __init__(self):
    self._name_to_shape = {}
def AddPoly(self, poly, smart_duplicate_handling=True): | |
""" | |
Adds a new polyline to the collection. | |
""" | |
inserted_name = poly.GetName() | |
if poly.GetName() in self._name_to_shape: | |
if not smart_duplicate_handling: | |
raise ShapeError("Duplicate shape found: " + poly.GetName()) | |
print ("Warning: duplicate shape id being added to collection: " + | |
poly.GetName()) | |
if poly.GreedyPolyMatchDist(self._name_to_shape[poly.GetName()]) < 10: | |
print " (Skipping as it apears to be an exact duplicate)" | |
else: | |
print " (Adding new shape variant with uniquified name)" | |
inserted_name = "%s-%d" % (inserted_name, len(self._name_to_shape)) | |
self._name_to_shape[inserted_name] = poly | |
def NumPolys(self): | |
return len(self._name_to_shape) | |
def FindMatchingPolys(self, start_point, end_point, max_radius=150): | |
""" | |
Returns a list of polylines in the collection that have endpoints | |
within max_radius of the given start and end points. | |
""" | |
matches = [] | |
for shape in self._name_to_shape.itervalues(): | |
if start_point.GetDistanceMeters(shape.GetPoint(0)) < max_radius and \ | |
end_point.GetDistanceMeters(shape.GetPoint(-1)) < max_radius: | |
matches.append(shape) | |
return matches | |
class PolyGraph(PolyCollection): | |
""" | |
A class representing a graph where the edges are polylines. | |
""" | |
def __init__(self): | |
PolyCollection.__init__(self) | |
self._nodes = {} | |
def AddPoly(self, poly, smart_duplicate_handling=True): | |
PolyCollection.AddPoly(self, poly, smart_duplicate_handling) | |
start_point = poly.GetPoint(0) | |
end_point = poly.GetPoint(-1) | |
self._AddNodeWithEdge(start_point, poly) | |
self._AddNodeWithEdge(end_point, poly) | |
def _AddNodeWithEdge(self, point, edge): | |
if point in self._nodes: | |
self._nodes[point].add(edge) | |
else: | |
self._nodes[point] = set([edge]) | |
def ShortestPath(self, start, goal): | |
"""Uses the A* algorithm to find a shortest path between start and goal. | |
For more background see http://en.wikipedia.org/wiki/A-star_algorithm | |
Some definitions: | |
g(x): The actual shortest distance traveled from initial node to current | |
node. | |
h(x): The estimated (or "heuristic") distance from current node to goal. | |
We use the distance on Earth from node to goal as the heuristic. | |
This heuristic is both admissible and monotonic (see wikipedia for | |
more details). | |
f(x): The sum of g(x) and h(x), used to prioritize elements to look at. | |
Arguments: | |
start: Point that is in the graph, start point of the search. | |
goal: Point that is in the graph, end point for the search. | |
Returns: | |
A Poly object representing the shortest polyline through the graph from | |
start to goal, or None if no path found. | |
""" | |
assert start in self._nodes | |
assert goal in self._nodes | |
closed_set = set() # Set of nodes already evaluated. | |
open_heap = [(0, start)] # Nodes to visit, heapified by f(x). | |
open_set = set([start]) # Same as open_heap, but a set instead of a heap. | |
g_scores = { start: 0 } # Distance from start along optimal path | |
came_from = {} # Map to reconstruct optimal path once we're done. | |
while open_set: | |
(f_x, x) = heapq.heappop(open_heap) | |
open_set.remove(x) | |
if x == goal: | |
return self._ReconstructPath(came_from, goal) | |
closed_set.add(x) | |
edges = self._nodes[x] | |
for edge in edges: | |
if edge.GetPoint(0) == x: | |
y = edge.GetPoint(-1) | |
else: | |
y = edge.GetPoint(0) | |
if y in closed_set: | |
continue | |
tentative_g_score = g_scores[x] + edge.LengthMeters() | |
tentative_is_better = False | |
if y not in open_set: | |
h_y = y.GetDistanceMeters(goal) | |
f_y = tentative_g_score + h_y | |
open_set.add(y) | |
heapq.heappush(open_heap, (f_y, y)) | |
tentative_is_better = True | |
elif tentative_g_score < g_scores[y]: | |
tentative_is_better = True | |
if tentative_is_better: | |
came_from[y] = (x, edge) | |
g_scores[y] = tentative_g_score | |
return None | |
def _ReconstructPath(self, came_from, current_node): | |
""" | |
Helper method for ShortestPath, to reconstruct path. | |
Arguments: | |
came_from: a dictionary mapping Point to (Point, Poly) tuples. | |
This dictionary keeps track of the previous neighbor to a node, and | |
the edge used to get from the previous neighbor to the node. | |
current_node: the current Point in the path. | |
Returns: | |
A Poly that represents the path through the graph from the start of the | |
search to current_node. | |
""" | |
if current_node in came_from: | |
(previous_node, previous_edge) = came_from[current_node] | |
if previous_edge.GetPoint(0) == current_node: | |
previous_edge = previous_edge.Reversed() | |
p = self._ReconstructPath(came_from, previous_node) | |
return Poly.MergePolys([p, previous_edge], merge_point_threshold=0) | |
else: | |
return Poly([], '') | |
def FindShortestMultiPointPath(self, points, max_radius=150, keep_best_n=10, | |
verbosity=0): | |
""" | |
Return a polyline, representing the shortest path through this graph that | |
has edge endpoints on each of a given list of points in sequence. We allow | |
fuzziness in matching of input points to points in this graph. | |
    As a greedy optimization, only the keep_best_n best partial paths are
    kept at any point in the search.
""" | |
assert len(points) > 1 | |
nearby_points = [] | |
paths_found = [] # A heap sorted by inverse path length. | |
for i, point in enumerate(points): | |
nearby = [p for p in self._nodes.iterkeys() | |
if p.GetDistanceMeters(point) < max_radius] | |
if verbosity >= 2: | |
print ("Nearby points for point %d %s: %s" | |
% (i + 1, | |
str(point.ToLatLng()), | |
", ".join([str(n.ToLatLng()) for n in nearby]))) | |
if nearby: | |
nearby_points.append(nearby) | |
else: | |
print "No nearby points found for point %s" % str(point.ToLatLng()) | |
return None | |
pathToStr = lambda start, end, path: (" Best path %s -> %s: %s" | |
% (str(start.ToLatLng()), | |
str(end.ToLatLng()), | |
path and path.GetName() or | |
"None")) | |
if verbosity >= 3: | |
print "Step 1" | |
step = 2 | |
start_points = nearby_points[0] | |
end_points = nearby_points[1] | |
for start in start_points: | |
for end in end_points: | |
path = self.ShortestPath(start, end) | |
if verbosity >= 3: | |
print pathToStr(start, end, path) | |
PolyGraph._AddPathToHeap(paths_found, path, keep_best_n) | |
for possible_points in nearby_points[2:]: | |
if verbosity >= 3: | |
print "\nStep %d" % step | |
step += 1 | |
new_paths_found = [] | |
start_end_paths = {} # cache of shortest paths between (start, end) pairs | |
for score, path in paths_found: | |
start = path.GetPoint(-1) | |
for end in possible_points: | |
if (start, end) in start_end_paths: | |
new_segment = start_end_paths[(start, end)] | |
else: | |
new_segment = self.ShortestPath(start, end) | |
if verbosity >= 3: | |
print pathToStr(start, end, new_segment) | |
start_end_paths[(start, end)] = new_segment | |
if new_segment: | |
new_path = Poly.MergePolys([path, new_segment], | |
merge_point_threshold=0) | |
PolyGraph._AddPathToHeap(new_paths_found, new_path, keep_best_n) | |
paths_found = new_paths_found | |
if paths_found: | |
best_score, best_path = max(paths_found) | |
return best_path | |
else: | |
return None | |
@staticmethod | |
  def _AddPathToHeap(heap, path, keep_best_n):
    if path and path.GetNumPoints():
      new_item = (-path.LengthMeters(), path)
      if new_item not in heap:
        if len(heap) < keep_best_n:
          heapq.heappush(heap, new_item)
        elif new_item > heap[0]:
          # Only evict the current worst (longest) path when the new path is
          # shorter; an unconditional heapreplace could swap a better path
          # for a worse one.
          heapq.heapreplace(heap, new_item)
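# The A* loop in PolyGraph.ShortestPath above can be sketched on a plain
# weighted graph. This is a simplified, hypothetical illustration (not the
# PolyGraph API): nodes are hashable values, edges carry explicit lengths,
# and the heuristic h() defaults to zero (plain Dijkstra), where PolyGraph
# uses the straight-line distance to the goal.

```python
import heapq

def a_star(edges, start, goal, h=lambda node: 0):
    """edges: dict node -> list of (neighbor, length) pairs.
    Returns the node list of a shortest path, or None if goal is unreachable."""
    g = {start: 0}                    # g(x): best known distance from start
    came_from = {}                    # for path reconstruction
    open_heap = [(h(start), start)]   # prioritized by f(x) = g(x) + h(x)
    closed = set()
    while open_heap:
        _, x = heapq.heappop(open_heap)
        if x == goal:
            path = [goal]
            while path[-1] in came_from:
                path.append(came_from[path[-1]])
            return path[::-1]
        if x in closed:
            continue
        closed.add(x)
        for y, length in edges.get(x, []):
            tentative = g[x] + length
            if y not in g or tentative < g[y]:
                g[y] = tentative
                came_from[y] = x
                heapq.heappush(open_heap, (tentative + h(y), y))
    return None

toy_graph = {'a': [('b', 1), ('c', 4)], 'b': [('c', 1)], 'c': []}
```

# Unlike ShortestPath, this variant pushes duplicate heap entries and skips
# already-closed nodes on pop instead of maintaining a separate open set.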
#!/usr/bin/python2.5 | |
# Copyright (C) 2009 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
import optparse | |
import sys | |
class OptionParserLongError(optparse.OptionParser): | |
"""OptionParser subclass that includes list of options above error message.""" | |
def error(self, msg): | |
print >>sys.stderr, self.format_help() | |
print >>sys.stderr, '\n\n%s: error: %s\n\n' % (self.get_prog_name(), msg) | |
sys.exit(2) | |
def RunWithCrashHandler(f): | |
try: | |
exit_code = f() | |
sys.exit(exit_code) | |
except (SystemExit, KeyboardInterrupt): | |
raise | |
except: | |
import inspect | |
import traceback | |
# Save trace and exception now. These calls look at the most recently | |
# raised exception. The code that makes the report might trigger other | |
# exceptions. | |
original_trace = inspect.trace(3)[1:] | |
formatted_exception = traceback.format_exception_only(*(sys.exc_info()[:2])) | |
apology = """Yikes, the program threw an unexpected exception! | |
Hopefully a complete report has been saved to transitfeedcrash.txt, | |
though if you are seeing this message we've already disappointed you once | |
today. Please include the report in a new issue at | |
http://code.google.com/p/googletransitdatafeed/issues/entry | |
or an email to the public group googletransitdatafeed@googlegroups.com. Sorry! | |
""" | |
dashes = '%s\n' % ('-' * 60) | |
dump = [] | |
dump.append(apology) | |
dump.append(dashes) | |
try: | |
import transitfeed | |
dump.append("transitfeed version %s\n\n" % transitfeed.__version__) | |
    except (ImportError, AttributeError):
# Oh well, guess we won't put the version in the report | |
pass | |
for (frame_obj, filename, line_num, fun_name, context_lines, | |
context_index) in original_trace: | |
dump.append('File "%s", line %d, in %s\n' % (filename, line_num, | |
fun_name)) | |
if context_lines: | |
for (i, line) in enumerate(context_lines): | |
if i == context_index: | |
dump.append(' --> %s' % line) | |
else: | |
dump.append(' %s' % line) | |
for local_name, local_val in frame_obj.f_locals.items(): | |
try: | |
truncated_val = str(local_val)[0:500] | |
except Exception, e: | |
dump.append(' Exception in str(%s): %s' % (local_name, e)) | |
else: | |
if len(truncated_val) >= 500: | |
truncated_val = '%s...' % truncated_val[0:499] | |
dump.append(' %s = %s\n' % (local_name, truncated_val)) | |
dump.append('\n') | |
dump.append(''.join(formatted_exception)) | |
open('transitfeedcrash.txt', 'w').write(''.join(dump)) | |
print ''.join(dump) | |
print dashes | |
print apology | |
try: | |
raw_input('Press enter to continue...') | |
except EOFError: | |
# Ignore stdin being closed. This happens during some tests. | |
pass | |
sys.exit(127) | |
# Pick one of two defaultdict implementations. A native version was added to | |
# the collections library in python 2.5. If that is not available use Jason's | |
# pure python recipe. He gave us permission to distribute it. | |
# On Mon, Nov 30, 2009 at 07:27, jason kirtland <jek at discorporate.us> wrote: | |
# > | |
# > Hi Tom, sure thing! It's not easy to find on the cookbook site, but the | |
# > recipe is under the Python license. | |
# > | |
# > Cheers, | |
# > Jason | |
# > | |
# > On Thu, Nov 26, 2009 at 3:03 PM, Tom Brown <tom.brown.code@gmail.com> wrote: | |
# > | |
# >> I would like to include http://code.activestate.com/recipes/523034/ in | |
# >> http://code.google.com/p/googletransitdatafeed/wiki/TransitFeedDistribution | |
# >> which is distributed under the Apache License, Version 2.0 with Copyright | |
# >> Google. May we include your code with a comment in the source pointing at | |
# >> the original URL? Thanks, Tom Brown | |
try: | |
# Try the native implementation first | |
from collections import defaultdict | |
except ImportError:
# Fallback for python2.4, which didn't include collections.defaultdict | |
class defaultdict(dict): | |
def __init__(self, default_factory=None, *a, **kw): | |
if (default_factory is not None and | |
not hasattr(default_factory, '__call__')): | |
raise TypeError('first argument must be callable') | |
dict.__init__(self, *a, **kw) | |
self.default_factory = default_factory | |
def __getitem__(self, key): | |
try: | |
return dict.__getitem__(self, key) | |
except KeyError: | |
return self.__missing__(key) | |
def __missing__(self, key): | |
if self.default_factory is None: | |
raise KeyError(key) | |
self[key] = value = self.default_factory() | |
return value | |
def __reduce__(self): | |
if self.default_factory is None: | |
args = tuple() | |
else: | |
args = self.default_factory, | |
return type(self), args, None, None, self.items() | |
def copy(self): | |
return self.__copy__() | |
def __copy__(self): | |
return type(self)(self.default_factory, self) | |
def __deepcopy__(self, memo): | |
import copy | |
return type(self)(self.default_factory, | |
copy.deepcopy(self.items())) | |
def __repr__(self): | |
return 'defaultdict(%s, %s)' % (self.default_factory, | |
dict.__repr__(self)) | |
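# Both the native defaultdict and the fallback above share the same contract:
# a missing key is materialized by calling default_factory. A small usage
# sketch (names here are illustrative only):

```python
from collections import defaultdict  # the fallback class mimics this one

# Group words by first letter; missing keys are created on first access.
groups = defaultdict(list)
for word in ['apple', 'avocado', 'banana']:
    groups[word[0]].append(word)
```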
#!/usr/bin/python | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""Validates a GTFS file. | |
For usage information run feedvalidator.py --help | |
""" | |
import bisect | |
import codecs | |
import datetime | |
from transitfeed.util import defaultdict | |
import optparse | |
import os | |
import os.path | |
import re | |
import socket | |
import sys | |
import time | |
import transitfeed | |
from transitfeed import TYPE_ERROR, TYPE_WARNING | |
from urllib2 import Request, urlopen, HTTPError, URLError | |
from transitfeed import util | |
import webbrowser | |
SVN_TAG_URL = 'http://googletransitdatafeed.googlecode.com/svn/tags/' | |
def MaybePluralizeWord(count, word): | |
if count == 1: | |
return word | |
else: | |
return word + 's' | |
def PrettyNumberWord(count, word): | |
return '%d %s' % (count, MaybePluralizeWord(count, word)) | |
def UnCamelCase(camel): | |
return re.sub(r'([a-z])([A-Z])', r'\1 \2', camel) | |
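# UnCamelCase above inserts a space at each lower-to-upper boundary via a
# backreferencing re.sub. A standalone sketch of the same substitution
# (the function name here is hypothetical):

```python
import re

def un_camel_case(camel):
    # Insert a space wherever a lowercase letter is followed by an uppercase one.
    return re.sub(r'([a-z])([A-Z])', r'\1 \2', camel)
```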
def ProblemCountText(error_count, warning_count): | |
results = [] | |
if error_count: | |
results.append(PrettyNumberWord(error_count, 'error')) | |
if warning_count: | |
results.append(PrettyNumberWord(warning_count, 'warning')) | |
return ' and '.join(results) | |
def CalendarSummary(schedule): | |
today = datetime.date.today() | |
summary_end_date = today + datetime.timedelta(days=60) | |
start_date, end_date = schedule.GetDateRange() | |
if not start_date or not end_date: | |
return {} | |
try: | |
start_date_object = transitfeed.DateStringToDateObject(start_date) | |
end_date_object = transitfeed.DateStringToDateObject(end_date) | |
except ValueError: | |
return {} | |
# Get the list of trips only during the period the feed is active. | |
  # As such we have to check whether it starts in the future and/or
  # ends in less than 60 days.
date_trips_departures = schedule.GenerateDateTripsDeparturesList( | |
max(today, start_date_object), | |
min(summary_end_date, end_date_object)) | |
if not date_trips_departures: | |
return {} | |
  # Check that the dates which will be shown in the summary agree with these
  # calculations. A failure here implies a bug which should be fixed; users
  # shouldn't see an assertion failure, but if they do the bug is at least
  # likely to be noticed and fixed.
assert start_date <= date_trips_departures[0][0].strftime("%Y%m%d") | |
assert end_date >= date_trips_departures[-1][0].strftime("%Y%m%d") | |
# Generate a map from int number of trips in a day to a list of date objects | |
# with that many trips. The list of dates is sorted. | |
  trips_dates = defaultdict(list)
trips = 0 | |
for date, day_trips, day_departures in date_trips_departures: | |
trips += day_trips | |
trips_dates[day_trips].append(date) | |
mean_trips = trips / len(date_trips_departures) | |
max_trips = max(trips_dates.keys()) | |
min_trips = min(trips_dates.keys()) | |
calendar_summary = {} | |
calendar_summary['mean_trips'] = mean_trips | |
calendar_summary['max_trips'] = max_trips | |
calendar_summary['max_trips_dates'] = FormatDateList(trips_dates[max_trips]) | |
calendar_summary['min_trips'] = min_trips | |
calendar_summary['min_trips_dates'] = FormatDateList(trips_dates[min_trips]) | |
calendar_summary['date_trips_departures'] = date_trips_departures | |
calendar_summary['date_summary_range'] = "%s to %s" % ( | |
date_trips_departures[0][0].strftime("%a %b %d"), | |
date_trips_departures[-1][0].strftime("%a %b %d")) | |
return calendar_summary | |
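The defaultdict grouping used in CalendarSummary can be sketched in isolation. This is an illustrative fragment, not part of the validator; the dates and trip counts below are invented:

```python
from collections import defaultdict
import datetime

# Map the number of trips on a day to the sorted list of dates with that
# many trips, mirroring the trips_dates structure built above.
date_trips = [
    (datetime.date(2010, 1, 4), 120),
    (datetime.date(2010, 1, 5), 120),
    (datetime.date(2010, 1, 6), 80),
]
trips_dates = defaultdict(list)
for date, day_trips in date_trips:
    trips_dates[day_trips].append(date)
# max()/min() over the keys then yield the busiest and quietest day counts.
```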
def FormatDateList(dates): | |
if not dates: | |
return "0 service dates" | |
formatted = [d.strftime("%a %b %d") for d in dates[0:3]] | |
if len(dates) > 3: | |
formatted.append("...") | |
return "%s (%s)" % (PrettyNumberWord(len(dates), "service date"), | |
", ".join(formatted)) | |
def MaxVersion(versions): | |
versions = filter(None, versions) | |
versions.sort(lambda x,y: -cmp([int(item) for item in x.split('.')], | |
[int(item) for item in y.split('.')])) | |
if len(versions) > 0: | |
return versions[0] | |
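MaxVersion above relies on a Python 2-only cmp-style sort. The same dotted-version comparison can be sketched with a key function (the function name here is ours, not part of the module):

```python
def max_version_sketch(versions):
    """Return the highest dotted version string, comparing numerically."""
    versions = [v for v in versions if v]
    # '1.10.0' must sort above '1.2.1', so compare lists of ints, not strings.
    versions.sort(key=lambda v: [int(part) for part in v.split('.')],
                  reverse=True)
    if versions:
        return versions[0]
```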
class CountingConsoleProblemReporter(transitfeed.ProblemReporter): | |
def __init__(self): | |
transitfeed.ProblemReporter.__init__(self) | |
self._error_count = 0 | |
self._warning_count = 0 | |
def _Report(self, e): | |
transitfeed.ProblemReporter._Report(self, e) | |
if e.IsError(): | |
self._error_count += 1 | |
else: | |
self._warning_count += 1 | |
def ErrorCount(self): | |
return self._error_count | |
def WarningCount(self): | |
return self._warning_count | |
def FormatCount(self): | |
return ProblemCountText(self.ErrorCount(), self.WarningCount()) | |
def HasIssues(self): | |
return self.ErrorCount() or self.WarningCount() | |
class BoundedProblemList(object): | |
"""A list of one type of ExceptionWithContext objects with bounded size.""" | |
def __init__(self, size_bound): | |
self._count = 0 | |
self._exceptions = [] | |
self._size_bound = size_bound | |
def Add(self, e): | |
self._count += 1 | |
try: | |
bisect.insort(self._exceptions, e) | |
except TypeError: | |
# The base class ExceptionWithContext raises TypeError in __cmp__ to | |
# signal that an object is not comparable. In that case, instead of | |
# keeping the most significant issues, keep the first ones reported. | |
if self._count <= self._size_bound: | |
self._exceptions.append(e) | |
else: | |
# self._exceptions is in order. Drop the least significant if the list is | |
# now too long. | |
if self._count > self._size_bound: | |
del self._exceptions[-1] | |
def _GetDroppedCount(self): | |
return self._count - len(self._exceptions) | |
def __repr__(self): | |
return "<BoundedProblemList %s>" % repr(self._exceptions) | |
count = property(lambda s: s._count) | |
dropped_count = property(_GetDroppedCount) | |
problems = property(lambda s: s._exceptions) | |
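The bounding trick in BoundedProblemList.Add can be shown on its own: bisect.insort keeps the list sorted, so trimming the tail always drops the least significant item. A minimal sketch (helper name is ours):

```python
import bisect

def bounded_insert(sorted_list, item, size_bound):
    """Insert item, keeping sorted_list sorted and at most size_bound long."""
    bisect.insort(sorted_list, item)
    if len(sorted_list) > size_bound:
        # The list is sorted, so the tail holds the least significant item.
        del sorted_list[-1]
```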
class LimitPerTypeProblemReporter(transitfeed.ProblemReporter): | |
def __init__(self, limit_per_type): | |
transitfeed.ProblemReporter.__init__(self) | |
# {TYPE_WARNING: {"ClassName": BoundedProblemList()}} | |
self._type_to_name_to_problist = { | |
TYPE_WARNING: defaultdict(lambda: BoundedProblemList(limit_per_type)), | |
TYPE_ERROR: defaultdict(lambda: BoundedProblemList(limit_per_type)) | |
} | |
def HasIssues(self): | |
return (self._type_to_name_to_problist[TYPE_ERROR] or | |
self._type_to_name_to_problist[TYPE_WARNING]) | |
def _Report(self, e): | |
self._type_to_name_to_problist[e.GetType()][e.__class__.__name__].Add(e) | |
def ErrorCount(self): | |
error_sets = self._type_to_name_to_problist[TYPE_ERROR].values() | |
return sum(map(lambda v: v.count, error_sets)) | |
def WarningCount(self): | |
warning_sets = self._type_to_name_to_problist[TYPE_WARNING].values() | |
return sum(map(lambda v: v.count, warning_sets)) | |
def ProblemList(self, problem_type, class_name): | |
"""Return the BoundedProblemList object for given type and class.""" | |
return self._type_to_name_to_problist[problem_type][class_name] | |
def ProblemListMap(self, problem_type): | |
"""Return the map from class name to BoundedProblemList object.""" | |
return self._type_to_name_to_problist[problem_type] | |
class HTMLCountingProblemReporter(LimitPerTypeProblemReporter): | |
def FormatType(self, f, level_name, class_problist): | |
"""Write the HTML dumping all problems of one type. | |
Args: | |
f: file object open for writing | |
level_name: string such as "Error" or "Warning" | |
class_problist: sequence of tuples (class name, | |
BoundedProblemList object) | |
""" | |
class_problist.sort() | |
output = [] | |
for classname, problist in class_problist: | |
output.append('<h4 class="issueHeader"><a name="%s%s">%s</a></h4><ul>\n' % | |
(level_name, classname, UnCamelCase(classname))) | |
for e in problist.problems: | |
self.FormatException(e, output) | |
if problist.dropped_count: | |
output.append('<li>and %d more of this type.' % | |
(problist.dropped_count)) | |
output.append('</ul>\n') | |
f.write(''.join(output)) | |
def FormatTypeSummaryTable(self, level_name, name_to_problist): | |
"""Return an HTML table listing the number of problems by class name. | |
Args: | |
level_name: string such as "Error" or "Warning" | |
name_to_problist: dict mapping class name to a BoundedProblemList object | |
Returns: | |
HTML in a string | |
""" | |
output = [] | |
output.append('<table>') | |
for classname in sorted(name_to_problist.keys()): | |
problist = name_to_problist[classname] | |
human_name = MaybePluralizeWord(problist.count, UnCamelCase(classname)) | |
output.append('<tr><td>%d</td><td><a href="#%s%s">%s</a></td></tr>\n' % | |
(problist.count, level_name, classname, human_name)) | |
output.append('</table>\n') | |
return ''.join(output) | |
def FormatException(self, e, output): | |
"""Append HTML version of e to list output.""" | |
d = e.GetDictToFormat() | |
for k in ('file_name', 'feedname', 'column_name'): | |
if k in d: | |
d[k] = '<code>%s</code>' % d[k] | |
problem_text = e.FormatProblem(d).replace('\n', '<br>') | |
output.append('<li>') | |
output.append('<div class="problem">%s</div>' % | |
transitfeed.EncodeUnicode(problem_text)) | |
try: | |
if hasattr(e, 'row_num'): | |
line_str = 'line %d of ' % e.row_num | |
else: | |
line_str = '' | |
output.append('in %s<code>%s</code><br>\n' % | |
(line_str, e.file_name)) | |
row = e.row | |
headers = e.headers | |
column_name = e.column_name | |
table_header = '' # HTML | |
table_data = '' # HTML | |
for header, value in zip(headers, row): | |
attributes = '' | |
if header == column_name: | |
attributes = ' class="problem"' | |
table_header += '<th%s>%s</th>' % (attributes, header) | |
table_data += '<td%s>%s</td>' % (attributes, value) | |
# Make sure output is encoded into UTF-8 | |
output.append('<table class="dump"><tr>%s</tr>\n' % | |
transitfeed.EncodeUnicode(table_header)) | |
output.append('<tr>%s</tr></table>\n' % | |
transitfeed.EncodeUnicode(table_data)) | |
except AttributeError: | |
# Not all problem objects carry row/header context; skip the table then. | |
pass | |
output.append('<br></li>\n') | |
def FormatCount(self): | |
return ProblemCountText(self.ErrorCount(), self.WarningCount()) | |
def CountTable(self): | |
output = [] | |
output.append('<table class="count_outside">\n') | |
output.append('<tr>') | |
if self.ProblemListMap(TYPE_ERROR): | |
output.append('<td><span class="fail">%s</span></td>' % | |
PrettyNumberWord(self.ErrorCount(), "error")) | |
if self.ProblemListMap(TYPE_WARNING): | |
output.append('<td><span class="fail">%s</span></td>' % | |
PrettyNumberWord(self.WarningCount(), "warning")) | |
output.append('</tr>\n<tr>') | |
if self.ProblemListMap(TYPE_ERROR): | |
output.append('<td>\n') | |
output.append(self.FormatTypeSummaryTable("Error", | |
self.ProblemListMap(TYPE_ERROR))) | |
output.append('</td>\n') | |
if self.ProblemListMap(TYPE_WARNING): | |
output.append('<td>\n') | |
output.append(self.FormatTypeSummaryTable("Warning", | |
self.ProblemListMap(TYPE_WARNING))) | |
output.append('</td>\n') | |
output.append('</table>') | |
return ''.join(output) | |
def WriteOutput(self, feed_location, f, schedule, other_problems): | |
"""Write the html output to f.""" | |
if self.HasIssues(): | |
if self.ErrorCount() + self.WarningCount() == 1: | |
summary = ('<span class="fail">Found this problem:</span>\n%s' % | |
self.CountTable()) | |
else: | |
summary = ('<span class="fail">Found these problems:</span>\n%s' % | |
self.CountTable()) | |
else: | |
summary = '<span class="pass">feed validated successfully</span>' | |
if other_problems is not None: | |
summary = ('<span class="fail">\n%s</span><br><br>' % | |
other_problems) + summary | |
basename = os.path.basename(feed_location) | |
feed_path = (feed_location[:feed_location.rfind(basename)], basename) | |
agencies = ', '.join(['<a href="%s">%s</a>' % (a.agency_url, a.agency_name) | |
for a in schedule.GetAgencyList()]) | |
if not agencies: | |
agencies = '?' | |
dates = "No valid service dates found" | |
(start, end) = schedule.GetDateRange() | |
if start and end: | |
def FormatDate(yyyymmdd): | |
src_format = "%Y%m%d" | |
dst_format = "%B %d, %Y" | |
try: | |
return time.strftime(dst_format, | |
time.strptime(yyyymmdd, src_format)) | |
except ValueError: | |
return yyyymmdd | |
formatted_start = FormatDate(start) | |
formatted_end = FormatDate(end) | |
dates = "%s to %s" % (formatted_start, formatted_end) | |
calendar_summary = CalendarSummary(schedule) | |
if calendar_summary: | |
calendar_summary_html = """<br> | |
During the upcoming service dates %(date_summary_range)s: | |
<table> | |
<tr><th class="header">Average trips per date:</th><td class="header">%(mean_trips)s</td></tr> | |
<tr><th class="header">Most trips on a date:</th><td class="header">%(max_trips)s, on %(max_trips_dates)s</td></tr> | |
<tr><th class="header">Least trips on a date:</th><td class="header">%(min_trips)s, on %(min_trips_dates)s</td></tr> | |
</table>""" % calendar_summary | |
else: | |
calendar_summary_html = "" | |
output_prefix = """ | |
<html> | |
<head> | |
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> | |
<title>FeedValidator: %(feed_file)s</title> | |
<style> | |
body {font-family: Georgia, serif; background-color: white} | |
.path {color: gray} | |
div.problem {max-width: 500px} | |
table.dump td,th {background-color: khaki; padding: 2px; font-family:monospace} | |
table.dump td.problem,th.problem {background-color: #dc143c; color: white; padding: 2px; font-family:monospace} | |
table.count_outside td {vertical-align: top} | |
table.count_outside {border-spacing: 0px; } | |
table {border-spacing: 5px 0px; margin-top: 3px} | |
h3.issueHeader {padding-left: 0.5em} | |
h4.issueHeader {padding-left: 1em} | |
.pass {background-color: lightgreen} | |
.fail {background-color: yellow} | |
.pass, .fail {font-size: 16pt} | |
.header {background-color: white; font-family: Georgia, serif; padding: 0px} | |
th.header {text-align: right; font-weight: normal; color: gray} | |
.footer {font-size: 10pt} | |
</style> | |
</head> | |
<body> | |
GTFS validation results for feed:<br> | |
<code><span class="path">%(feed_dir)s</span><b>%(feed_file)s</b></code> | |
<br><br> | |
<table> | |
<tr><th class="header">Agencies:</th><td class="header">%(agencies)s</td></tr> | |
<tr><th class="header">Routes:</th><td class="header">%(routes)s</td></tr> | |
<tr><th class="header">Stops:</th><td class="header">%(stops)s</td></tr> | |
<tr><th class="header">Trips:</th><td class="header">%(trips)s</td></tr> | |
<tr><th class="header">Shapes:</th><td class="header">%(shapes)s</td></tr> | |
<tr><th class="header">Effective:</th><td class="header">%(dates)s</td></tr> | |
</table> | |
%(calendar_summary)s | |
<br> | |
%(problem_summary)s | |
<br><br> | |
""" % { "feed_file": feed_path[1], | |
"feed_dir": feed_path[0], | |
"agencies": agencies, | |
"routes": len(schedule.GetRouteList()), | |
"stops": len(schedule.GetStopList()), | |
"trips": len(schedule.GetTripList()), | |
"shapes": len(schedule.GetShapeList()), | |
"dates": dates, | |
"problem_summary": summary, | |
"calendar_summary": calendar_summary_html} | |
# In the output_suffix string below, | |
# time.strftime() returns a regular byte string (not a Unicode one) in the | |
# default system encoding, and decode() converts it back into a Unicode | |
# string. We decode it here so that the operating system does not apply | |
# its own encoding later (which could mangle the string if it contains | |
# non-English characters); the string is restored to its original Unicode | |
# code points. | |
time_unicode = (time.strftime('%B %d, %Y at %I:%M %p %Z'). | |
decode(sys.getfilesystemencoding())) | |
output_suffix = """ | |
<div class="footer"> | |
Generated by <a href="http://code.google.com/p/googletransitdatafeed/wiki/FeedValidator"> | |
FeedValidator</a> version %s on %s. | |
</div> | |
</body> | |
</html>""" % (transitfeed.__version__, time_unicode) | |
f.write(transitfeed.EncodeUnicode(output_prefix)) | |
if self.ProblemListMap(TYPE_ERROR): | |
f.write('<h3 class="issueHeader">Errors:</h3>') | |
self.FormatType(f, "Error", | |
self.ProblemListMap(TYPE_ERROR).items()) | |
if self.ProblemListMap(TYPE_WARNING): | |
f.write('<h3 class="issueHeader">Warnings:</h3>') | |
self.FormatType(f, "Warning", | |
self.ProblemListMap(TYPE_WARNING).items()) | |
f.write(transitfeed.EncodeUnicode(output_suffix)) | |
def RunValidationOutputFromOptions(feed, options): | |
"""Validate feed, output results per options and return an exit code.""" | |
if options.output.upper() == "CONSOLE": | |
return RunValidationOutputToConsole(feed, options) | |
else: | |
return RunValidationOutputToFilename(feed, options, options.output) | |
def RunValidationOutputToFilename(feed, options, output_filename): | |
"""Validate feed, save HTML at output_filename and return an exit code.""" | |
try: | |
output_file = open(output_filename, 'w') | |
exit_code = RunValidationOutputToFile(feed, options, output_file) | |
output_file.close() | |
except IOError, e: | |
print 'Error while writing %s: %s' % (output_filename, e) | |
output_filename = None | |
exit_code = 2 | |
if options.manual_entry and output_filename: | |
webbrowser.open('file://%s' % os.path.abspath(output_filename)) | |
return exit_code | |
def RunValidationOutputToFile(feed, options, output_file): | |
"""Validate feed, write HTML to output_file and return an exit code.""" | |
problems = HTMLCountingProblemReporter(options.limit_per_type) | |
schedule, exit_code, other_problems_string = RunValidation(feed, options, | |
problems) | |
if isinstance(feed, basestring): | |
feed_location = feed | |
else: | |
feed_location = getattr(feed, 'name', repr(feed)) | |
problems.WriteOutput(feed_location, output_file, schedule, | |
other_problems_string) | |
return exit_code | |
def RunValidationOutputToConsole(feed, options): | |
"""Validate feed, print reports and return an exit code.""" | |
problems = CountingConsoleProblemReporter() | |
_, exit_code, _ = RunValidation(feed, options, problems) | |
return exit_code | |
def RunValidation(feed, options, problems): | |
"""Validate feed, returning the loaded Schedule and exit code. | |
Args: | |
feed: GTFS file, either path of the file as a string or a file object | |
options: options object returned by optparse | |
problems: transitfeed.ProblemReporter instance | |
Returns: | |
a transitfeed.Schedule object, exit code and plain text string of other | |
problems | |
Exit code is 1 if problems are found and 0 if the Schedule is problem free. | |
plain text string is '' if no other problems are found. | |
""" | |
other_problems_string = CheckVersion(latest_version=options.latest_version) | |
print 'validating %s' % feed | |
loader = transitfeed.Loader(feed, problems=problems, extra_validation=False, | |
memory_db=options.memory_db, | |
check_duplicate_trips=\ | |
options.check_duplicate_trips) | |
schedule = loader.Load() | |
schedule.Validate(service_gap_interval=options.service_gap_interval) | |
if feed == 'IWantMyvalidation-crash.txt': | |
# See test/testfeedvalidator.py | |
raise Exception('For testing the feed validator crash handler.') | |
if other_problems_string: | |
print other_problems_string | |
if problems.HasIssues(): | |
print 'ERROR: %s found' % problems.FormatCount() | |
return schedule, 1, other_problems_string | |
else: | |
print 'feed validated successfully' | |
return schedule, 0, other_problems_string | |
def CheckVersion(latest_version=''): | |
""" | |
Check whether there is a newer version of this project available. | |
Code is based on http://www.voidspace.org.uk/python/articles/urllib2.shtml, | |
used with permission from the copyright holder. | |
""" | |
current_version = transitfeed.__version__ | |
if not latest_version: | |
timeout = 20 | |
socket.setdefaulttimeout(timeout) | |
request = Request(SVN_TAG_URL) | |
try: | |
response = urlopen(request) | |
content = response.read() | |
versions = re.findall(r'>transitfeed-([\d\.]+)\/<\/a>', content) | |
latest_version = MaxVersion(versions) | |
except HTTPError, e: | |
return('The server couldn\'t fulfill the request. Error code: %s.' | |
% e.code) | |
except URLError, e: | |
return('We failed to reach the transitfeed server. Reason: %s.' % e.reason) | |
if not latest_version: | |
return('We had trouble parsing the contents of %s.' % SVN_TAG_URL) | |
newest_version = MaxVersion([latest_version, current_version]) | |
if current_version != newest_version: | |
return('A new version %s of transitfeed is available. Please visit ' | |
'http://code.google.com/p/googletransitdatafeed and download.' | |
% newest_version) | |
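The scrape inside CheckVersion matches version directory links on the SVN tag page with re.findall. A self-contained sketch with an invented HTML fragment:

```python
import re

# The pattern captures the numeric part of links like ">transitfeed-1.2.5/</a>".
html = '>transitfeed-1.2.4/</a> >transitfeed-1.2.5/</a>'
versions = re.findall(r'>transitfeed-([\d\.]+)\/<\/a>', html)
```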
def main(): | |
usage = \ | |
'''%prog [options] [<input GTFS.zip>] | |
Validates the GTFS file (or directory) <input GTFS.zip> and writes an HTML | |
report of the results to validation-results.html. | |
If <input GTFS.zip> is omitted the filename is read from the console. Dragging | |
a file onto the console window may enter the filename. | |
For more information see | |
http://code.google.com/p/googletransitdatafeed/wiki/FeedValidator | |
''' | |
parser = util.OptionParserLongError( | |
usage=usage, version='%prog '+transitfeed.__version__) | |
parser.add_option('-n', '--noprompt', action='store_false', | |
dest='manual_entry', | |
help='do not prompt for feed location or load output in ' | |
'browser') | |
parser.add_option('-o', '--output', dest='output', metavar='FILE', | |
help='write html output to FILE or --output=CONSOLE to ' | |
'print all errors and warnings to the command console') | |
parser.add_option('-p', '--performance', action='store_true', | |
dest='performance', | |
help='output memory and time performance (Availability: ' | |
'Unix)') | |
parser.add_option('-m', '--memory_db', dest='memory_db', action='store_true', | |
help='Use in-memory sqlite db instead of a temporary file. ' | |
'It is faster but uses more RAM.') | |
parser.add_option('-d', '--duplicate_trip_check', | |
dest='check_duplicate_trips', action='store_true', | |
help='Check for duplicate trips which go through the same ' | |
'stops with same service and start times') | |
parser.add_option('-l', '--limit_per_type', | |
dest='limit_per_type', action='store', type='int', | |
help='Maximum number of errors and warnings to keep of ' | |
'each type') | |
parser.add_option('--latest_version', dest='latest_version', | |
action='store', | |
help='a version number such as 1.2.1 or None to get the ' | |
'latest version from code.google.com. Output a warning if ' | |
'transitfeed.py is older than this version.') | |
parser.add_option('--service_gap_interval', | |
dest='service_gap_interval', | |
action='store', | |
type='int', | |
help='the number of consecutive days to search for with no ' | |
'scheduled service. For each interval with no service ' | |
'having this number of days or more a warning will be ' | |
'issued') | |
parser.set_defaults(manual_entry=True, output='validation-results.html', | |
memory_db=False, check_duplicate_trips=False, | |
limit_per_type=5, latest_version='', | |
service_gap_interval=13) | |
(options, args) = parser.parse_args() | |
if not len(args) == 1: | |
if options.manual_entry: | |
feed = raw_input('Enter Feed Location: ') | |
else: | |
parser.error('You must provide the path of a single feed') | |
else: | |
feed = args[0] | |
feed = feed.strip('"') | |
if options.performance: | |
return ProfileRunValidationOutputFromOptions(feed, options) | |
else: | |
return RunValidationOutputFromOptions(feed, options) | |
def ProfileRunValidationOutputFromOptions(feed, options): | |
"""Run RunValidationOutputFromOptions, print profile and return exit code.""" | |
import cProfile | |
import pstats | |
# runctx will modify a dict, but not locals(). We need a way to get rv back. | |
locals_for_exec = locals() | |
cProfile.runctx('rv = RunValidationOutputFromOptions(feed, options)', | |
globals(), locals_for_exec, 'validate-stats') | |
# Only available on Unix, http://docs.python.org/lib/module-resource.html | |
import resource | |
print "Time: %d seconds" % ( | |
resource.getrusage(resource.RUSAGE_SELF).ru_utime + | |
resource.getrusage(resource.RUSAGE_SELF).ru_stime) | |
# http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/286222 | |
# http://aspn.activestate.com/ASPN/Cookbook/ "The recipes are freely | |
# available for review and use." | |
def _VmB(VmKey): | |
"""Return size from proc status in bytes.""" | |
_proc_status = '/proc/%d/status' % os.getpid() | |
_scale = {'kB': 1024.0, 'mB': 1024.0*1024.0, | |
'KB': 1024.0, 'MB': 1024.0*1024.0} | |
# get pseudo file /proc/<pid>/status | |
try: | |
t = open(_proc_status) | |
v = t.read() | |
t.close() | |
except: | |
# Probably not Linux: the /proc pseudo file is unavailable. | |
raise Exception("no proc file %s" % _proc_status) | |
# get VmKey line e.g. 'VmRSS: 9999 kB\n ...' | |
i = v.index(VmKey) | |
v = v[i:].split(None, 3) # whitespace | |
if len(v) < 3: | |
raise Exception("invalid format: %r" % (v,)) | |
# convert Vm value to bytes | |
return int(float(v[1]) * _scale[v[2]]) | |
# I ran this on over a hundred GTFS files, comparing VmSize to VmRSS | |
# (resident set size). The difference was always under 2% or 3MB. | |
print "Virtual Memory Size: %d bytes" % _VmB('VmSize:') | |
# Output report of where CPU time was spent. | |
p = pstats.Stats('validate-stats') | |
p.strip_dirs() | |
p.sort_stats('cumulative').print_stats(30) | |
p.sort_stats('cumulative').print_callers(30) | |
return locals_for_exec['rv'] | |
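The /proc status parsing inside _VmB above splits a line such as "VmRSS:   9999 kB" on whitespace and scales the value into bytes. A standalone sketch (function name is ours):

```python
def parse_proc_status_line(line):
    """Convert a /proc/<pid>/status line like 'VmRSS:  9999 kB' to bytes."""
    scale = {'kB': 1024.0, 'mB': 1024.0 * 1024.0,
             'KB': 1024.0, 'MB': 1024.0 * 1024.0}
    # fields is e.g. ['VmRSS:', '9999', 'kB'].
    fields = line.split(None, 3)
    return int(float(fields[1]) * scale[fields[2]])
```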
if __name__ == '__main__': | |
util.RunWithCrashHandler(main) | |
#!/usr/bin/python | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
""" | |
This module provides an implementation of a converter from the KML | |
file format into the Google transit feed format. | |
The KmlParser class is the main class implementing the parser. | |
Currently only information about stops is extracted from a kml file. | |
The extractor expects the stops to be represented as placemarks with | |
a single point. | |
""" | |
import re | |
import string | |
import sys | |
import transitfeed | |
from transitfeed import util | |
import xml.dom.minidom as minidom | |
import zipfile | |
class Placemark(object): | |
def __init__(self): | |
self.name = "" | |
self.coordinates = [] | |
def IsPoint(self): | |
return len(self.coordinates) == 1 | |
def IsLine(self): | |
return len(self.coordinates) > 1 | |
class KmlParser(object): | |
def __init__(self, stopNameRe = '(.*)'): | |
""" | |
Args: | |
stopNameRe - a regular expression to extract a stop name from a | |
placemark name | |
""" | |
self.stopNameRe = re.compile(stopNameRe) | |
def Parse(self, filename, feed): | |
""" | |
Reads the kml file, parses it and updates the Google transit feed | |
object with the extracted information. | |
Args: | |
filename - kml file name | |
feed - an instance of Schedule class to be updated | |
""" | |
dom = minidom.parse(filename) | |
self.ParseDom(dom, feed) | |
def ParseDom(self, dom, feed): | |
""" | |
Parses the given kml dom tree and updates the Google transit feed object. | |
Args: | |
dom - kml dom tree | |
feed - an instance of Schedule class to be updated | |
""" | |
shape_num = 0 | |
for node in dom.getElementsByTagName('Placemark'): | |
p = self.ParsePlacemark(node) | |
if p.IsPoint(): | |
(lon, lat) = p.coordinates[0] | |
m = self.stopNameRe.search(p.name) | |
feed.AddStop(lat, lon, m.group(1)) | |
elif p.IsLine(): | |
shape_num = shape_num + 1 | |
shape = transitfeed.Shape("kml_shape_" + str(shape_num)) | |
for (lon, lat) in p.coordinates: | |
shape.AddPoint(lat, lon) | |
feed.AddShapeObject(shape) | |
def ParsePlacemark(self, node): | |
ret = Placemark() | |
for child in node.childNodes: | |
if child.nodeName == 'name': | |
ret.name = self.ExtractText(child) | |
if child.nodeName == 'Point' or child.nodeName == 'LineString': | |
ret.coordinates = self.ExtractCoordinates(child) | |
return ret | |
def ExtractText(self, node): | |
for child in node.childNodes: | |
if child.nodeType == child.TEXT_NODE: | |
return child.wholeText # is a unicode string | |
return "" | |
def ExtractCoordinates(self, node): | |
coordinatesText = "" | |
for child in node.childNodes: | |
if child.nodeName == 'coordinates': | |
coordinatesText = self.ExtractText(child) | |
break | |
ret = [] | |
for point in coordinatesText.split(): | |
coords = point.split(',') | |
ret.append((float(coords[0]), float(coords[1]))) | |
return ret | |
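ExtractCoordinates above parses the text of a KML &lt;coordinates&gt; element, where each whitespace-separated point is "lon,lat[,alt]". A self-contained sketch of that parsing (helper name is ours):

```python
def parse_kml_coordinates(text):
    """Parse a KML <coordinates> string into a list of (lon, lat) tuples."""
    points = []
    for point in text.split():
        parts = point.split(',')
        # KML stores longitude first; keep (lon, lat) like ExtractCoordinates.
        points.append((float(parts[0]), float(parts[1])))
    return points
```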
def main(): | |
usage = \ | |
"""%prog <input.kml> <output GTFS.zip> | |
Reads KML file <input.kml> and creates GTFS file <output GTFS.zip> with | |
placemarks in the KML represented as stops. | |
""" | |
parser = util.OptionParserLongError( | |
usage=usage, version='%prog '+transitfeed.__version__) | |
(options, args) = parser.parse_args() | |
if len(args) != 2: | |
parser.error('You did not provide all required command line arguments.') | |
if args[0] == 'IWantMyCrash': | |
raise Exception('For testCrashHandler') | |
parser = KmlParser() | |
feed = transitfeed.Schedule() | |
feed.save_all_stops = True | |
parser.Parse(args[0], feed) | |
feed.WriteGoogleTransitFeed(args[1]) | |
print "Done." | |
if __name__ == '__main__': | |
util.RunWithCrashHandler(main) | |
#!/usr/bin/python | |
# | |
# Copyright 2008 Google Inc. All Rights Reserved. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""A module for writing GTFS feeds out into Google Earth KML format. | |
For usage information run kmlwriter.py --help | |
If no output filename is specified, the output file will be given the same | |
name as the feed file (with ".kml" appended) and will be placed in the same | |
directory as the input feed. | |
The resulting KML file has a folder hierarchy which looks like this: | |
- Stops | |
* stop1 | |
* stop2 | |
- Routes | |
- route1 | |
- Shapes | |
* shape1 | |
* shape2 | |
- Patterns | |
- pattern1 | |
- pattern2 | |
- Trips | |
* trip1 | |
* trip2 | |
- Shapes | |
* shape1 | |
- Shape Points | |
* shape_point1 | |
* shape_point2 | |
* shape2 | |
- Shape Points | |
* shape_point1 | |
* shape_point2 | |
where the hyphens represent folders and the asterisks represent placemarks. | |
In a trip, a vehicle visits stops in a certain sequence. Such a sequence of | |
stops is called a pattern. A pattern is represented by a linestring connecting | |
the stops. The "Shapes" subfolder of a route folder contains placemarks for | |
each shape used by a trip in the route. The "Patterns" subfolder contains a | |
placemark for each unique pattern used by a trip in the route. The "Trips" | |
subfolder contains a placemark for each trip in the route. | |
Since there can be many trips and trips for the same route are usually similar, | |
they are not exported unless the --showtrips option is used. There is also | |
another option --splitroutes that groups the routes by vehicle type resulting | |
in a folder hierarchy which looks like this at the top level: | |
- Stops | |
- Routes - Bus | |
- Routes - Tram | |
- Routes - Rail | |
- Shapes | |
""" | |
try: | |
import xml.etree.ElementTree as ET # python 2.5 | |
except ImportError, e: | |
import elementtree.ElementTree as ET # older pythons | |
import optparse | |
import os.path | |
import sys | |
import transitfeed | |
from transitfeed import util | |
class KMLWriter(object): | |
"""This class knows how to write out a transit feed as KML. | |
Sample usage: | |
KMLWriter().Write(<transitfeed.Schedule object>, <output filename>) | |
Attributes: | |
show_trips: True if the individual trips should be included in the routes. | |
split_routes: True if the routes should be split by type. | |
shape_points: True if individual shape points should be plotted. | |
""" | |
def __init__(self): | |
"""Initialise.""" | |
self.show_trips = False | |
self.split_routes = False | |
self.shape_points = False | |
self.altitude_per_sec = 0.0 | |
self.date_filter = None | |
def _SetIndentation(self, elem, level=0): | |
"""Indented the ElementTree DOM. | |
This is the recommended way to cause an ElementTree DOM to be | |
prettyprinted on output, as per: http://effbot.org/zone/element-lib.htm | |
Run this on the root element before outputting the tree. | |
Args: | |
elem: The element to start indenting from, usually the document root. | |
level: Current indentation level for recursion. | |
""" | |
i = "\n" + level*" " | |
if len(elem): | |
if not elem.text or not elem.text.strip(): | |
elem.text = i + " " | |
for elem in elem: | |
self._SetIndentation(elem, level+1) | |
if not elem.tail or not elem.tail.strip(): | |
elem.tail = i | |
else: | |
if level and (not elem.tail or not elem.tail.strip()): | |
elem.tail = i | |
def _CreateFolder(self, parent, name, visible=True, description=None): | |
"""Create a KML Folder element. | |
Args: | |
parent: The parent ElementTree.Element instance. | |
name: The folder name as a string. | |
visible: Whether the folder is initially visible or not. | |
description: A description string or None. | |
Returns: | |
The folder ElementTree.Element instance. | |
""" | |
folder = ET.SubElement(parent, 'Folder') | |
name_tag = ET.SubElement(folder, 'name') | |
name_tag.text = name | |
if description is not None: | |
desc_tag = ET.SubElement(folder, 'description') | |
desc_tag.text = description | |
if not visible: | |
visibility = ET.SubElement(folder, 'visibility') | |
visibility.text = '0' | |
return folder | |
def _CreateStyleForRoute(self, doc, route): | |
"""Create a KML Style element for the route. | |
The style sets the line colour if the route colour is specified. The | |
line thickness is set depending on the vehicle type. | |
Args: | |
doc: The KML Document ElementTree.Element instance. | |
route: The transitfeed.Route to create the style for. | |
Returns: | |
The id of the style as a string. | |
""" | |
style_id = 'route_%s' % route.route_id | |
style = ET.SubElement(doc, 'Style', {'id': style_id}) | |
linestyle = ET.SubElement(style, 'LineStyle') | |
width = ET.SubElement(linestyle, 'width') | |
type_to_width = {0: '3', # Tram | |
1: '3', # Subway | |
2: '5', # Rail | |
3: '1'} # Bus | |
width.text = type_to_width.get(route.route_type, '1') | |
if route.route_color: | |
color = ET.SubElement(linestyle, 'color') | |
red = route.route_color[0:2].lower() | |
green = route.route_color[2:4].lower() | |
blue = route.route_color[4:6].lower() | |
color.text = 'ff%s%s%s' % (blue, green, red) | |
return style_id | |
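Note the byte order above: KML expects colors as aabbggrr (alpha, blue, green, red), so the GTFS RRGGBB value has to be reversed. The same transformation in isolation (the helper name is ours):

```python
def gtfs_color_to_kml(route_color, alpha='ff'):
    """Convert a GTFS 'RRGGBB' hex color to KML's 'aabbggrr' order."""
    red = route_color[0:2].lower()
    green = route_color[2:4].lower()
    blue = route_color[4:6].lower()
    # KML stores alpha first, then the channels in reverse order.
    return '%s%s%s%s' % (alpha, blue, green, red)
```

For example, pure red `'FF0000'` becomes `'ff0000ff'`.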
def _CreatePlacemark(self, parent, name, style_id=None, visible=True, | |
description=None): | |
"""Create a KML Placemark element. | |
Args: | |
parent: The parent ElementTree.Element instance. | |
name: The placemark name as a string. | |
style_id: If not None, the id of a style to use for the placemark. | |
visible: Whether the placemark is initially visible or not. | |
description: A description string or None. | |
Returns: | |
The placemark ElementTree.Element instance. | |
""" | |
placemark = ET.SubElement(parent, 'Placemark') | |
placemark_name = ET.SubElement(placemark, 'name') | |
placemark_name.text = name | |
if description is not None: | |
desc_tag = ET.SubElement(placemark, 'description') | |
desc_tag.text = description | |
if style_id is not None: | |
styleurl = ET.SubElement(placemark, 'styleUrl') | |
styleurl.text = '#%s' % style_id | |
if not visible: | |
visibility = ET.SubElement(placemark, 'visibility') | |
visibility.text = '0' | |
return placemark | |
def _CreateLineString(self, parent, coordinate_list): | |
"""Create a KML LineString element. | |
    The points of the string are given in coordinate_list. Each element of
    coordinate_list must be either a (longitude, latitude) tuple or a
    (longitude, latitude, altitude) tuple.
Args: | |
parent: The parent ElementTree.Element instance. | |
coordinate_list: The list of coordinates. | |
Returns: | |
The LineString ElementTree.Element instance or None if coordinate_list is | |
empty. | |
""" | |
if not coordinate_list: | |
return None | |
linestring = ET.SubElement(parent, 'LineString') | |
tessellate = ET.SubElement(linestring, 'tessellate') | |
tessellate.text = '1' | |
if len(coordinate_list[0]) == 3: | |
altitude_mode = ET.SubElement(linestring, 'altitudeMode') | |
altitude_mode.text = 'absolute' | |
coordinates = ET.SubElement(linestring, 'coordinates') | |
if len(coordinate_list[0]) == 3: | |
coordinate_str_list = ['%f,%f,%f' % t for t in coordinate_list] | |
else: | |
coordinate_str_list = ['%f,%f' % t for t in coordinate_list] | |
coordinates.text = ' '.join(coordinate_str_list) | |
return linestring | |
def _CreateLineStringForShape(self, parent, shape): | |
"""Create a KML LineString using coordinates from a shape. | |
Args: | |
parent: The parent ElementTree.Element instance. | |
shape: The transitfeed.Shape instance. | |
Returns: | |
      The LineString ElementTree.Element instance or None if the shape has
      no points.
""" | |
coordinate_list = [(longitude, latitude) for | |
(latitude, longitude, distance) in shape.points] | |
return self._CreateLineString(parent, coordinate_list) | |
def _CreateStopsFolder(self, schedule, doc): | |
"""Create a KML Folder containing placemarks for each stop in the schedule. | |
If there are no stops in the schedule then no folder is created. | |
Args: | |
schedule: The transitfeed.Schedule instance. | |
doc: The KML Document ElementTree.Element instance. | |
Returns: | |
The Folder ElementTree.Element instance or None if there are no stops. | |
""" | |
if not schedule.GetStopList(): | |
return None | |
stop_folder = self._CreateFolder(doc, 'Stops') | |
stops = list(schedule.GetStopList()) | |
stops.sort(key=lambda x: x.stop_name) | |
for stop in stops: | |
desc_items = [] | |
if stop.stop_desc: | |
desc_items.append(stop.stop_desc) | |
if stop.stop_url: | |
desc_items.append('Stop info page: <a href="%s">%s</a>' % ( | |
stop.stop_url, stop.stop_url)) | |
description = '<br/>'.join(desc_items) or None | |
placemark = self._CreatePlacemark(stop_folder, stop.stop_name, | |
description=description) | |
point = ET.SubElement(placemark, 'Point') | |
coordinates = ET.SubElement(point, 'coordinates') | |
coordinates.text = '%.6f,%.6f' % (stop.stop_lon, stop.stop_lat) | |
return stop_folder | |
def _CreateRoutePatternsFolder(self, parent, route, | |
style_id=None, visible=True): | |
"""Create a KML Folder containing placemarks for each pattern in the route. | |
A pattern is a sequence of stops used by one of the trips in the route. | |
    If there are no patterns for the route then no folder is created and None
is returned. | |
Args: | |
parent: The parent ElementTree.Element instance. | |
route: The transitfeed.Route instance. | |
style_id: The id of a style to use if not None. | |
visible: Whether the folder is initially visible or not. | |
Returns: | |
The Folder ElementTree.Element instance or None if there are no patterns. | |
""" | |
pattern_id_to_trips = route.GetPatternIdTripDict() | |
if not pattern_id_to_trips: | |
return None | |
# sort by number of trips using the pattern | |
    pattern_trips = list(pattern_id_to_trips.values())
    pattern_trips.sort(key=len, reverse=True)
folder = self._CreateFolder(parent, 'Patterns', visible) | |
for n, trips in enumerate(pattern_trips): | |
trip_ids = [trip.trip_id for trip in trips] | |
name = 'Pattern %d (trips: %d)' % (n+1, len(trips)) | |
description = 'Trips using this pattern (%d in total): %s' % ( | |
len(trips), ', '.join(trip_ids)) | |
placemark = self._CreatePlacemark(folder, name, style_id, visible, | |
description) | |
coordinates = [(stop.stop_lon, stop.stop_lat) | |
for stop in trips[0].GetPattern()] | |
self._CreateLineString(placemark, coordinates) | |
return folder | |
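The descending sort by trip count can be seen in isolation with toy data (a sketch; a key-based sort avoids the old cmp-style comparator while producing the same order):

```python
# Toy stand-in for GetPatternIdTripDict(): pattern id -> list of trips.
pattern_id_to_trips_demo = {
    'p1': ['t1'],
    'p2': ['t2', 't3', 't4'],
    'p3': ['t5', 't6'],
}
# Most heavily used pattern first.
demo_sorted = sorted(pattern_id_to_trips_demo.values(), key=len, reverse=True)
```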
def _CreateRouteShapesFolder(self, schedule, parent, route, | |
style_id=None, visible=True): | |
"""Create a KML Folder for the shapes of a route. | |
The folder contains a placemark for each shape referenced by a trip in the | |
route. If there are no such shapes, no folder is created and None is | |
returned. | |
Args: | |
schedule: The transitfeed.Schedule instance. | |
parent: The parent ElementTree.Element instance. | |
route: The transitfeed.Route instance. | |
style_id: The id of a style to use if not None. | |
visible: Whether the placemark is initially visible or not. | |
Returns: | |
The Folder ElementTree.Element instance or None. | |
""" | |
shape_id_to_trips = {} | |
for trip in route.trips: | |
if trip.shape_id: | |
shape_id_to_trips.setdefault(trip.shape_id, []).append(trip) | |
if not shape_id_to_trips: | |
return None | |
# sort by the number of trips using the shape | |
    shape_id_to_trips_items = list(shape_id_to_trips.items())
    shape_id_to_trips_items.sort(key=lambda item: len(item[1]), reverse=True)
folder = self._CreateFolder(parent, 'Shapes', visible) | |
for shape_id, trips in shape_id_to_trips_items: | |
trip_ids = [trip.trip_id for trip in trips] | |
name = '%s (trips: %d)' % (shape_id, len(trips)) | |
description = 'Trips using this shape (%d in total): %s' % ( | |
len(trips), ', '.join(trip_ids)) | |
placemark = self._CreatePlacemark(folder, name, style_id, visible, | |
description) | |
self._CreateLineStringForShape(placemark, schedule.GetShape(shape_id)) | |
return folder | |
def _CreateRouteTripsFolder(self, parent, route, style_id=None, schedule=None): | |
"""Create a KML Folder containing all the trips in the route. | |
The folder contains a placemark for each of these trips. If there are no | |
trips in the route, no folder is created and None is returned. | |
Args: | |
parent: The parent ElementTree.Element instance. | |
route: The transitfeed.Route instance. | |
      style_id: A style id string for the placemarks or None.
      schedule: The transitfeed.Schedule instance or None.
Returns: | |
The Folder ElementTree.Element instance or None. | |
""" | |
if not route.trips: | |
return None | |
trips = list(route.trips) | |
trips.sort(key=lambda x: x.trip_id) | |
trips_folder = self._CreateFolder(parent, 'Trips', visible=False) | |
for trip in trips: | |
if (self.date_filter and | |
not trip.service_period.IsActiveOn(self.date_filter)): | |
continue | |
if trip.trip_headsign: | |
description = 'Headsign: %s' % trip.trip_headsign | |
else: | |
description = None | |
coordinate_list = [] | |
for secs, stoptime, tp in trip.GetTimeInterpolatedStops(): | |
if self.altitude_per_sec > 0: | |
          # Subtract 4 hours so the altitude origin sits at 4am, not midnight.
          coordinate_list.append((stoptime.stop.stop_lon,
                                  stoptime.stop.stop_lat,
                                  (secs - 3600 * 4) * self.altitude_per_sec))
else: | |
coordinate_list.append((stoptime.stop.stop_lon, | |
stoptime.stop.stop_lat)) | |
placemark = self._CreatePlacemark(trips_folder, | |
trip.trip_id, | |
style_id=style_id, | |
visible=False, | |
description=description) | |
self._CreateLineString(placemark, coordinate_list) | |
return trips_folder | |
def _CreateRoutesFolder(self, schedule, doc, route_type=None): | |
"""Create a KML Folder containing routes in a schedule. | |
    The folder contains a subfolder for each route in the schedule of type
    route_type. If route_type is None, then all routes are selected. Each
    subfolder contains a shapes folder, a patterns folder and, if show_trips
    is True, a subfolder containing placemarks for each of the trips in the
    route.
If there are no routes in the schedule then no folder is created and None | |
is returned. | |
Args: | |
schedule: The transitfeed.Schedule instance. | |
doc: The KML Document ElementTree.Element instance. | |
route_type: The route type integer or None. | |
Returns: | |
The Folder ElementTree.Element instance or None. | |
""" | |
def GetRouteName(route): | |
"""Return a placemark name for the route. | |
Args: | |
route: The transitfeed.Route instance. | |
Returns: | |
The name as a string. | |
""" | |
name_parts = [] | |
if route.route_short_name: | |
name_parts.append('<b>%s</b>' % route.route_short_name) | |
if route.route_long_name: | |
name_parts.append(route.route_long_name) | |
return ' - '.join(name_parts) or route.route_id | |
def GetRouteDescription(route): | |
"""Return a placemark description for the route. | |
Args: | |
route: The transitfeed.Route instance. | |
Returns: | |
The description as a string. | |
""" | |
desc_items = [] | |
if route.route_desc: | |
desc_items.append(route.route_desc) | |
if route.route_url: | |
desc_items.append('Route info page: <a href="%s">%s</a>' % ( | |
route.route_url, route.route_url)) | |
description = '<br/>'.join(desc_items) | |
return description or None | |
routes = [route for route in schedule.GetRouteList() | |
if route_type is None or route.route_type == route_type] | |
if not routes: | |
return None | |
    routes.sort(key=GetRouteName)
if route_type is not None: | |
route_type_names = {0: 'Tram, Streetcar or Light rail', | |
1: 'Subway or Metro', | |
2: 'Rail', | |
3: 'Bus', | |
4: 'Ferry', | |
5: 'Cable car', | |
6: 'Gondola or suspended cable car', | |
7: 'Funicular'} | |
type_name = route_type_names.get(route_type, str(route_type)) | |
folder_name = 'Routes - %s' % type_name | |
else: | |
folder_name = 'Routes' | |
routes_folder = self._CreateFolder(doc, folder_name, visible=False) | |
for route in routes: | |
style_id = self._CreateStyleForRoute(doc, route) | |
route_folder = self._CreateFolder(routes_folder, | |
GetRouteName(route), | |
description=GetRouteDescription(route)) | |
self._CreateRouteShapesFolder(schedule, route_folder, route, | |
style_id, False) | |
self._CreateRoutePatternsFolder(route_folder, route, style_id, False) | |
if self.show_trips: | |
self._CreateRouteTripsFolder(route_folder, route, style_id, schedule) | |
return routes_folder | |
def _CreateShapesFolder(self, schedule, doc): | |
"""Create a KML Folder containing all the shapes in a schedule. | |
The folder contains a placemark for each shape. If there are no shapes in | |
the schedule then the folder is not created and None is returned. | |
Args: | |
schedule: The transitfeed.Schedule instance. | |
doc: The KML Document ElementTree.Element instance. | |
Returns: | |
The Folder ElementTree.Element instance or None. | |
""" | |
if not schedule.GetShapeList(): | |
return None | |
shapes_folder = self._CreateFolder(doc, 'Shapes') | |
shapes = list(schedule.GetShapeList()) | |
shapes.sort(key=lambda x: x.shape_id) | |
for shape in shapes: | |
placemark = self._CreatePlacemark(shapes_folder, shape.shape_id) | |
self._CreateLineStringForShape(placemark, shape) | |
if self.shape_points: | |
self._CreateShapePointFolder(shapes_folder, shape) | |
return shapes_folder | |
def _CreateShapePointFolder(self, shapes_folder, shape): | |
"""Create a KML Folder containing all the shape points in a shape. | |
The folder contains placemarks for each shapepoint. | |
Args: | |
shapes_folder: A KML Shape Folder ElementTree.Element instance | |
shape: The shape to plot. | |
Returns: | |
The Folder ElementTree.Element instance or None. | |
""" | |
folder_name = shape.shape_id + ' Shape Points' | |
folder = self._CreateFolder(shapes_folder, folder_name, visible=False) | |
for (index, (lat, lon, dist)) in enumerate(shape.points): | |
placemark = self._CreatePlacemark(folder, str(index+1)) | |
point = ET.SubElement(placemark, 'Point') | |
coordinates = ET.SubElement(point, 'coordinates') | |
coordinates.text = '%.6f,%.6f' % (lon, lat) | |
return folder | |
def Write(self, schedule, output_file): | |
"""Writes out a feed as KML. | |
Args: | |
schedule: A transitfeed.Schedule object containing the feed to write. | |
output_file: The name of the output KML file, or file object to use. | |
""" | |
# Generate the DOM to write | |
root = ET.Element('kml') | |
root.attrib['xmlns'] = 'http://earth.google.com/kml/2.1' | |
doc = ET.SubElement(root, 'Document') | |
open_tag = ET.SubElement(doc, 'open') | |
open_tag.text = '1' | |
self._CreateStopsFolder(schedule, doc) | |
if self.split_routes: | |
route_types = set() | |
for route in schedule.GetRouteList(): | |
route_types.add(route.route_type) | |
route_types = list(route_types) | |
route_types.sort() | |
for route_type in route_types: | |
self._CreateRoutesFolder(schedule, doc, route_type) | |
else: | |
self._CreateRoutesFolder(schedule, doc) | |
self._CreateShapesFolder(schedule, doc) | |
# Make sure we pretty-print | |
self._SetIndentation(root) | |
# Now write the output | |
    # Accept any writable file-like object, not just the built-in file type.
    if hasattr(output_file, 'write'):
      output = output_file
    else:
      output = open(output_file, 'w')
output.write("""<?xml version="1.0" encoding="UTF-8"?>\n""") | |
ET.ElementTree(root).write(output, 'utf-8') | |
def main(): | |
usage = \ | |
'''%prog [options] <input GTFS.zip> [<output.kml>] | |
Reads GTFS file or directory <input GTFS.zip> and creates a KML file | |
<output.kml> that contains the geographical features of the input. If | |
<output.kml> is omitted a default filename is picked based on | |
<input GTFS.zip>. By default the KML contains all stops and shapes. | |
''' | |
parser = util.OptionParserLongError( | |
usage=usage, version='%prog '+transitfeed.__version__) | |
parser.add_option('-t', '--showtrips', action='store_true', | |
dest='show_trips', | |
help='include the individual trips for each route') | |
parser.add_option('-a', '--altitude_per_sec', action='store', type='float', | |
dest='altitude_per_sec', | |
help='if greater than 0 trips are drawn with time axis ' | |
'set to this many meters high for each second of time') | |
parser.add_option('-s', '--splitroutes', action='store_true', | |
dest='split_routes', | |
help='split the routes by type') | |
parser.add_option('-d', '--date_filter', action='store', type='string', | |
dest='date_filter', | |
help='Restrict to trips active on date YYYYMMDD') | |
parser.add_option('-p', '--display_shape_points', action='store_true', | |
dest='shape_points', | |
help='shows the actual points along shapes') | |
parser.set_defaults(altitude_per_sec=1.0) | |
options, args = parser.parse_args() | |
if len(args) < 1: | |
parser.error('You must provide the path of an input GTFS file.') | |
if args[0] == 'IWantMyCrash': | |
raise Exception('For testCrashHandler') | |
input_path = args[0] | |
if len(args) >= 2: | |
output_path = args[1] | |
else: | |
path = os.path.normpath(input_path) | |
(feed_dir, feed) = os.path.split(path) | |
if '.' in feed: | |
feed = feed.rsplit('.', 1)[0] # strip extension | |
output_filename = '%s.kml' % feed | |
output_path = os.path.join(feed_dir, output_filename) | |
loader = transitfeed.Loader(input_path, | |
problems=transitfeed.ProblemReporter()) | |
feed = loader.Load() | |
print "Writing %s" % output_path | |
writer = KMLWriter() | |
writer.show_trips = options.show_trips | |
writer.altitude_per_sec = options.altitude_per_sec | |
writer.split_routes = options.split_routes | |
writer.date_filter = options.date_filter | |
writer.shape_points = options.shape_points | |
writer.Write(feed, output_path) | |
if __name__ == '__main__': | |
util.RunWithCrashHandler(main) | |
#!/usr/bin/python | |
# | |
# Copyright 2007 Google Inc. All Rights Reserved. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""A tool for merging two Google Transit feeds. | |
Given two Google Transit feeds intended to cover two disjoint calendar
intervals, this tool will attempt to produce a single feed by merging as much | |
of the two feeds together as possible. | |
For example, most stops remain the same throughout the year. Therefore, many | |
of the stops given in stops.txt for the first feed represent the same stops | |
given in the second feed. This tool will try to merge these stops so they | |
only appear once in the resultant feed. | |
A note on terminology: The first schedule is referred to as the "old" schedule; | |
the second as the "new" schedule. The resultant schedule is referred to as | |
the "merged" schedule. Names of things in the old schedule are variations of | |
the letter "a" while names of things from the new schedule are variations of | |
"b". The objects that represents routes, agencies and so on are called | |
"entities". | |
usage: merge.py [options] old_feed_path new_feed_path merged_feed_path | |
Run merge.py --help for a list of the possible options. | |
""" | |
__author__ = 'timothy.stranex@gmail.com (Timothy Stranex)' | |
import datetime | |
import optparse | |
import os | |
import re | |
import sys | |
import time | |
import transitfeed | |
from transitfeed import util | |
import webbrowser | |
# TODO: | |
# 1. write unit tests that use actual data | |
# 2. write a proper trip and stop_times merger | |
# 3. add a serialised access method for stop_times and shapes to transitfeed | |
# 4. add support for merging schedules which have some service period overlap | |
def ApproximateDistanceBetweenPoints(pa, pb): | |
"""Finds the distance between two points on the Earth's surface. | |
This is an approximate distance based on assuming that the Earth is a sphere. | |
  The points are specified by their latitude and longitude.
Args: | |
pa: the first (lat, lon) point tuple | |
pb: the second (lat, lon) point tuple | |
Returns: | |
The distance as a float in metres. | |
""" | |
alat, alon = pa | |
blat, blon = pb | |
sa = transitfeed.Stop(lat=alat, lng=alon) | |
sb = transitfeed.Stop(lat=blat, lng=blon) | |
return transitfeed.ApproximateDistanceBetweenStops(sa, sb) | |
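transitfeed's helper computes a spherical great-circle distance. A self-contained haversine sketch with the same (lat, lon) tuple signature (the function name and the Earth radius constant are our assumptions, not part of transitfeed):

```python
import math

def approximate_distance(pa, pb, radius_m=6378135.0):
    """Great-circle distance in metres between two (lat, lon) tuples."""
    alat, alon = math.radians(pa[0]), math.radians(pa[1])
    blat, blon = math.radians(pb[0]), math.radians(pb[1])
    dlat, dlon = blat - alat, blon - alon
    # Haversine formula on a sphere of the given radius.
    a = (math.sin(dlat / 2) ** 2 +
         math.cos(alat) * math.cos(blat) * math.sin(dlon / 2) ** 2)
    return 2 * radius_m * math.asin(math.sqrt(a))
```

One degree of latitude comes out to roughly 111 km, which is a quick sanity check.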
class Error(Exception): | |
"""The base exception class for this module.""" | |
class MergeError(Error): | |
"""An error produced when two entities could not be merged.""" | |
class MergeProblemWithContext(transitfeed.ExceptionWithContext): | |
"""The base exception class for problem reporting in the merge module. | |
Attributes: | |
dataset_merger: The DataSetMerger that generated this problem. | |
entity_type_name: The entity type of the dataset_merger. This is just | |
dataset_merger.ENTITY_TYPE_NAME. | |
ERROR_TEXT: The text used for generating the problem message. | |
""" | |
def __init__(self, dataset_merger, problem_type=transitfeed.TYPE_WARNING, | |
**kwargs): | |
"""Initialise the exception object. | |
Args: | |
dataset_merger: The DataSetMerger instance that generated this problem. | |
problem_type: The problem severity. This should be set to one of the | |
corresponding constants in transitfeed. | |
kwargs: Keyword arguments to be saved as instance attributes. | |
""" | |
kwargs['type'] = problem_type | |
kwargs['entity_type_name'] = dataset_merger.ENTITY_TYPE_NAME | |
transitfeed.ExceptionWithContext.__init__(self, None, None, **kwargs) | |
self.dataset_merger = dataset_merger | |
def FormatContext(self): | |
return "In files '%s'" % self.dataset_merger.FILE_NAME | |
class SameIdButNotMerged(MergeProblemWithContext): | |
ERROR_TEXT = ("There is a %(entity_type_name)s in the old feed with id " | |
"'%(id)s' and one from the new feed with the same id but " | |
"they could not be merged:") | |
class CalendarsNotDisjoint(MergeProblemWithContext): | |
ERROR_TEXT = ("The service periods could not be merged since they are not " | |
"disjoint.") | |
class MergeNotImplemented(MergeProblemWithContext): | |
ERROR_TEXT = ("The feed merger does not currently support merging in this " | |
"file. The entries have been duplicated instead.") | |
class FareRulesBroken(MergeProblemWithContext): | |
ERROR_TEXT = ("The feed merger is currently unable to handle fare rules " | |
"properly.") | |
class MergeProblemReporterBase(transitfeed.ProblemReporterBase): | |
"""The base problem reporter class for the merge module.""" | |
def SameIdButNotMerged(self, dataset, entity_id, reason): | |
self._Report(SameIdButNotMerged(dataset, id=entity_id, reason=reason)) | |
def CalendarsNotDisjoint(self, dataset): | |
self._Report(CalendarsNotDisjoint(dataset, | |
problem_type=transitfeed.TYPE_ERROR)) | |
def MergeNotImplemented(self, dataset): | |
self._Report(MergeNotImplemented(dataset)) | |
def FareRulesBroken(self, dataset): | |
self._Report(FareRulesBroken(dataset)) | |
class ExceptionProblemReporter(MergeProblemReporterBase): | |
"""A problem reporter that reports errors by raising exceptions.""" | |
def __init__(self, raise_warnings=False): | |
"""Initialise. | |
Args: | |
raise_warnings: If this is True then warnings are also raised as | |
exceptions. | |
""" | |
MergeProblemReporterBase.__init__(self) | |
self._raise_warnings = raise_warnings | |
def _Report(self, merge_problem): | |
if self._raise_warnings or merge_problem.IsError(): | |
raise merge_problem | |
class HTMLProblemReporter(MergeProblemReporterBase): | |
"""A problem reporter which generates HTML output.""" | |
def __init__(self): | |
"""Initialise.""" | |
MergeProblemReporterBase.__init__(self) | |
self._dataset_warnings = {} # a map from DataSetMergers to their warnings | |
self._dataset_errors = {} | |
self._warning_count = 0 | |
self._error_count = 0 | |
def _Report(self, merge_problem): | |
if merge_problem.IsWarning(): | |
dataset_problems = self._dataset_warnings | |
self._warning_count += 1 | |
else: | |
dataset_problems = self._dataset_errors | |
self._error_count += 1 | |
problem_html = '<li>%s</li>' % ( | |
merge_problem.FormatProblem().replace('\n', '<br>')) | |
dataset_problems.setdefault(merge_problem.dataset_merger, []).append( | |
problem_html) | |
def _GenerateStatsTable(self, feed_merger): | |
"""Generate an HTML table of merge statistics. | |
Args: | |
feed_merger: The FeedMerger instance. | |
Returns: | |
The generated HTML as a string. | |
""" | |
rows = [] | |
rows.append('<tr><th class="header"/><th class="header">Merged</th>' | |
'<th class="header">Copied from old feed</th>' | |
'<th class="header">Copied from new feed</th></tr>') | |
for merger in feed_merger.GetMergerList(): | |
stats = merger.GetMergeStats() | |
if stats is None: | |
continue | |
merged, not_merged_a, not_merged_b = stats | |
rows.append('<tr><th class="header">%s</th>' | |
'<td class="header">%d</td>' | |
'<td class="header">%d</td>' | |
'<td class="header">%d</td></tr>' % | |
(merger.DATASET_NAME, merged, not_merged_a, not_merged_b)) | |
return '<table>%s</table>' % '\n'.join(rows) | |
def _GenerateSection(self, problem_type): | |
"""Generate a listing of the given type of problems. | |
Args: | |
problem_type: The type of problem. This is one of the problem type | |
constants from transitfeed. | |
Returns: | |
The generated HTML as a string. | |
""" | |
if problem_type == transitfeed.TYPE_WARNING: | |
dataset_problems = self._dataset_warnings | |
heading = 'Warnings' | |
else: | |
dataset_problems = self._dataset_errors | |
heading = 'Errors' | |
if not dataset_problems: | |
return '' | |
prefix = '<h2 class="issueHeader">%s:</h2>' % heading | |
dataset_sections = [] | |
for dataset_merger, problems in dataset_problems.items(): | |
dataset_sections.append('<h3>%s</h3><ol>%s</ol>' % ( | |
dataset_merger.FILE_NAME, '\n'.join(problems))) | |
body = '\n'.join(dataset_sections) | |
return prefix + body | |
def _GenerateSummary(self): | |
"""Generate a summary of the warnings and errors. | |
Returns: | |
The generated HTML as a string. | |
""" | |
items = [] | |
if self._dataset_errors: | |
items.append('errors: %d' % self._error_count) | |
if self._dataset_warnings: | |
items.append('warnings: %d' % self._warning_count) | |
if items: | |
return '<p><span class="fail">%s</span></p>' % '<br>'.join(items) | |
else: | |
return '<p><span class="pass">feeds merged successfully</span></p>' | |
def WriteOutput(self, output_file, feed_merger, | |
old_feed_path, new_feed_path, merged_feed_path): | |
"""Write the HTML output to a file. | |
Args: | |
output_file: The file object that the HTML output will be written to. | |
feed_merger: The FeedMerger instance. | |
old_feed_path: The path to the old feed file as a string. | |
new_feed_path: The path to the new feed file as a string | |
merged_feed_path: The path to the merged feed file as a string. This | |
may be None if no merged feed was written. | |
""" | |
if merged_feed_path is None: | |
html_merged_feed_path = '' | |
else: | |
html_merged_feed_path = '<p>Merged feed created: <code>%s</code></p>' % ( | |
merged_feed_path) | |
html_header = """<html> | |
<head> | |
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/> | |
<title>Feed Merger Results</title> | |
<style> | |
body {font-family: Georgia, serif; background-color: white} | |
.path {color: gray} | |
div.problem {max-width: 500px} | |
td,th {background-color: khaki; padding: 2px; font-family:monospace} | |
td.problem,th.problem {background-color: #dc143c; color: white; padding: 2px;
font-family:monospace} | |
table {border-spacing: 5px 0px; margin-top: 3px} | |
h3.issueHeader {padding-left: 1em} | |
span.pass {background-color: lightgreen} | |
span.fail {background-color: yellow} | |
.pass, .fail {font-size: 16pt; padding: 3px} | |
ol,.unused {padding-left: 40pt} | |
.header {background-color: white; font-family: Georgia, serif; padding: 0px} | |
th.header {text-align: right; font-weight: normal; color: gray} | |
.footer {font-size: 10pt} | |
</style> | |
</head> | |
<body> | |
<h1>Feed merger results</h1> | |
<p>Old feed: <code>%(old_feed_path)s</code></p> | |
<p>New feed: <code>%(new_feed_path)s</code></p> | |
%(html_merged_feed_path)s""" % locals() | |
html_stats = self._GenerateStatsTable(feed_merger) | |
html_summary = self._GenerateSummary() | |
html_errors = self._GenerateSection(transitfeed.TYPE_ERROR) | |
html_warnings = self._GenerateSection(transitfeed.TYPE_WARNING) | |
html_footer = """ | |
<div class="footer"> | |
Generated using transitfeed version %s on %s. | |
</div> | |
</body> | |
</html>""" % (transitfeed.__version__, | |
time.strftime('%B %d, %Y at %I:%M %p %Z')) | |
output_file.write(transitfeed.EncodeUnicode(html_header)) | |
output_file.write(transitfeed.EncodeUnicode(html_stats)) | |
output_file.write(transitfeed.EncodeUnicode(html_summary)) | |
output_file.write(transitfeed.EncodeUnicode(html_errors)) | |
output_file.write(transitfeed.EncodeUnicode(html_warnings)) | |
output_file.write(transitfeed.EncodeUnicode(html_footer)) | |
class ConsoleWarningRaiseErrorProblemReporter(transitfeed.ProblemReporterBase): | |
"""Problem reporter to use when loading feeds for merge.""" | |
def _Report(self, e): | |
if e.IsError(): | |
raise e | |
else: | |
print transitfeed.EncodeUnicode(e.FormatProblem()) | |
context = e.FormatContext() | |
if context: | |
print context | |
def LoadWithoutErrors(path, memory_db): | |
""""Return a Schedule object loaded from path; sys.exit for any error.""" | |
loading_problem_handler = ConsoleWarningRaiseErrorProblemReporter() | |
try: | |
schedule = transitfeed.Loader(path, | |
memory_db=memory_db, | |
problems=loading_problem_handler).Load() | |
except transitfeed.ExceptionWithContext, e: | |
print >>sys.stderr, ( | |
"\n\nFeeds to merge must load without any errors.\n" | |
"While loading %s the following error was found:\n%s\n%s\n" % | |
(path, e.FormatContext(), transitfeed.EncodeUnicode(e.FormatProblem()))) | |
sys.exit(1) | |
return schedule | |
class DataSetMerger(object): | |
"""A DataSetMerger is in charge of merging a set of entities. | |
This is an abstract class and should be subclassed for each different entity | |
type. | |
Attributes: | |
ENTITY_TYPE_NAME: The name of the entity type like 'agency' or 'stop'. | |
FILE_NAME: The name of the file containing this data set like 'agency.txt'. | |
DATASET_NAME: A name for the dataset like 'Agencies' or 'Stops'. | |
""" | |
def __init__(self, feed_merger): | |
"""Initialise. | |
Args: | |
feed_merger: The FeedMerger. | |
""" | |
self.feed_merger = feed_merger | |
self._num_merged = 0 | |
self._num_not_merged_a = 0 | |
self._num_not_merged_b = 0 | |
def _MergeIdentical(self, a, b): | |
"""Tries to merge two values. The values are required to be identical. | |
Args: | |
a: The first value. | |
b: The second value. | |
Returns: | |
The trivially merged value. | |
Raises: | |
MergeError: The values were not identical. | |
""" | |
if a != b: | |
raise MergeError("values must be identical ('%s' vs '%s')" % | |
(transitfeed.EncodeUnicode(a), | |
transitfeed.EncodeUnicode(b))) | |
return b | |
def _MergeIdenticalCaseInsensitive(self, a, b): | |
"""Tries to merge two strings. | |
    The strings are required to be the same ignoring case. The second string
    is always used as the merged value.
Args: | |
a: The first string. | |
b: The second string. | |
Returns: | |
The merged string. This is equal to the second string. | |
Raises: | |
MergeError: The strings were not the same ignoring case. | |
""" | |
if a.lower() != b.lower(): | |
raise MergeError("values must be the same (case insensitive) " | |
"('%s' vs '%s')" % (transitfeed.EncodeUnicode(a), | |
transitfeed.EncodeUnicode(b))) | |
return b | |
def _MergeOptional(self, a, b): | |
"""Tries to merge two values which may be None. | |
If both values are not None, they are required to be the same and the | |
merge is trivial. If one of the values is None and the other is not None, | |
the merge results in the one which is not None. If both are None, the merge | |
results in None. | |
Args: | |
a: The first value. | |
b: The second value. | |
Returns: | |
The merged value. | |
Raises: | |
MergeError: If both values are not None and are not the same. | |
""" | |
if a and b: | |
if a != b: | |
raise MergeError("values must be identical if both specified " | |
"('%s' vs '%s')" % (transitfeed.EncodeUnicode(a), | |
transitfeed.EncodeUnicode(b))) | |
return a or b | |
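The semantics of the two basic helpers above can be summarised with a standalone sketch (the lowercase names and ValueError are ours; the real methods raise MergeError):

```python
def merge_identical(a, b):
    # Both values must compare equal; the second is kept.
    if a != b:
        raise ValueError("values must be identical (%r vs %r)" % (a, b))
    return b

def merge_optional(a, b):
    # A missing (falsy) value defers to the other side; two present
    # values must agree.
    if a and b and a != b:
        raise ValueError("values must be identical if both specified")
    return a or b
```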
def _MergeSameAgency(self, a_agency_id, b_agency_id): | |
"""Merge agency ids to the corresponding agency id in the merged schedule. | |
Args: | |
a_agency_id: an agency id from the old schedule | |
b_agency_id: an agency id from the new schedule | |
Returns: | |
The agency id of the corresponding merged agency. | |
Raises: | |
MergeError: If a_agency_id and b_agency_id do not correspond to the same | |
merged agency. | |
KeyError: Either a_agency_id or b_agency_id is not a valid agency id.
""" | |
a_agency_id = (a_agency_id or | |
self.feed_merger.a_schedule.GetDefaultAgency().agency_id) | |
b_agency_id = (b_agency_id or | |
self.feed_merger.b_schedule.GetDefaultAgency().agency_id) | |
a_agency = self.feed_merger.a_merge_map[ | |
self.feed_merger.a_schedule.GetAgency(a_agency_id)] | |
b_agency = self.feed_merger.b_merge_map[ | |
self.feed_merger.b_schedule.GetAgency(b_agency_id)] | |
if a_agency != b_agency: | |
raise MergeError('agency must be the same') | |
return a_agency.agency_id | |
def _SchemedMerge(self, scheme, a, b): | |
"""Tries to merge two entities according to a merge scheme. | |
A scheme is specified by a map where the keys are entity attributes and the | |
values are merge functions such as DataSetMerger._MergeIdentical or
DataSetMerger._MergeOptional. The entity is first migrated to the merged schedule.
Then the attributes are individually merged as specified by the scheme. | |
Args: | |
scheme: The merge scheme, a map from entity attributes to merge | |
functions. | |
a: The entity from the old schedule. | |
b: The entity from the new schedule. | |
Returns: | |
The migrated and merged entity. | |
Raises: | |
MergeError: One of the attributes could not be merged.
""" | |
migrated = self._Migrate(b, self.feed_merger.b_schedule, False) | |
for attr, merger in scheme.items(): | |
a_attr = getattr(a, attr, None) | |
b_attr = getattr(b, attr, None) | |
try: | |
merged_attr = merger(a_attr, b_attr) | |
except MergeError, merge_error: | |
raise MergeError("Attribute '%s' could not be merged: %s." % ( | |
attr, merge_error)) | |
if migrated is not None: | |
setattr(migrated, attr, merged_attr) | |
return migrated | |
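The scheme-driven merge can be illustrated on plain objects. This is a hypothetical sketch: the Entity class, attribute names and merge helpers are stand-ins for the module's own types.

```python
class Entity(object):
    """A bare attribute holder standing in for a transitfeed entity."""
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

def merge_identical(a, b):
    if a != b:
        raise ValueError("values must be identical")
    return b

def merge_optional(a, b):
    if a and b and a != b:
        raise ValueError("values must be identical if both specified")
    return a or b

def schemed_merge(scheme, a, b, migrated):
    # For each attribute named by the scheme, combine the two source
    # values with that attribute's merge function.
    for attr, merger in scheme.items():
        setattr(migrated, attr,
                merger(getattr(a, attr, None), getattr(b, attr, None)))
    return migrated

a = Entity(name='Main St', url=None)
b = Entity(name='Main St', url='http://example.com')
merged = schemed_merge({'name': merge_identical, 'url': merge_optional},
                       a, b, Entity())
assert merged.name == 'Main St'
assert merged.url == 'http://example.com'
```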
def _MergeSameId(self): | |
"""Tries to merge entities based on their ids. | |
This tries to merge only the entities from the old and new schedules which | |
have the same id. These are added into the merged schedule. Entities which | |
do not merge or do not have the same id as another entity in the other | |
schedule are simply migrated into the merged schedule. | |
This method is less flexible than _MergeDifferentId since it only tries | |
to merge entities which have the same id while _MergeDifferentId tries to | |
merge everything. However, it is faster and so should be used whenever | |
possible. | |
This method makes use of various methods like _Merge and _Migrate which | |
are not implemented in the abstract DataSetMerger class. These methods
should be overridden in a subclass to allow _MergeSameId to work with
different entity types. | |
Returns: | |
The number of merged entities. | |
""" | |
a_not_merged = [] | |
b_not_merged = [] | |
for a in self._GetIter(self.feed_merger.a_schedule): | |
try: | |
b = self._GetById(self.feed_merger.b_schedule, self._GetId(a)) | |
except KeyError: | |
# there was no entity in B with the same id as a | |
a_not_merged.append(a) | |
continue | |
try: | |
self._Add(a, b, self._MergeEntities(a, b)) | |
self._num_merged += 1 | |
except MergeError, merge_error: | |
a_not_merged.append(a) | |
b_not_merged.append(b) | |
self._ReportSameIdButNotMerged(self._GetId(a), merge_error) | |
for b in self._GetIter(self.feed_merger.b_schedule): | |
try: | |
a = self._GetById(self.feed_merger.a_schedule, self._GetId(b)) | |
except KeyError: | |
# there was no entity in A with the same id as b | |
b_not_merged.append(b) | |
# migrate the remaining entities | |
for a in a_not_merged: | |
newid = self._HasId(self.feed_merger.b_schedule, self._GetId(a)) | |
self._Add(a, None, self._Migrate(a, self.feed_merger.a_schedule, newid)) | |
for b in b_not_merged: | |
newid = self._HasId(self.feed_merger.a_schedule, self._GetId(b)) | |
self._Add(None, b, self._Migrate(b, self.feed_merger.b_schedule, newid)) | |
self._num_not_merged_a = len(a_not_merged) | |
self._num_not_merged_b = len(b_not_merged) | |
return self._num_merged | |
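The same-id pass can be sketched with plain dicts in place of schedules (hypothetical names; the real method works through _GetIter, _GetById and friends):

```python
def merge_same_id(a_entities, b_entities, merge_fn):
    # a_entities/b_entities map ids to entities; merge_fn raises
    # ValueError when two entities with the same id cannot be merged.
    merged, a_only, b_only = {}, {}, {}
    for eid, a in a_entities.items():
        if eid in b_entities:
            try:
                merged[eid] = merge_fn(a, b_entities[eid])
                continue
            except ValueError:
                b_only[eid] = b_entities[eid]  # same id, not mergeable
        a_only[eid] = a
    for eid, b in b_entities.items():
        if eid not in a_entities:
            b_only[eid] = b
    return merged, a_only, b_only

def merge_equal(a, b):
    if a != b:
        raise ValueError('not identical')
    return a

m, a_only, b_only = merge_same_id(
    {'s1': 'x', 's2': 'y'}, {'s1': 'x', 's3': 'z'}, merge_equal)
assert m == {'s1': 'x'}
assert a_only == {'s2': 'y'}
assert b_only == {'s3': 'z'}
```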
def _MergeDifferentId(self): | |
"""Tries to merge all possible combinations of entities. | |
This tries to merge every entity in the old schedule with every entity in | |
the new schedule. Unlike _MergeSameId, the ids do not need to match. | |
However, _MergeDifferentId is much slower than _MergeSameId. | |
This method makes use of various methods like _Merge and _Migrate which | |
are not implemented in the abstract DataSetMerger class. These methods
should be overridden in a subclass to allow _MergeDifferentId to work with
different entity types. | |
Returns: | |
The number of merged entities. | |
""" | |
# TODO: The same entity from A could merge with multiple from B. | |
# This should either generate an error or should be prevented from | |
# happening. | |
for a in self._GetIter(self.feed_merger.a_schedule): | |
for b in self._GetIter(self.feed_merger.b_schedule): | |
try: | |
self._Add(a, b, self._MergeEntities(a, b)) | |
self._num_merged += 1 | |
except MergeError: | |
continue | |
for a in self._GetIter(self.feed_merger.a_schedule): | |
if a not in self.feed_merger.a_merge_map: | |
self._num_not_merged_a += 1 | |
newid = self._HasId(self.feed_merger.b_schedule, self._GetId(a)) | |
self._Add(a, None, | |
self._Migrate(a, self.feed_merger.a_schedule, newid)) | |
for b in self._GetIter(self.feed_merger.b_schedule): | |
if b not in self.feed_merger.b_merge_map: | |
self._num_not_merged_b += 1 | |
newid = self._HasId(self.feed_merger.a_schedule, self._GetId(b)) | |
self._Add(None, b, | |
self._Migrate(b, self.feed_merger.b_schedule, newid)) | |
return self._num_merged | |
def _ReportSameIdButNotMerged(self, entity_id, reason): | |
"""Report that two entities have the same id but could not be merged. | |
Args: | |
entity_id: The id of the entities. | |
reason: A string giving a reason why they could not be merged. | |
""" | |
self.feed_merger.problem_reporter.SameIdButNotMerged(self, | |
entity_id, | |
reason) | |
def _GetIter(self, schedule): | |
"""Returns an iterator of entities for this data set in the given schedule. | |
This method usually corresponds to one of the methods from | |
transitfeed.Schedule like GetAgencyList() or GetRouteList(). | |
Note: This method must be overridden in a subclass if _MergeSameId or
_MergeDifferentId are to be used. | |
Args: | |
schedule: Either the old or new schedule from the FeedMerger. | |
Returns: | |
An iterator of entities. | |
""" | |
raise NotImplementedError() | |
def _GetById(self, schedule, entity_id): | |
"""Returns an entity given its id. | |
This method usually corresponds to one of the methods from | |
transitfeed.Schedule like GetAgency() or GetRoute(). | |
Note: This method must be overridden in a subclass if _MergeSameId or
_MergeDifferentId are to be used. | |
Args: | |
schedule: Either the old or new schedule from the FeedMerger. | |
entity_id: The id string of the entity. | |
Returns: | |
The entity with the given id. | |
Raises: | |
KeyError: There is no entity with the given id.
""" | |
raise NotImplementedError() | |
def _HasId(self, schedule, entity_id): | |
"""Check if the schedule has an entity with the given id. | |
Args: | |
schedule: The transitfeed.Schedule instance to look in. | |
entity_id: The id of the entity. | |
Returns: | |
True if the schedule has an entity with the id or False if not. | |
""" | |
try: | |
self._GetById(schedule, entity_id) | |
has = True | |
except KeyError: | |
has = False | |
return has | |
def _MergeEntities(self, a, b): | |
"""Tries to merge the two entities. | |
Note: This method must be overridden in a subclass if _MergeSameId or
_MergeDifferentId are to be used. | |
Args: | |
a: The entity from the old schedule. | |
b: The entity from the new schedule. | |
Returns: | |
The merged migrated entity. | |
Raises: | |
MergeError: The entities could not be merged.
""" | |
raise NotImplementedError() | |
def _Migrate(self, entity, schedule, newid): | |
"""Migrates the entity to the merge schedule. | |
This involves copying the entity and updating any ids to point to the | |
corresponding entities in the merged schedule. If newid is True then | |
a unique id is generated for the migrated entity using the original id | |
as a prefix. | |
Note: This method must be overridden in a subclass if _MergeSameId or
_MergeDifferentId are to be used. | |
Args: | |
entity: The entity to migrate. | |
schedule: The schedule from the FeedMerger that contains the entity.
newid: Whether to generate a new id (True) or keep the original (False). | |
Returns: | |
The migrated entity. | |
""" | |
raise NotImplementedError() | |
def _Add(self, a, b, migrated): | |
"""Adds the migrated entity to the merged schedule. | |
If a and b are both not None, it means that a and b were merged to create | |
migrated. If one of a or b is None, it means that the other was not merged | |
but has been migrated. This mapping is registered with the FeedMerger. | |
Note: This method must be overridden in a subclass if _MergeSameId or
_MergeDifferentId are to be used. | |
Args: | |
a: The original entity from the old schedule. | |
b: The original entity from the new schedule. | |
migrated: The migrated entity for the merged schedule. | |
""" | |
raise NotImplementedError() | |
def _GetId(self, entity): | |
"""Returns the id of the given entity. | |
Note: This method must be overridden in a subclass if _MergeSameId or
_MergeDifferentId are to be used. | |
Args: | |
entity: The entity. | |
Returns: | |
The id of the entity as a string or None. | |
""" | |
raise NotImplementedError() | |
def MergeDataSets(self): | |
"""Merge the data sets. | |
This method is called in FeedMerger.MergeSchedule(). | |
Note: This method must be overridden in a subclass.
Returns: | |
A boolean which is False if the data set could not be merged and
as a result the entire merge should be aborted. In this case, the problem | |
will have been reported using the FeedMerger's problem reporter. | |
""" | |
raise NotImplementedError() | |
def GetMergeStats(self): | |
"""Returns some merge statistics. | |
These are given as a tuple (merged, not_merged_a, not_merged_b) where | |
"merged" is the number of merged entities, "not_merged_a" is the number of | |
entities from the old schedule that were not merged and "not_merged_b" is | |
the number of entities from the new schedule that were not merged. | |
The return value can also be None. This means that there are no statistics | |
for this entity type. | |
The statistics are only available after MergeDataSets() has been called. | |
Returns: | |
Either the statistics tuple or None. | |
""" | |
return (self._num_merged, self._num_not_merged_a, self._num_not_merged_b) | |
class AgencyMerger(DataSetMerger): | |
"""A DataSetMerger for agencies.""" | |
ENTITY_TYPE_NAME = 'agency' | |
FILE_NAME = 'agency.txt' | |
DATASET_NAME = 'Agencies' | |
def _GetIter(self, schedule): | |
return schedule.GetAgencyList() | |
def _GetById(self, schedule, agency_id): | |
return schedule.GetAgency(agency_id) | |
def _MergeEntities(self, a, b): | |
"""Merges two agencies. | |
To be merged, they are required to have the same id, name, url and | |
timezone. The remaining language attribute is taken from the new agency. | |
Args: | |
a: The first agency. | |
b: The second agency. | |
Returns: | |
The merged agency. | |
Raises: | |
MergeError: The agencies could not be merged. | |
""" | |
def _MergeAgencyId(a_agency_id, b_agency_id): | |
"""Merge two agency ids. | |
The only difference between this and _MergeIdentical() is that the values | |
None and '' are regarded as being the same. | |
Args: | |
a_agency_id: The first agency id. | |
b_agency_id: The second agency id. | |
Returns: | |
The merged agency id. | |
Raises: | |
MergeError: The agency ids could not be merged. | |
""" | |
a_agency_id = a_agency_id or None | |
b_agency_id = b_agency_id or None | |
return self._MergeIdentical(a_agency_id, b_agency_id) | |
scheme = {'agency_id': _MergeAgencyId, | |
'agency_name': self._MergeIdentical, | |
'agency_url': self._MergeIdentical, | |
'agency_timezone': self._MergeIdentical} | |
return self._SchemedMerge(scheme, a, b) | |
def _Migrate(self, entity, schedule, newid): | |
a = transitfeed.Agency(field_dict=entity) | |
if newid: | |
a.agency_id = self.feed_merger.GenerateId(entity.agency_id) | |
return a | |
def _Add(self, a, b, migrated): | |
self.feed_merger.Register(a, b, migrated) | |
self.feed_merger.merged_schedule.AddAgencyObject(migrated) | |
def _GetId(self, entity): | |
return entity.agency_id | |
def MergeDataSets(self): | |
self._MergeSameId() | |
return True | |
class StopMerger(DataSetMerger): | |
"""A DataSetMerger for stops. | |
Attributes: | |
largest_stop_distance: The largest distance, in metres, allowed between
stops for them to be merged.
""" | |
ENTITY_TYPE_NAME = 'stop' | |
FILE_NAME = 'stops.txt' | |
DATASET_NAME = 'Stops' | |
largest_stop_distance = 10.0 | |
def __init__(self, feed_merger): | |
DataSetMerger.__init__(self, feed_merger) | |
self._merged = [] | |
self._a_not_merged = [] | |
self._b_not_merged = [] | |
def SetLargestStopDistance(self, distance): | |
"""Sets largest_stop_distance.""" | |
self.largest_stop_distance = distance | |
def _GetIter(self, schedule): | |
return schedule.GetStopList() | |
def _GetById(self, schedule, stop_id): | |
return schedule.GetStop(stop_id) | |
def _MergeEntities(self, a, b): | |
"""Merges two stops. | |
For the stops to be merged, they must have: | |
- the same stop_id | |
- the same stop_name (case insensitive) | |
- the same zone_id | |
- locations less than largest_stop_distance apart | |
The other attributes can have arbitrary changes. The merged attributes are
taken from the new stop. | |
Args: | |
a: The first stop. | |
b: The second stop. | |
Returns: | |
The merged stop. | |
Raises: | |
MergeError: The stops could not be merged. | |
""" | |
distance = transitfeed.ApproximateDistanceBetweenStops(a, b) | |
if distance > self.largest_stop_distance: | |
raise MergeError("Stops are too far apart: %.1fm " | |
"(largest_stop_distance is %.1fm)." % | |
(distance, self.largest_stop_distance)) | |
scheme = {'stop_id': self._MergeIdentical, | |
'stop_name': self._MergeIdenticalCaseInsensitive, | |
'zone_id': self._MergeIdentical, | |
'location_type': self._MergeIdentical} | |
return self._SchemedMerge(scheme, a, b) | |
def _Migrate(self, entity, schedule, newid): | |
migrated_stop = transitfeed.Stop(field_dict=entity) | |
if newid: | |
migrated_stop.stop_id = self.feed_merger.GenerateId(entity.stop_id) | |
return migrated_stop | |
def _Add(self, a, b, migrated_stop): | |
self.feed_merger.Register(a, b, migrated_stop) | |
# The migrated_stop will be added to feed_merger.merged_schedule later | |
# since adding must be done after the zone_ids have been finalized. | |
if a and b: | |
self._merged.append((a, b, migrated_stop)) | |
elif a: | |
self._a_not_merged.append((a, migrated_stop)) | |
elif b: | |
self._b_not_merged.append((b, migrated_stop)) | |
def _GetId(self, entity): | |
return entity.stop_id | |
def MergeDataSets(self): | |
num_merged = self._MergeSameId() | |
fm = self.feed_merger | |
# now we do all the zone_id and parent_station mapping | |
# the zone_ids for merged stops can be preserved | |
for (a, b, merged_stop) in self._merged: | |
assert a.zone_id == b.zone_id | |
fm.a_zone_map[a.zone_id] = a.zone_id | |
fm.b_zone_map[b.zone_id] = b.zone_id | |
merged_stop.zone_id = a.zone_id | |
if merged_stop.parent_station: | |
# Merged stop has a parent. Update it to be the parent it had in b. | |
parent_in_b = fm.b_schedule.GetStop(b.parent_station) | |
merged_stop.parent_station = fm.b_merge_map[parent_in_b].stop_id | |
fm.merged_schedule.AddStopObject(merged_stop) | |
self._UpdateAndMigrateUnmerged(self._a_not_merged, fm.a_zone_map, | |
fm.a_merge_map, fm.a_schedule) | |
self._UpdateAndMigrateUnmerged(self._b_not_merged, fm.b_zone_map, | |
fm.b_merge_map, fm.b_schedule) | |
print 'Stops merged: %d of %d, %d' % ( | |
num_merged, | |
len(fm.a_schedule.GetStopList()), | |
len(fm.b_schedule.GetStopList())) | |
return True | |
def _UpdateAndMigrateUnmerged(self, not_merged_stops, zone_map, merge_map, | |
schedule): | |
"""Correct references in migrated unmerged stops and add to merged_schedule. | |
For stops migrated from one of the input feeds to the output feed, update the
parent_station and zone_id references to point to objects in the output | |
feed. Then add the migrated stop to the new schedule. | |
Args: | |
not_merged_stops: list of stops from one input feed that have not been | |
merged | |
zone_map: map from zone_id in the input feed to zone_id in the output feed | |
merge_map: map from Stop objects in the input feed to Stop objects in | |
the output feed | |
schedule: the input Schedule object | |
""" | |
# for the unmerged stops, we use an already mapped zone_id if possible | |
# if not, we generate a new one and add it to the map | |
for stop, migrated_stop in not_merged_stops: | |
if stop.zone_id in zone_map: | |
migrated_stop.zone_id = zone_map[stop.zone_id] | |
else: | |
migrated_stop.zone_id = self.feed_merger.GenerateId(stop.zone_id) | |
zone_map[stop.zone_id] = migrated_stop.zone_id | |
if stop.parent_station: | |
parent_original = schedule.GetStop(stop.parent_station) | |
migrated_stop.parent_station = merge_map[parent_original].stop_id | |
self.feed_merger.merged_schedule.AddStopObject(migrated_stop) | |
class RouteMerger(DataSetMerger): | |
"""A DataSetMerger for routes.""" | |
ENTITY_TYPE_NAME = 'route' | |
FILE_NAME = 'routes.txt' | |
DATASET_NAME = 'Routes' | |
def _GetIter(self, schedule): | |
return schedule.GetRouteList() | |
def _GetById(self, schedule, route_id): | |
return schedule.GetRoute(route_id) | |
def _MergeEntities(self, a, b): | |
scheme = {'route_short_name': self._MergeIdentical, | |
'route_long_name': self._MergeIdentical, | |
'agency_id': self._MergeSameAgency, | |
'route_type': self._MergeIdentical, | |
'route_id': self._MergeIdentical, | |
'route_url': self._MergeOptional, | |
'route_color': self._MergeOptional, | |
'route_text_color': self._MergeOptional} | |
return self._SchemedMerge(scheme, a, b) | |
def _Migrate(self, entity, schedule, newid): | |
migrated_route = transitfeed.Route(field_dict=entity) | |
if newid: | |
migrated_route.route_id = self.feed_merger.GenerateId(entity.route_id) | |
if entity.agency_id: | |
original_agency = schedule.GetAgency(entity.agency_id) | |
else: | |
original_agency = schedule.GetDefaultAgency() | |
migrated_agency = self.feed_merger.GetMergedObject(original_agency) | |
migrated_route.agency_id = migrated_agency.agency_id | |
return migrated_route | |
def _Add(self, a, b, migrated_route): | |
self.feed_merger.Register(a, b, migrated_route) | |
self.feed_merger.merged_schedule.AddRouteObject(migrated_route) | |
def _GetId(self, entity): | |
return entity.route_id | |
def MergeDataSets(self): | |
self._MergeSameId() | |
return True | |
class ServicePeriodMerger(DataSetMerger): | |
"""A DataSetMerger for service periods. | |
Attributes: | |
require_disjoint_calendars: A boolean specifying whether to require | |
disjoint calendars when merging (True) or not (False). | |
""" | |
ENTITY_TYPE_NAME = 'service period' | |
FILE_NAME = 'calendar.txt/calendar_dates.txt' | |
DATASET_NAME = 'Service Periods' | |
def __init__(self, feed_merger): | |
DataSetMerger.__init__(self, feed_merger) | |
self.require_disjoint_calendars = True | |
def _ReportSameIdButNotMerged(self, entity_id, reason): | |
pass | |
def _GetIter(self, schedule): | |
return schedule.GetServicePeriodList() | |
def _GetById(self, schedule, service_id): | |
return schedule.GetServicePeriod(service_id) | |
def _MergeEntities(self, a, b): | |
"""Tries to merge two service periods. | |
Note: Currently this just raises a MergeError since service periods cannot | |
be merged. | |
Args: | |
a: The first service period. | |
b: The second service period. | |
Returns: | |
The merged service period. | |
Raises: | |
MergeError: When the service periods could not be merged. | |
""" | |
raise MergeError('Cannot merge service periods') | |
def _Migrate(self, original_service_period, schedule, newid): | |
migrated_service_period = transitfeed.ServicePeriod() | |
migrated_service_period.day_of_week = list( | |
original_service_period.day_of_week) | |
migrated_service_period.start_date = original_service_period.start_date | |
migrated_service_period.end_date = original_service_period.end_date | |
migrated_service_period.date_exceptions = dict( | |
original_service_period.date_exceptions) | |
if newid: | |
migrated_service_period.service_id = self.feed_merger.GenerateId( | |
original_service_period.service_id) | |
else: | |
migrated_service_period.service_id = original_service_period.service_id | |
return migrated_service_period | |
def _Add(self, a, b, migrated_service_period): | |
self.feed_merger.Register(a, b, migrated_service_period) | |
self.feed_merger.merged_schedule.AddServicePeriodObject( | |
migrated_service_period) | |
def _GetId(self, entity): | |
return entity.service_id | |
def MergeDataSets(self): | |
if self.require_disjoint_calendars and not self.CheckDisjointCalendars(): | |
self.feed_merger.problem_reporter.CalendarsNotDisjoint(self) | |
return False | |
self._MergeSameId() | |
self.feed_merger.problem_reporter.MergeNotImplemented(self) | |
return True | |
def DisjoinCalendars(self, cutoff): | |
"""Forces the old and new calendars to be disjoint about a cutoff date. | |
This truncates the service periods of the old schedule so that service | |
stops one day before the given cutoff date and truncates the new schedule | |
so that service only begins on the cutoff date. | |
Args: | |
cutoff: The cutoff date as a string in YYYYMMDD format. The timezone | |
is the same as used in the calendar.txt file. | |
""" | |
def TruncatePeriod(service_period, start, end): | |
"""Truncate the service period to into the range [start, end]. | |
Args: | |
service_period: The service period to truncate. | |
start: The start date as a string in YYYYMMDD format. | |
end: The end date as a string in YYYYMMDD format. | |
""" | |
service_period.start_date = max(service_period.start_date, start) | |
service_period.end_date = min(service_period.end_date, end) | |
dates_to_delete = [] | |
for k in service_period.date_exceptions: | |
if (k < start) or (k > end): | |
dates_to_delete.append(k) | |
for k in dates_to_delete: | |
del service_period.date_exceptions[k] | |
# find the date one day before cutoff | |
year = int(cutoff[:4]) | |
month = int(cutoff[4:6]) | |
day = int(cutoff[6:8]) | |
cutoff_date = datetime.date(year, month, day) | |
one_day_delta = datetime.timedelta(days=1) | |
before = (cutoff_date - one_day_delta).strftime('%Y%m%d') | |
for a in self.feed_merger.a_schedule.GetServicePeriodList(): | |
TruncatePeriod(a, 0, before)  # 0 sorts below any YYYYMMDD string in Python 2
for b in self.feed_merger.b_schedule.GetServicePeriodList(): | |
TruncatePeriod(b, cutoff, '9'*8) | |
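The only delicate step above is computing the day before the cutoff. A standalone sketch of that conversion (using strptime rather than manual slicing, which should be equivalent for well-formed dates):

```python
import datetime

def day_before(yyyymmdd):
    # Parse a GTFS-style YYYYMMDD date string and step back one
    # calendar day, handling month and year boundaries.
    d = datetime.datetime.strptime(yyyymmdd, '%Y%m%d').date()
    return (d - datetime.timedelta(days=1)).strftime('%Y%m%d')

assert day_before('20100301') == '20100228'  # month boundary
assert day_before('20100101') == '20091231'  # year boundary
```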
def CheckDisjointCalendars(self): | |
"""Check whether any old service periods intersect with any new ones. | |
This is a rather coarse check based on | |
transitfeed.ServicePeriod.GetDateRange.
Returns: | |
True if the calendars are disjoint or False if not. | |
""" | |
# TODO: Do an exact check here. | |
a_service_periods = self.feed_merger.a_schedule.GetServicePeriodList() | |
b_service_periods = self.feed_merger.b_schedule.GetServicePeriodList() | |
for a_service_period in a_service_periods: | |
a_start, a_end = a_service_period.GetDateRange() | |
for b_service_period in b_service_periods: | |
b_start, b_end = b_service_period.GetDateRange() | |
overlap_start = max(a_start, b_start) | |
overlap_end = min(a_end, b_end) | |
if overlap_end >= overlap_start: | |
return False | |
return True | |
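The check above reduces to interval overlap on date strings: two ranges intersect iff the larger start is no later than the smaller end. Since YYYYMMDD strings order correctly under lexicographic comparison, no date parsing is needed. A standalone sketch:

```python
def ranges_overlap(a_start, a_end, b_start, b_end):
    # Two closed ranges intersect iff max(starts) <= min(ends).
    return max(a_start, b_start) <= min(a_end, b_end)

assert ranges_overlap('20100101', '20100301', '20100215', '20100501')
assert not ranges_overlap('20100101', '20100301', '20100302', '20100501')
```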
def GetMergeStats(self): | |
return None | |
class FareMerger(DataSetMerger): | |
"""A DataSetMerger for fares.""" | |
ENTITY_TYPE_NAME = 'fare' | |
FILE_NAME = 'fare_attributes.txt' | |
DATASET_NAME = 'Fares' | |
def _GetIter(self, schedule): | |
return schedule.GetFareList() | |
def _GetById(self, schedule, fare_id): | |
return schedule.GetFare(fare_id) | |
def _MergeEntities(self, a, b): | |
"""Merges the fares if all the attributes are the same.""" | |
scheme = {'price': self._MergeIdentical, | |
'currency_type': self._MergeIdentical, | |
'payment_method': self._MergeIdentical, | |
'transfers': self._MergeIdentical, | |
'transfer_duration': self._MergeIdentical} | |
return self._SchemedMerge(scheme, a, b) | |
def _Migrate(self, original_fare, schedule, newid): | |
migrated_fare = transitfeed.Fare( | |
field_list=original_fare.GetFieldValuesTuple()) | |
if newid: | |
migrated_fare.fare_id = self.feed_merger.GenerateId( | |
original_fare.fare_id) | |
return migrated_fare | |
def _Add(self, a, b, migrated_fare): | |
self.feed_merger.Register(a, b, migrated_fare) | |
self.feed_merger.merged_schedule.AddFareObject(migrated_fare) | |
def _GetId(self, fare): | |
return fare.fare_id | |
def MergeDataSets(self): | |
num_merged = self._MergeSameId() | |
print 'Fares merged: %d of %d, %d' % ( | |
num_merged, | |
len(self.feed_merger.a_schedule.GetFareList()), | |
len(self.feed_merger.b_schedule.GetFareList())) | |
return True | |
class ShapeMerger(DataSetMerger): | |
"""A DataSetMerger for shapes. | |
In this implementation, merging shapes means just taking the new shape. | |
The only conditions for a merge are that the shape_ids are the same and | |
the endpoints of the old and new shapes are no further than | |
largest_shape_distance apart. | |
Attributes: | |
largest_shape_distance: The largest distance, in metres, allowed between
the endpoints of two shapes for them to be merged.
""" | |
ENTITY_TYPE_NAME = 'shape' | |
FILE_NAME = 'shapes.txt' | |
DATASET_NAME = 'Shapes' | |
largest_shape_distance = 10.0 | |
def SetLargestShapeDistance(self, distance): | |
"""Sets largest_shape_distance.""" | |
self.largest_shape_distance = distance | |
def _GetIter(self, schedule): | |
return schedule.GetShapeList() | |
def _GetById(self, schedule, shape_id): | |
return schedule.GetShape(shape_id) | |
def _MergeEntities(self, a, b): | |
"""Merges the shapes by taking the new shape. | |
Args: | |
a: The first transitfeed.Shape instance. | |
b: The second transitfeed.Shape instance. | |
Returns: | |
The merged shape. | |
Raises: | |
MergeError: If the ids are different or if the endpoints are further | |
than largest_shape_distance apart. | |
""" | |
if a.shape_id != b.shape_id: | |
raise MergeError('shape_id must be the same') | |
distance = max(ApproximateDistanceBetweenPoints(a.points[0][:2], | |
b.points[0][:2]), | |
ApproximateDistanceBetweenPoints(a.points[-1][:2], | |
b.points[-1][:2])) | |
if distance > self.largest_shape_distance: | |
raise MergeError('The shape endpoints are too far away: %.1fm ' | |
'(largest_shape_distance is %.1fm)' % | |
(distance, self.largest_shape_distance)) | |
return self._Migrate(b, self.feed_merger.b_schedule, False) | |
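The endpoint test takes the larger of the two endpoint separations. A sketch with planar points and a hypothetical Euclidean distance standing in for the great-circle calculation:

```python
import math

def dist(p, q):
    # Planar stand-in for ApproximateDistanceBetweenPoints.
    return math.hypot(p[0] - q[0], p[1] - q[1])

a_pts = [(0.0, 0.0), (5.0, 0.0)]  # first and last points of shape a
b_pts = [(0.0, 3.0), (5.0, 4.0)]  # first and last points of shape b
endpoint_gap = max(dist(a_pts[0], b_pts[0]), dist(a_pts[-1], b_pts[-1]))
assert endpoint_gap == 4.0  # the larger of 3.0 and 4.0
```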
def _Migrate(self, original_shape, schedule, newid): | |
migrated_shape = transitfeed.Shape(original_shape.shape_id) | |
if newid: | |
migrated_shape.shape_id = self.feed_merger.GenerateId( | |
original_shape.shape_id) | |
for (lat, lon, dist) in original_shape.points: | |
migrated_shape.AddPoint(lat=lat, lon=lon, distance=dist) | |
return migrated_shape | |
def _Add(self, a, b, migrated_shape): | |
self.feed_merger.Register(a, b, migrated_shape) | |
self.feed_merger.merged_schedule.AddShapeObject(migrated_shape) | |
def _GetId(self, shape): | |
return shape.shape_id | |
def MergeDataSets(self): | |
self._MergeSameId() | |
return True | |
class TripMerger(DataSetMerger): | |
"""A DataSetMerger for trips. | |
This implementation makes no attempt to merge trips; it simply migrates
them all to the merged feed. | |
""" | |
ENTITY_TYPE_NAME = 'trip' | |
FILE_NAME = 'trips.txt' | |
DATASET_NAME = 'Trips' | |
def _ReportSameIdButNotMerged(self, trip_id, reason): | |
pass | |
def _GetIter(self, schedule): | |
return schedule.GetTripList() | |
def _GetById(self, schedule, trip_id): | |
return schedule.GetTrip(trip_id) | |
def _MergeEntities(self, a, b): | |
"""Raises a MergeError because currently trips cannot be merged.""" | |
raise MergeError('Cannot merge trips') | |
def _Migrate(self, original_trip, schedule, newid): | |
migrated_trip = transitfeed.Trip(field_dict=original_trip) | |
# Make new trip_id first. AddTripObject reports a problem if it conflicts | |
# with an existing id. | |
if newid: | |
migrated_trip.trip_id = self.feed_merger.GenerateId( | |
original_trip.trip_id) | |
# Need to add trip to schedule before copying stoptimes | |
self.feed_merger.merged_schedule.AddTripObject(migrated_trip, | |
validate=False) | |
if schedule == self.feed_merger.a_schedule: | |
merge_map = self.feed_merger.a_merge_map | |
else: | |
merge_map = self.feed_merger.b_merge_map | |
original_route = schedule.GetRoute(original_trip.route_id) | |
migrated_trip.route_id = merge_map[original_route].route_id | |
original_service_period = schedule.GetServicePeriod( | |
original_trip.service_id) | |
migrated_trip.service_id = merge_map[original_service_period].service_id | |
if original_trip.block_id: | |
migrated_trip.block_id = '%s_%s' % ( | |
self.feed_merger.GetScheduleName(schedule), | |
original_trip.block_id) | |
if original_trip.shape_id: | |
original_shape = schedule.GetShape(original_trip.shape_id) | |
migrated_trip.shape_id = merge_map[original_shape].shape_id | |
for original_stop_time in original_trip.GetStopTimes(): | |
migrated_stop_time = transitfeed.StopTime( | |
None, | |
merge_map[original_stop_time.stop], | |
original_stop_time.arrival_time, | |
original_stop_time.departure_time, | |
original_stop_time.stop_headsign, | |
original_stop_time.pickup_type, | |
original_stop_time.drop_off_type, | |
original_stop_time.shape_dist_traveled, | |
original_stop_time.arrival_secs, | |
original_stop_time.departure_secs) | |
migrated_trip.AddStopTimeObject(migrated_stop_time) | |
for headway_period in original_trip.GetHeadwayPeriodTuples(): | |
migrated_trip.AddHeadwayPeriod(*headway_period) | |
return migrated_trip | |
def _Add(self, a, b, migrated_trip): | |
# Validate now, since it wasn't done in _Migrate | |
migrated_trip.Validate(self.feed_merger.merged_schedule.problem_reporter) | |
self.feed_merger.Register(a, b, migrated_trip) | |
def _GetId(self, trip): | |
return trip.trip_id | |
def MergeDataSets(self): | |
self._MergeSameId() | |
self.feed_merger.problem_reporter.MergeNotImplemented(self) | |
return True | |
def GetMergeStats(self): | |
return None | |
class FareRuleMerger(DataSetMerger): | |
"""A DataSetMerger for fare rules.""" | |
ENTITY_TYPE_NAME = 'fare rule' | |
FILE_NAME = 'fare_rules.txt' | |
DATASET_NAME = 'Fare Rules' | |
def MergeDataSets(self): | |
"""Merge the fare rule datasets. | |
The fare rules are first migrated. Merging is done by removing any | |
duplicate rules. | |
Returns: | |
True since fare rules can always be merged. | |
""" | |
rules = set() | |
for (schedule, merge_map, zone_map) in ([self.feed_merger.a_schedule, | |
self.feed_merger.a_merge_map, | |
self.feed_merger.a_zone_map], | |
[self.feed_merger.b_schedule, | |
self.feed_merger.b_merge_map, | |
self.feed_merger.b_zone_map]): | |
for fare in schedule.GetFareList(): | |
for fare_rule in fare.GetFareRuleList(): | |
fare_id = merge_map[schedule.GetFare(fare_rule.fare_id)].fare_id | |
route_id = (fare_rule.route_id and | |
merge_map[schedule.GetRoute(fare_rule.route_id)].route_id) | |
origin_id = (fare_rule.origin_id and | |
zone_map[fare_rule.origin_id]) | |
destination_id = (fare_rule.destination_id and | |
zone_map[fare_rule.destination_id]) | |
contains_id = (fare_rule.contains_id and | |
zone_map[fare_rule.contains_id]) | |
rules.add((fare_id, route_id, origin_id, destination_id, | |
contains_id)) | |
for fare_rule_tuple in rules: | |
migrated_fare_rule = transitfeed.FareRule(*fare_rule_tuple) | |
self.feed_merger.merged_schedule.AddFareRuleObject(migrated_fare_rule) | |
if rules: | |
self.feed_merger.problem_reporter.FareRulesBroken(self) | |
print 'Fare Rules: union has %d fare rules' % len(rules) | |
return True | |
def GetMergeStats(self): | |
return None | |
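The fare-rule merge above works by normalising every rule to a tuple and collecting the tuples in a set, so exact duplicates collapse automatically. A minimal standalone sketch of that pattern (the helper name `DedupeRules` and the sample field values are illustrative, not part of the module):

```python
# Illustrative sketch of the fare-rule dedup: normalise each rule to a
# tuple of its fare_rules.txt columns and collect them in a set, so
# duplicate rules are dropped automatically.
def DedupeRules(rule_tuples):
  """Return the set of unique rule tuples (order is not preserved)."""
  return set(rule_tuples)

rules = DedupeRules([
    ('fare_1', 'route_A', None, None, None),
    ('fare_1', 'route_A', None, None, None),  # exact duplicate, dropped
    ('fare_2', None, 'zone_1', 'zone_2', None),
])
```

Because tuples are hashable, set membership gives the same dedup semantics the merger relies on, with no explicit comparison loop.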
class FeedMerger(object): | |
"""A class for merging two whole feeds. | |
This class takes two instances of transitfeed.Schedule and uses | |
DataSetMerger instances to merge the feeds and produce the resultant | |
merged feed. | |
Attributes: | |
a_schedule: The old transitfeed.Schedule instance. | |
b_schedule: The new transitfeed.Schedule instance. | |
problem_reporter: The merge problem reporter. | |
merged_schedule: The merged transitfeed.Schedule instance. | |
a_merge_map: A map from old entities to merged entities. | |
b_merge_map: A map from new entities to merged entities. | |
a_zone_map: A map from old zone ids to merged zone ids. | |
b_zone_map: A map from new zone ids to merged zone ids. | |
""" | |
def __init__(self, a_schedule, b_schedule, merged_schedule, | |
problem_reporter=None): | |
"""Initialise the merger. | |
Once this initialiser has been called, a_schedule and b_schedule should | |
not be modified. | |
Args:
a_schedule: The old schedule, an instance of transitfeed.Schedule.
b_schedule: The new schedule, an instance of transitfeed.Schedule.
merged_schedule: The schedule into which the merge results are written,
an instance of transitfeed.Schedule.
problem_reporter: The problem reporter, an instance of
transitfeed.ProblemReporterBase. This can be None, in
which case the ExceptionProblemReporter is used.
"""
self.a_schedule = a_schedule | |
self.b_schedule = b_schedule | |
self.merged_schedule = merged_schedule | |
self.a_merge_map = {} | |
self.b_merge_map = {} | |
self.a_zone_map = {} | |
self.b_zone_map = {} | |
self._mergers = [] | |
self._idnum = max(self._FindLargestIdPostfixNumber(self.a_schedule), | |
self._FindLargestIdPostfixNumber(self.b_schedule)) | |
if problem_reporter is not None: | |
self.problem_reporter = problem_reporter | |
else: | |
self.problem_reporter = ExceptionProblemReporter() | |
def _FindLargestIdPostfixNumber(self, schedule): | |
"""Finds the largest integer used as the ending of an id in the schedule. | |
Args: | |
schedule: The schedule to check. | |
Returns: | |
The maximum integer used as an ending for an id. | |
""" | |
postfix_number_re = re.compile(r'(\d+)$')
def ExtractPostfixNumber(entity_id): | |
"""Try to extract an integer from the end of entity_id. | |
If entity_id is None or if there is no integer ending the id, zero is | |
returned. | |
Args: | |
entity_id: An id string or None. | |
Returns: | |
An integer ending the entity_id or zero. | |
""" | |
if entity_id is None: | |
return 0 | |
match = postfix_number_re.search(entity_id) | |
if match is not None: | |
return int(match.group(1)) | |
else: | |
return 0 | |
id_data_sets = {'agency_id': schedule.GetAgencyList(), | |
'stop_id': schedule.GetStopList(), | |
'route_id': schedule.GetRouteList(), | |
'trip_id': schedule.GetTripList(), | |
'service_id': schedule.GetServicePeriodList(), | |
'fare_id': schedule.GetFareList(), | |
'shape_id': schedule.GetShapeList()} | |
max_postfix_number = 0 | |
for id_name, entity_list in id_data_sets.items(): | |
for entity in entity_list: | |
entity_id = getattr(entity, id_name) | |
postfix_number = ExtractPostfixNumber(entity_id) | |
max_postfix_number = max(max_postfix_number, postfix_number) | |
return max_postfix_number | |
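The postfix scan above reduces to a small regex idiom: anchor a digit group at the end of the string, treat misses (and None ids) as zero, and take the maximum. A self-contained sketch, with a hypothetical `ids` sample list:

```python
import re

# Sketch of the postfix-number scan: pull a trailing integer off each id
# and track the maximum, treating ids without one (or None) as 0.
postfix_number_re = re.compile(r'(\d+)$')

def ExtractPostfixNumber(entity_id):
  if entity_id is None:
    return 0
  match = postfix_number_re.search(entity_id)
  return int(match.group(1)) if match else 0

ids = ['stop_12', 'stop', None, 'route7']
largest = max(ExtractPostfixNumber(i) for i in ids)
```

Seeding the id counter at this maximum is what lets GenerateId() append numbers without colliding with any id already present in either feed.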
def GetScheduleName(self, schedule): | |
"""Returns a single letter identifier for the schedule.
This only works for the old and new schedules, which return 'a' and 'b'
respectively. These identifiers are used when generating ids.
Args: | |
schedule: The transitfeed.Schedule instance. | |
Returns: | |
The schedule identifier. | |
Raises: | |
KeyError: schedule is not the old or new schedule. | |
""" | |
return {self.a_schedule: 'a', self.b_schedule: 'b'}[schedule] | |
def GenerateId(self, entity_id=None): | |
"""Generate a unique id based on the given id. | |
This is done by appending a counter which is then incremented. The | |
counter is initialised at the maximum number used as an ending for | |
any id in the old and new schedules. | |
Args: | |
entity_id: The base id string. This is allowed to be None. | |
Returns: | |
The generated id. | |
""" | |
self._idnum += 1 | |
if entity_id: | |
return '%s_merged_%d' % (entity_id, self._idnum) | |
else: | |
return 'merged_%d' % self._idnum | |
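GenerateId() can be understood in isolation as a shared counter, seeded above the largest postfix already in use, appended to the base id (or used alone when there is none). A minimal sketch; the `IdGenerator` class is a hypothetical stand-in for the FeedMerger state, not part of the module:

```python
# Minimal sketch of the id-generation scheme: one shared counter, seeded
# above the largest postfix already in use, incremented on every call.
class IdGenerator(object):  # hypothetical helper, not in the module
  def __init__(self, start):
    self._idnum = start

  def GenerateId(self, entity_id=None):
    self._idnum += 1
    if entity_id:
      return '%s_merged_%d' % (entity_id, self._idnum)
    return 'merged_%d' % self._idnum

gen = IdGenerator(start=41)
first = gen.GenerateId('stop_3')  # 'stop_3_merged_42'
second = gen.GenerateId()         # 'merged_43'
```

Because the counter is shared across all entity types and both feeds, every generated id is unique even when the same base id appears in both schedules.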
def Register(self, a, b, migrated_entity): | |
"""Registers a merge mapping. | |
If a and b are both not None, entities a and b were merged to produce
migrated_entity. If only one of a or b is not None, that entity was not
merged but simply migrated.
The effect of a call to register is to update a_merge_map and b_merge_map | |
according to the merge. | |
Args: | |
a: The entity from the old feed or None. | |
b: The entity from the new feed or None. | |
migrated_entity: The migrated entity. | |
""" | |
if a is not None: self.a_merge_map[a] = migrated_entity | |
if b is not None: self.b_merge_map[b] = migrated_entity | |
def AddMerger(self, merger): | |
"""Add a DataSetMerger to be run by Merge(). | |
Args: | |
merger: The DataSetMerger instance. | |
""" | |
self._mergers.append(merger) | |
def AddDefaultMergers(self): | |
"""Adds the default DataSetMergers defined in this module.""" | |
self.AddMerger(AgencyMerger(self)) | |
self.AddMerger(StopMerger(self)) | |
self.AddMerger(RouteMerger(self)) | |
self.AddMerger(ServicePeriodMerger(self)) | |
self.AddMerger(FareMerger(self)) | |
self.AddMerger(ShapeMerger(self)) | |
self.AddMerger(TripMerger(self)) | |
self.AddMerger(FareRuleMerger(self)) | |
def GetMerger(self, cls): | |
"""Looks for an added DataSetMerger derived from the given class. | |
Args: | |
cls: A class derived from DataSetMerger. | |
Returns: | |
The matching DataSetMerger instance. | |
Raises: | |
LookupError: No matching DataSetMerger has been added. | |
""" | |
for merger in self._mergers: | |
if isinstance(merger, cls): | |
return merger | |
raise LookupError('No matching DataSetMerger found') | |
def GetMergerList(self): | |
"""Returns the list of DataSetMerger instances that have been added.""" | |
return self._mergers | |
def MergeSchedules(self): | |
"""Merge the schedules. | |
This is done by running the DataSetMergers that have been added with | |
AddMerger() in the order that they were added. | |
Returns: | |
True if the merge was successful. | |
""" | |
for merger in self._mergers: | |
if not merger.MergeDataSets(): | |
return False | |
return True | |
def GetMergedSchedule(self): | |
"""Returns the merged schedule. | |
This will be empty before MergeSchedules() is called. | |
Returns: | |
The merged schedule. | |
""" | |
return self.merged_schedule | |
def GetMergedObject(self, original): | |
"""Returns an object that represents original in the merged schedule.""" | |
# TODO: I think this would be better implemented by adding a private | |
# attribute to the objects in the original feeds | |
merged = (self.a_merge_map.get(original) or | |
self.b_merge_map.get(original)) | |
if merged: | |
return merged | |
else: | |
raise KeyError() | |
def main(): | |
"""Run the merge driver program.""" | |
usage = \ | |
"""%prog [options] <input GTFS a.zip> <input GTFS b.zip> <output GTFS.zip> | |
Merges <input GTFS a.zip> and <input GTFS b.zip> into a new GTFS file | |
<output GTFS.zip>. | |
""" | |
parser = util.OptionParserLongError( | |
usage=usage, version='%prog '+transitfeed.__version__) | |
parser.add_option('--cutoff_date', | |
dest='cutoff_date', | |
default=None, | |
help='a transition date from the old feed to the new ' | |
'feed in the format YYYYMMDD') | |
parser.add_option('--largest_stop_distance', | |
dest='largest_stop_distance', | |
default=StopMerger.largest_stop_distance, | |
help='the furthest distance two stops can be apart and ' | |
'still be merged, in metres') | |
parser.add_option('--largest_shape_distance', | |
dest='largest_shape_distance', | |
default=ShapeMerger.largest_shape_distance, | |
help='the furthest distance the endpoints of two shapes ' | |
'can be apart and the shape still be merged, in metres') | |
parser.add_option('--html_output_path', | |
dest='html_output_path', | |
default='merge-results.html', | |
help='write the html output to this file') | |
parser.add_option('--no_browser', | |
dest='no_browser', | |
action='store_true', | |
help='prevents the merge results from being opened in a ' | |
'browser') | |
parser.add_option('-m', '--memory_db', dest='memory_db', action='store_true', | |
help='Use in-memory sqlite db instead of a temporary file. ' | |
'It is faster but uses more RAM.') | |
parser.set_defaults(memory_db=False) | |
(options, args) = parser.parse_args() | |
if len(args) != 3: | |
parser.error('You did not provide all required command line arguments.') | |
old_feed_path = os.path.abspath(args[0]) | |
new_feed_path = os.path.abspath(args[1]) | |
merged_feed_path = os.path.abspath(args[2]) | |
if old_feed_path.find("IWantMyCrash") != -1: | |
# See test/testmerge.py | |
raise Exception('For testing the merge crash handler.') | |
a_schedule = LoadWithoutErrors(old_feed_path, options.memory_db) | |
b_schedule = LoadWithoutErrors(new_feed_path, options.memory_db) | |
merged_schedule = transitfeed.Schedule(memory_db=options.memory_db) | |
problem_reporter = HTMLProblemReporter() | |
feed_merger = FeedMerger(a_schedule, b_schedule, merged_schedule, | |
problem_reporter) | |
feed_merger.AddDefaultMergers() | |
feed_merger.GetMerger(StopMerger).SetLargestStopDistance(float( | |
options.largest_stop_distance)) | |
feed_merger.GetMerger(ShapeMerger).SetLargestShapeDistance(float( | |
options.largest_shape_distance)) | |
if options.cutoff_date is not None: | |
service_period_merger = feed_merger.GetMerger(ServicePeriodMerger) | |
service_period_merger.DisjoinCalendars(options.cutoff_date) | |
if feed_merger.MergeSchedules(): | |
feed_merger.GetMergedSchedule().WriteGoogleTransitFeed(merged_feed_path) | |
else: | |
merged_feed_path = None | |
output_file = file(options.html_output_path, 'w') | |
problem_reporter.WriteOutput(output_file, feed_merger, | |
old_feed_path, new_feed_path, merged_feed_path) | |
output_file.close() | |
if not options.no_browser: | |
webbrowser.open('file://%s' % os.path.abspath(options.html_output_path)) | |
if __name__ == '__main__': | |
util.RunWithCrashHandler(main) | |
#!/usr/bin/python | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
""" | |
An example application that uses the transitfeed module. | |
You must provide a Google Maps API key. | |
""" | |
import BaseHTTPServer, sys, urlparse | |
import bisect | |
from gtfsscheduleviewer.marey_graph import MareyGraph | |
import gtfsscheduleviewer | |
import mimetypes | |
import os.path | |
import re | |
import signal | |
import simplejson | |
import socket | |
import time | |
import transitfeed | |
from transitfeed import util | |
import urllib | |
# By default Windows kills Python with Ctrl+Break. Instead make Ctrl+Break | |
# raise a KeyboardInterrupt. | |
if hasattr(signal, 'SIGBREAK'): | |
signal.signal(signal.SIGBREAK, signal.default_int_handler) | |
mimetypes.add_type('text/plain', '.vbs') | |
class ResultEncoder(simplejson.JSONEncoder): | |
def default(self, obj): | |
try: | |
iterable = iter(obj) | |
except TypeError: | |
pass | |
else: | |
return list(iterable) | |
return simplejson.JSONEncoder.default(self, obj) | |
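The same fallback pattern works with the stdlib json module, which has an identical `default()` hook: try to iterate the unencodable object, and render it as a list before deferring to the base class's error path. A sketch (the class name `IterEncoder` is illustrative):

```python
import json

# Same fallback pattern as ResultEncoder, shown with the stdlib json
# module: any iterable that json can't encode natively (sets, generators)
# is rendered as a list; everything else falls through to the base error.
class IterEncoder(json.JSONEncoder):
  def default(self, obj):
    try:
      iterable = iter(obj)
    except TypeError:
      pass
    else:
      return list(iterable)
    return json.JSONEncoder.default(self, obj)

encoded = IterEncoder().encode({'stops': set(['s1'])})
```

This is what lets the request handlers return generators and sets directly and still produce valid JSON arrays.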
# Code taken from | |
# http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/425210/index_txt | |
# An alternate approach is shown at | |
# http://mail.python.org/pipermail/python-list/2003-July/212751.html | |
# but it requires multiple threads. A sqlite object can only be used from one | |
# thread. | |
class StoppableHTTPServer(BaseHTTPServer.HTTPServer): | |
def server_bind(self): | |
BaseHTTPServer.HTTPServer.server_bind(self) | |
self.socket.settimeout(1) | |
self._run = True | |
def get_request(self): | |
while self._run: | |
try: | |
sock, addr = self.socket.accept() | |
sock.settimeout(None) | |
return (sock, addr) | |
except socket.timeout: | |
pass | |
def stop(self): | |
self._run = False | |
def serve(self): | |
while self._run: | |
self.handle_request() | |
def StopToTuple(stop): | |
"""Return a tuple as expected by the javascript function addStopMarkerFromList."""
return (stop.stop_id, stop.stop_name, float(stop.stop_lat), | |
float(stop.stop_lon), stop.location_type) | |
class ScheduleRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler): | |
def do_GET(self): | |
scheme, host, path, x, params, fragment = urlparse.urlparse(self.path) | |
parsed_params = {} | |
for k in params.split('&'): | |
k = urllib.unquote(k) | |
if '=' in k: | |
k, v = k.split('=', 1) | |
parsed_params[k] = unicode(v, 'utf8') | |
else: | |
parsed_params[k] = '' | |
if path == '/': | |
return self.handle_GET_home() | |
m = re.match(r'/json/([a-z]{1,64})', path) | |
if m: | |
handler_name = 'handle_json_GET_%s' % m.group(1) | |
handler = getattr(self, handler_name, None) | |
if callable(handler): | |
return self.handle_json_wrapper_GET(handler, parsed_params) | |
# Restrict allowable file names to prevent relative path attacks etc | |
m = re.match(r'/file/([a-z0-9_-]{1,64}\.?[a-z0-9_-]{1,64})$', path) | |
if m and m.group(1): | |
try: | |
f, mime_type = self.OpenFile(m.group(1)) | |
return self.handle_static_file_GET(f, mime_type) | |
except IOError, e: | |
print "Error: unable to open %s" % m.group(1) | |
# Ignore and treat as 404 | |
m = re.match(r'/([a-z]{1,64})', path) | |
if m: | |
handler_name = 'handle_GET_%s' % m.group(1) | |
handler = getattr(self, handler_name, None) | |
if callable(handler): | |
return handler(parsed_params) | |
return self.handle_GET_default(parsed_params, path) | |
def OpenFile(self, filename): | |
"""Try to open filename in the static files directory of this server. | |
Return a tuple (file object, string mime_type) or raise an exception.""" | |
(mime_type, encoding) = mimetypes.guess_type(filename) | |
assert mime_type | |
# A crude guess of when we should use binary mode. Without it non-unix | |
# platforms may corrupt binary files. | |
if mime_type.startswith('text/'): | |
mode = 'r' | |
else: | |
mode = 'rb' | |
return open(os.path.join(self.server.file_dir, filename), mode), mime_type | |
def handle_GET_default(self, parsed_params, path): | |
self.send_error(404) | |
def handle_static_file_GET(self, fh, mime_type): | |
content = fh.read() | |
self.send_response(200) | |
self.send_header('Content-Type', mime_type) | |
self.send_header('Content-Length', str(len(content))) | |
self.end_headers() | |
self.wfile.write(content) | |
def AllowEditMode(self): | |
return False | |
def handle_GET_home(self): | |
schedule = self.server.schedule | |
(min_lat, min_lon, max_lat, max_lon) = schedule.GetStopBoundingBox() | |
forbid_editing = ('true', 'false')[self.AllowEditMode()] | |
agency = ', '.join(a.agency_name for a in schedule.GetAgencyList()).encode('utf-8') | |
key = self.server.key | |
host = self.server.host | |
# A very simple template system. For a fixed set of values replace [xxx] | |
# with the value of local variable xxx | |
f, _ = self.OpenFile('index.html') | |
content = f.read() | |
for v in ('agency', 'min_lat', 'min_lon', 'max_lat', 'max_lon', 'key', | |
'host', 'forbid_editing'): | |
content = content.replace('[%s]' % v, str(locals()[v])) | |
self.send_response(200) | |
self.send_header('Content-Type', 'text/html') | |
self.send_header('Content-Length', str(len(content))) | |
self.end_headers() | |
self.wfile.write(content) | |
def handle_json_GET_routepatterns(self, params): | |
"""Given a route_id generate a list of patterns of the route. For each | |
pattern include some basic information and a few sample trips.""" | |
schedule = self.server.schedule | |
route = schedule.GetRoute(params.get('route', None)) | |
if not route: | |
self.send_error(404) | |
return | |
time = int(params.get('time', 0)) | |
sample_size = 3 # For each pattern return the start time for this many trips | |
pattern_id_trip_dict = route.GetPatternIdTripDict() | |
patterns = [] | |
for pattern_id, trips in pattern_id_trip_dict.items(): | |
time_stops = trips[0].GetTimeStops() | |
if not time_stops: | |
continue | |
has_non_zero_trip_type = False
for trip in trips: | |
if trip['trip_type'] and trip['trip_type'] != '0': | |
has_non_zero_trip_type = True | |
name = u'%s to %s, %d stops' % (time_stops[0][2].stop_name, time_stops[-1][2].stop_name, len(time_stops)) | |
transitfeed.SortListOfTripByTime(trips) | |
num_trips = len(trips) | |
if num_trips <= sample_size: | |
start_sample_index = 0 | |
num_after_sample = 0 | |
else: | |
# Will return sample_size trips that start after the 'time' param. | |
# Linear search because I couldn't find a built-in way to do a binary | |
# search with a custom key. | |
start_sample_index = len(trips) | |
for i, trip in enumerate(trips): | |
if trip.GetStartTime() >= time: | |
start_sample_index = i | |
break | |
num_after_sample = num_trips - (start_sample_index + sample_size) | |
if num_after_sample < 0: | |
# Less than sample_size trips start after 'time' so return all the | |
# last sample_size trips. | |
num_after_sample = 0 | |
start_sample_index = num_trips - sample_size | |
sample = [] | |
for t in trips[start_sample_index:start_sample_index + sample_size]: | |
sample.append( (t.GetStartTime(), t.trip_id) ) | |
patterns.append((name, pattern_id, start_sample_index, sample, | |
num_after_sample, (0,1)[has_non_zero_trip_type])) | |
patterns.sort() | |
return patterns | |
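The linear scan in the handler above exists because, at the time, bisect had no key argument. One portable alternative the comment alludes to: precompute the sort-key column once, then binary-search that. A sketch with hypothetical `(start_time, trip_id)` stubs standing in for trip objects:

```python
import bisect

# Alternative to the linear scan in handle_json_GET_routepatterns:
# precompute the start-time column for the (already sorted) trips, then
# bisect it to find the first trip starting at or after a given time.
trips = [(100, 't1'), (200, 't2'), (300, 't3'), (400, 't4')]
start_times = [t[0] for t in trips]  # sorted by start time

def FirstTripAtOrAfter(time):
  """Index of the first trip whose start time is >= time."""
  return bisect.bisect_left(start_times, time)

index = FirstTripAtOrAfter(250)  # first trip after 250 is t3, index 2
```

Building the key column is O(N), but each subsequent lookup is O(log N), which pays off when the same trip list is queried repeatedly. (Python 3.10+ bisect also accepts a `key=` argument directly.)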
def handle_json_wrapper_GET(self, handler, parsed_params): | |
"""Call handler and output the return value in JSON.""" | |
schedule = self.server.schedule | |
result = handler(parsed_params) | |
content = ResultEncoder().encode(result) | |
self.send_response(200) | |
self.send_header('Content-Type', 'text/plain') | |
self.send_header('Content-Length', str(len(content))) | |
self.end_headers() | |
self.wfile.write(content) | |
def handle_json_GET_routes(self, params): | |
"""Return a list of all routes.""" | |
schedule = self.server.schedule | |
result = [] | |
for r in schedule.GetRouteList(): | |
result.append( (r.route_id, r.route_short_name, r.route_long_name) ) | |
result.sort(key = lambda x: x[1:3]) | |
return result | |
def handle_json_GET_routerow(self, params): | |
schedule = self.server.schedule | |
route = schedule.GetRoute(params.get('route', None)) | |
return [transitfeed.Route._FIELD_NAMES, route.GetFieldValuesTuple()] | |
def handle_json_GET_triprows(self, params): | |
"""Return a list of rows from the feed file that are related to this | |
trip.""" | |
schedule = self.server.schedule | |
try: | |
trip = schedule.GetTrip(params.get('trip', None)) | |
except KeyError: | |
# if a non-existent trip is searched for, then return nothing
return | |
route = schedule.GetRoute(trip.route_id) | |
trip_row = dict(trip.iteritems()) | |
route_row = dict(route.iteritems()) | |
return [['trips.txt', trip_row], ['routes.txt', route_row]] | |
def handle_json_GET_tripstoptimes(self, params): | |
schedule = self.server.schedule | |
try: | |
trip = schedule.GetTrip(params.get('trip')) | |
except KeyError: | |
# if a non-existent trip is searched for, then return nothing
return | |
time_stops = trip.GetTimeStops() | |
stops = [] | |
times = [] | |
for arr,dep,stop in time_stops: | |
stops.append(StopToTuple(stop)) | |
times.append(arr) | |
return [stops, times] | |
def handle_json_GET_tripshape(self, params): | |
schedule = self.server.schedule | |
try: | |
trip = schedule.GetTrip(params.get('trip')) | |
except KeyError: | |
# if a non-existent trip is searched for, then return nothing
return | |
points = [] | |
if trip.shape_id: | |
shape = schedule.GetShape(trip.shape_id) | |
for (lat, lon, dist) in shape.points: | |
points.append((lat, lon)) | |
else: | |
time_stops = trip.GetTimeStops() | |
for arr,dep,stop in time_stops: | |
points.append((stop.stop_lat, stop.stop_lon)) | |
return points | |
def handle_json_GET_neareststops(self, params): | |
"""Return a list of the nearest 'limit' stops to 'lat', 'lon'""" | |
schedule = self.server.schedule | |
lat = float(params.get('lat')) | |
lon = float(params.get('lon')) | |
limit = int(params.get('limit')) | |
stops = schedule.GetNearestStops(lat=lat, lon=lon, n=limit) | |
return [StopToTuple(s) for s in stops] | |
def handle_json_GET_boundboxstops(self, params): | |
"""Return a list of up to 'limit' stops within bounding box with 'n','e' | |
and 's','w' in the NE and SW corners. Does not handle boxes crossing | |
longitude line 180.""" | |
schedule = self.server.schedule | |
n = float(params.get('n')) | |
e = float(params.get('e')) | |
s = float(params.get('s')) | |
w = float(params.get('w')) | |
limit = int(params.get('limit')) | |
stops = schedule.GetStopsInBoundingBox(north=n, east=e, south=s, west=w, n=limit) | |
return [StopToTuple(s) for s in stops] | |
def handle_json_GET_stopsearch(self, params): | |
schedule = self.server.schedule | |
query = params.get('q', '').lower()
matches = [] | |
for s in schedule.GetStopList(): | |
if s.stop_id.lower().find(query) != -1 or s.stop_name.lower().find(query) != -1: | |
matches.append(StopToTuple(s)) | |
return matches | |
def handle_json_GET_stoptrips(self, params): | |
"""Given a stop_id and time in seconds since midnight return the next | |
trips to visit the stop.""" | |
schedule = self.server.schedule | |
stop = schedule.GetStop(params.get('stop', None)) | |
time = int(params.get('time', 0)) | |
time_trips = stop.GetStopTimeTrips(schedule) | |
time_trips.sort() # OPT: use bisect.insort to make this O(N*ln(N)) -> O(N) | |
# Keep the first 5 after param 'time'. | |
# Need to make a tuple to find the correct bisect point
time_trips = time_trips[bisect.bisect_left(time_trips, (time, 0)):] | |
time_trips = time_trips[:5] | |
# TODO: combine times for a route to show next 2 departure times | |
result = [] | |
for time, (trip, index), tp in time_trips: | |
headsign = None | |
# Find the most recent headsign from the StopTime objects | |
for stoptime in trip.GetStopTimes()[index::-1]: | |
if stoptime.stop_headsign: | |
headsign = stoptime.stop_headsign | |
break | |
# If stop_headsign isn't found, look for a trip_headsign | |
if not headsign: | |
headsign = trip.trip_headsign | |
route = schedule.GetRoute(trip.route_id) | |
trip_name = '' | |
if route.route_short_name: | |
trip_name += route.route_short_name | |
if route.route_long_name: | |
if len(trip_name): | |
trip_name += " - " | |
trip_name += route.route_long_name | |
if headsign: | |
trip_name += " (Direction: %s)" % headsign | |
result.append((time, (trip.trip_id, trip_name, trip.service_id), tp)) | |
return result | |
def handle_GET_ttablegraph(self,params): | |
"""Draw a Marey graph in SVG for a pattern (collection of trips in a route | |
that visit the same sequence of stops).""" | |
schedule = self.server.schedule | |
marey = MareyGraph() | |
trip = schedule.GetTrip(params.get('trip', None)) | |
route = schedule.GetRoute(trip.route_id) | |
height = int(params.get('height', 300)) | |
if not route: | |
print 'no such route' | |
self.send_error(404) | |
return | |
pattern_id_trip_dict = route.GetPatternIdTripDict() | |
pattern_id = trip.pattern_id | |
if pattern_id not in pattern_id_trip_dict: | |
print 'no pattern %s found in %s' % (pattern_id, pattern_id_trip_dict.keys()) | |
self.send_error(404) | |
return | |
triplist = pattern_id_trip_dict[pattern_id] | |
pattern_start_time = min((t.GetStartTime() for t in triplist)) | |
pattern_end_time = max((t.GetEndTime() for t in triplist)) | |
marey.SetSpan(pattern_start_time,pattern_end_time) | |
marey.Draw(triplist[0].GetPattern(), triplist, height) | |
content = marey.Draw() | |
self.send_response(200) | |
self.send_header('Content-Type', 'image/svg+xml') | |
self.send_header('Content-Length', str(len(content))) | |
self.end_headers() | |
self.wfile.write(content) | |
def FindPy2ExeBase(): | |
"""If this is running in py2exe, return the install directory; otherwise
return None."""
# py2exe puts gtfsscheduleviewer in library.zip. For py2exe setup.py is | |
# configured to put the data next to library.zip. | |
windows_ending = gtfsscheduleviewer.__file__.find('\\library.zip\\') | |
if windows_ending != -1: | |
return transitfeed.__file__[:windows_ending] | |
else: | |
return None | |
def FindDefaultFileDir(): | |
"""Return the path of the directory containing the static files. By default | |
the directory is called 'files'. The location depends on where setup.py put | |
it.""" | |
base = FindPy2ExeBase() | |
if base: | |
return os.path.join(base, 'schedule_viewer_files') | |
else: | |
# For all other distributions 'files' is in the gtfsscheduleviewer | |
# directory. | |
base = os.path.dirname(gtfsscheduleviewer.__file__) # Strip __init__.py | |
return os.path.join(base, 'files') | |
def GetDefaultKeyFilePath(): | |
"""In py2exe return absolute path of file in the base directory and in all | |
other distributions return relative path 'key.txt'""" | |
windows_base = FindPy2ExeBase() | |
if windows_base: | |
return os.path.join(windows_base, 'key.txt') | |
else: | |
return 'key.txt' | |
def main(RequestHandlerClass = ScheduleRequestHandler): | |
usage = \ | |
'''%prog [options] [<input GTFS.zip>] | |
Runs a webserver that lets you explore a <input GTFS.zip> in your browser. | |
If <input GTFS.zip> is omitted the filename is read from the console. Dragging
a file into the console window may enter the filename.
''' | |
parser = util.OptionParserLongError( | |
usage=usage, version='%prog '+transitfeed.__version__) | |
parser.add_option('--feed_filename', '--feed', dest='feed_filename', | |
help='file name of feed to load') | |
parser.add_option('--key', dest='key', | |
help='Google Maps API key or the name ' | |
'of a text file that contains an API key') | |
parser.add_option('--host', dest='host', help='Host name of Google Maps') | |
parser.add_option('--port', dest='port', type='int', | |
help='port on which to listen') | |
parser.add_option('--file_dir', dest='file_dir', | |
help='directory containing static files') | |
parser.add_option('-n', '--noprompt', action='store_false', | |
dest='manual_entry', | |
help='disable interactive prompts') | |
parser.set_defaults(port=8765, | |
host='maps.google.com', | |
file_dir=FindDefaultFileDir(), | |
manual_entry=True) | |
(options, args) = parser.parse_args() | |
if not os.path.isfile(os.path.join(options.file_dir, 'index.html')): | |
print "Can't find index.html with --file_dir=%s" % options.file_dir | |
exit(1) | |
if not options.feed_filename and len(args) == 1: | |
options.feed_filename = args[0] | |
if not options.feed_filename and options.manual_entry: | |
options.feed_filename = raw_input('Enter Feed Location: ').strip('"') | |
default_key_file = GetDefaultKeyFilePath() | |
if not options.key and os.path.isfile(default_key_file): | |
options.key = open(default_key_file).read().strip() | |
if options.key and os.path.isfile(options.key): | |
options.key = open(options.key).read().strip() | |
schedule = transitfeed.Schedule(problem_reporter=transitfeed.ProblemReporter()) | |
print 'Loading data from feed "%s"...' % options.feed_filename | |
print '(this may take a few minutes for larger cities)' | |
schedule.Load(options.feed_filename) | |
server = StoppableHTTPServer(server_address=('', options.port), | |
RequestHandlerClass=RequestHandlerClass) | |
server.key = options.key | |
server.schedule = schedule | |
server.file_dir = options.file_dir | |
server.host = options.host | |
server.feed_path = options.feed_filename | |
print ("To view, point your browser at http://localhost:%d/" % | |
(server.server_port)) | |
server.serve_forever() | |
if __name__ == '__main__': | |
main() | |
#!/usr/bin/python | |
# | |
# Copyright 2007 Google Inc. All Rights Reserved. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""A utility program to help add shapes to an existing GTFS feed. | |
Requires the ogr python package. | |
""" | |
__author__ = 'chris.harrelson.code@gmail.com (Chris Harrelson)' | |
import csv | |
import glob | |
import ogr | |
import os | |
import shutil | |
import sys | |
import tempfile | |
import transitfeed | |
from transitfeed import shapelib | |
from transitfeed import util | |
import zipfile | |
class ShapeImporterError(Exception): | |
pass | |
def PrintColumns(shapefile): | |
""" | |
Print the columns of layer 0 of the shapefile to the screen. | |
""" | |
ds = ogr.Open(shapefile) | |
layer = ds.GetLayer(0) | |
if len(layer) == 0: | |
raise ShapeImporterError("Layer 0 has no elements!") | |
feature = layer.GetFeature(0) | |
print "%d fields" % feature.GetFieldCount()
for j in range(0, feature.GetFieldCount()): | |
print '--' + feature.GetFieldDefnRef(j).GetName() + \ | |
': ' + feature.GetFieldAsString(j) | |
def AddShapefile(shapefile, graph, key_cols): | |
""" | |
Adds shapes found in the given shape filename to the given polyline | |
graph object. | |
""" | |
ds = ogr.Open(shapefile) | |
layer = ds.GetLayer(0) | |
for i in range(0, len(layer)): | |
feature = layer.GetFeature(i) | |
geometry = feature.GetGeometryRef() | |
if key_cols: | |
key_list = [] | |
for col in key_cols: | |
key_list.append(str(feature.GetField(col))) | |
shape_id = '-'.join(key_list) | |
else: | |
shape_id = '%s-%d' % (shapefile, i) | |
poly = shapelib.Poly(name=shape_id) | |
for j in range(0, geometry.GetPointCount()): | |
(lat, lng) = (round(geometry.GetY(j), 15), round(geometry.GetX(j), 15)) | |
poly.AddPoint(shapelib.Point.FromLatLng(lat, lng)) | |
graph.AddPoly(poly) | |
return graph | |
def GetMatchingShape(pattern_poly, trip, matches, max_distance, verbosity=0): | |
""" | |
Tries to find a matching shape for the given pattern Poly object, | |
trip, and set of possibly matching Polys from which to choose a match. | |
""" | |
if len(matches) == 0: | |
print ('No matching shape found within max-distance %d for trip %s ' | |
% (max_distance, trip.trip_id)) | |
return None | |
if verbosity >= 1: | |
for match in matches: | |
print "match: size %d" % match.GetNumPoints() | |
scores = [(pattern_poly.GreedyPolyMatchDist(match), match) | |
for match in matches] | |
scores.sort() | |
if scores[0][0] > max_distance: | |
print ('No matching shape found within max-distance %d for trip %s ' | |
'(min score was %f)' | |
% (max_distance, trip.trip_id, scores[0][0])) | |
return None | |
return scores[0][1] | |
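The selection step in GetMatchingShape (score every candidate, sort, take the first, reject if it exceeds the threshold) can be condensed to a single `min()` with a key. A sketch; `BestMatch` and the `fake_distances` table are illustrative stand-ins, with the distance function playing the role of GreedyPolyMatchDist:

```python
# Sketch of the selection step in GetMatchingShape: score every
# candidate, keep the lowest-scoring one, and reject it if even the
# best exceeds max_distance.
def BestMatch(candidates, distance, max_distance):
  if not candidates:
    return None
  best = min(candidates, key=distance)
  return best if distance(best) <= max_distance else None

matches = ['shape_a', 'shape_b', 'shape_c']
fake_distances = {'shape_a': 120.0, 'shape_b': 35.0, 'shape_c': 90.0}
best = BestMatch(matches, fake_distances.get, max_distance=150)
```

`min()` with a key avoids materialising and sorting the full score list when only the best match is needed, though the original sort is handy for the verbose diagnostics.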
def AddExtraShapes(extra_shapes_txt, graph): | |
""" | |
Add extra shapes into our input set by parsing them out of a GTFS-formatted | |
shapes.txt file. Useful for manually adding lines to a shape file, since it's | |
a pain to edit .shp files. | |
""" | |
print "Adding extra shapes from %s" % extra_shapes_txt | |
tmpdir = tempfile.mkdtemp()
try:
shutil.copy(extra_shapes_txt, os.path.join(tmpdir, 'shapes.txt'))
loader = transitfeed.ShapeLoader(tmpdir) | |
schedule = loader.Load() | |
for shape in schedule.GetShapeList(): | |
print "Adding extra shape: %s" % shape.shape_id | |
graph.AddPoly(ShapeToPoly(shape)) | |
finally: | |
if tmpdir: | |
shutil.rmtree(tmpdir) | |
# Note: this method lives here to avoid cross-dependencies between | |
# shapelib and transitfeed. | |
def ShapeToPoly(shape): | |
poly = shapelib.Poly(name=shape.shape_id) | |
for lat, lng, distance in shape.points: | |
point = shapelib.Point.FromLatLng(round(lat, 15), round(lng, 15)) | |
poly.AddPoint(point) | |
return poly | |
def ValidateArgs(options_parser, options, args): | |
if not (args and options.source_gtfs and options.dest_gtfs): | |
options_parser.error("You must specify a source and dest GTFS file, " | |
"and at least one source shapefile") | |
def DefineOptions(): | |
usage = \ | |
"""%prog [options] --source_gtfs=<input GTFS.zip> --dest_gtfs=<output GTFS.zip>\ | |
<input.shp> [<input.shp>...] | |
Try to match shapes in one or more SHP files to trips in a GTFS file.""" | |
options_parser = util.OptionParserLongError( | |
usage=usage, version='%prog '+transitfeed.__version__) | |
options_parser.add_option("--print_columns", | |
action="store_true", | |
default=False, | |
dest="print_columns", | |
help="Print column names in shapefile DBF and exit") | |
options_parser.add_option("--keycols", | |
default="", | |
dest="keycols", | |
help="Comma-separated list of the column names used" | |
"to index shape ids") | |
options_parser.add_option("--max_distance", | |
type="int", | |
default=150, | |
dest="max_distance", | |
help="Max distance from a shape to which to match") | |
options_parser.add_option("--source_gtfs", | |
default="", | |
dest="source_gtfs", | |
metavar="FILE", | |
help="Read input GTFS from FILE") | |
options_parser.add_option("--dest_gtfs", | |
default="", | |
dest="dest_gtfs", | |
metavar="FILE", | |
help="Write output GTFS with shapes to FILE") | |
options_parser.add_option("--extra_shapes", | |
default="", | |
dest="extra_shapes", | |
metavar="FILE", | |
help="Extra shapes.txt (CSV) formatted file") | |
options_parser.add_option("--verbosity", | |
type="int", | |
default=0, | |
dest="verbosity", | |
help="Verbosity level. Higher is more verbose") | |
return options_parser | |
def main(key_cols): | |
print 'Parsing shapefile(s)...' | |
graph = shapelib.PolyGraph() | |
for arg in args: | |
print ' ' + arg | |
AddShapefile(arg, graph, key_cols) | |
if options.extra_shapes: | |
AddExtraShapes(options.extra_shapes, graph) | |
print 'Loading GTFS from %s...' % options.source_gtfs | |
schedule = transitfeed.Loader(options.source_gtfs).Load() | |
shape_count = 0 | |
pattern_count = 0 | |
verbosity = options.verbosity | |
print 'Matching shapes to trips...' | |
for route in schedule.GetRouteList(): | |
print 'Processing route', route.route_short_name | |
patterns = route.GetPatternIdTripDict() | |
for pattern_id, trips in patterns.iteritems(): | |
pattern_count += 1 | |
pattern = trips[0].GetPattern() | |
poly_points = [shapelib.Point.FromLatLng(p.stop_lat, p.stop_lon) | |
for p in pattern] | |
if verbosity >= 2: | |
print "\npattern %d, %d points:" % (pattern_id, len(poly_points)) | |
for i, (stop, point) in enumerate(zip(pattern, poly_points)): | |
print "Stop %d '%s': %s" % (i + 1, stop.stop_name, point.ToLatLng()) | |
# First, try to find polys that run all the way from | |
# the start of the trip to the end. | |
matches = graph.FindMatchingPolys(poly_points[0], poly_points[-1], | |
options.max_distance) | |
if not matches: | |
# Try to find a path through the graph, joining | |
# multiple edges to find a path that covers all the | |
# points in the trip. Some shape files are structured | |
# this way, with a polyline for each segment between | |
# stations instead of a polyline covering an entire line. | |
shortest_path = graph.FindShortestMultiPointPath(poly_points, | |
options.max_distance, | |
verbosity=verbosity) | |
if shortest_path: | |
matches = [shortest_path] | |
else: | |
matches = [] | |
pattern_poly = shapelib.Poly(poly_points) | |
shape_match = GetMatchingShape(pattern_poly, trips[0], | |
matches, options.max_distance, | |
verbosity=verbosity) | |
if shape_match: | |
shape_count += 1 | |
# Rename shape for readability. | |
shape_match = shapelib.Poly(points=shape_match.GetPoints(), | |
name="shape_%d" % shape_count) | |
for trip in trips: | |
try: | |
shape = schedule.GetShape(shape_match.GetName()) | |
except KeyError: | |
shape = transitfeed.Shape(shape_match.GetName()) | |
for point in shape_match.GetPoints(): | |
(lat, lng) = point.ToLatLng() | |
shape.AddPoint(lat, lng) | |
schedule.AddShapeObject(shape) | |
trip.shape_id = shape.shape_id | |
print "Matched %d shapes out of %d patterns" % (shape_count, pattern_count) | |
schedule.WriteGoogleTransitFeed(options.dest_gtfs) | |
if __name__ == '__main__': | |
# Import psyco if available for better performance. | |
try: | |
import psyco | |
psyco.full() | |
except ImportError: | |
pass | |
options_parser = DefineOptions() | |
(options, args) = options_parser.parse_args() | |
ValidateArgs(options_parser, options, args) | |
if options.print_columns: | |
for arg in args: | |
PrintColumns(arg) | |
sys.exit(0) | |
key_cols = options.keycols.split(',') | |
main(key_cols) | |
#!/usr/bin/python | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
""" | |
Filters out trips which are not on the default routes and
sets their trip_type attribute accordingly.
For usage information run unusual_trip_filter.py --help | |
""" | |
__author__ = 'Jiri Semecky <jiri.semecky@gmail.com>' | |
import codecs | |
import os | |
import os.path | |
import sys | |
import time | |
import transitfeed | |
from transitfeed import util | |
class UnusualTripFilter(object): | |
"""Class filtering trips going on unusual paths. | |
Those are usually trips going to/from depot or changing to another route | |
in the middle. Sets the 'trip_type' attribute of the trips.txt dataset | |
so that non-standard trips are marked as special (value 1) | |
instead of regular (default value 0). | |
""" | |
def __init__ (self, threshold=0.1, force=False, quiet=False, route_type=None): | |
self._threshold = threshold | |
self._quiet = quiet | |
self._force = force | |
if route_type in transitfeed.Route._ROUTE_TYPE_NAMES: | |
self._route_type = transitfeed.Route._ROUTE_TYPE_NAMES[route_type] | |
elif route_type is None: | |
self._route_type = None | |
else: | |
self._route_type = int(route_type) | |
def filter_line(self, route): | |
"""Mark unusual trips for the given route.""" | |
if self._route_type is not None and self._route_type != route.route_type: | |
self.info('Skipping route %s due to different route_type value (%s)' % | |
(route['route_id'], route['route_type'])) | |
return | |
self.info('Filtering infrequent trips for route %s.' % route.route_id) | |
trip_count = len(route.trips) | |
for pattern_id, pattern in route.GetPatternIdTripDict().items(): | |
ratio = len(pattern) / float(trip_count)
if not self._force: | |
if (ratio < self._threshold): | |
self.info("\t%d trips on route %s with headsign '%s' recognized " | |
"as unusual (ratio %f)" % | |
(len(pattern), | |
route['route_short_name'], | |
pattern[0]['trip_headsign'], | |
ratio)) | |
for trip in pattern: | |
trip.trip_type = 1 # special | |
self.info("\t\tsetting trip_type of trip %s as special" % | |
trip.trip_id) | |
else: | |
self.info("\t%d trips on route %s with headsign '%s' recognized " | |
"as %s (ratio %f)" % | |
(len(pattern), | |
route['route_short_name'], | |
pattern[0]['trip_headsign'], | |
('regular', 'unusual')[ratio < self._threshold], | |
ratio)) | |
for trip in pattern: | |
trip.trip_type = ('0','1')[ratio < self._threshold] | |
self.info("\t\tsetting trip_type of trip %s as %s" % | |
(trip.trip_id, | |
('regular', 'unusual')[ratio < self._threshold])) | |
def filter(self, dataset): | |
"""Mark unusual trips for all the routes in the dataset.""" | |
self.info('Going to filter infrequent routes in the dataset') | |
for route in dataset.routes.values(): | |
self.filter_line(route) | |
def info(self, text): | |
if not self._quiet: | |
print text.encode("utf-8") | |
def main(): | |
usage = \ | |
'''%prog [options] <GTFS.zip> | |
Filters out trips which do not follow the most common stop sequences and | |
sets their trip_type attribute accordingly. <GTFS.zip> is overwritten with | |
the modified GTFS file unless the --output option is used.
''' | |
parser = util.OptionParserLongError( | |
usage=usage, version='%prog '+transitfeed.__version__) | |
parser.add_option('-o', '--output', dest='output', metavar='FILE', | |
help='Name of the output GTFS file (writing to input feed if omitted).') | |
parser.add_option('-m', '--memory_db', dest='memory_db', action='store_true', | |
help='Force use of in-memory sqlite db.') | |
parser.add_option('-t', '--threshold', default=0.1, | |
dest='threshold', type='float', | |
help='Frequency threshold for considering pattern as non-regular.') | |
parser.add_option('-r', '--route_type', default=None, | |
dest='route_type', type='string', | |
help='Filter only selected route type (specified by number '
'or one of the following names: ' + \ | |
', '.join(transitfeed.Route._ROUTE_TYPE_NAMES) + ').') | |
parser.add_option('-f', '--override_trip_type', default=False, | |
dest='override_trip_type', action='store_true', | |
help='Forces overwrite of current trip_type values.') | |
parser.add_option('-q', '--quiet', dest='quiet', | |
default=False, action='store_true', | |
help='Suppress information output.') | |
(options, args) = parser.parse_args() | |
if len(args) != 1: | |
parser.error('You must provide the path of a single feed.') | |
filter = UnusualTripFilter(float(options.threshold), | |
force=options.override_trip_type, | |
quiet=options.quiet, | |
route_type=options.route_type) | |
feed_name = args[0] | |
feed_name = feed_name.strip() | |
filter.info('Loading %s' % feed_name) | |
loader = transitfeed.Loader(feed_name, extra_validation=True, | |
memory_db=options.memory_db) | |
data = loader.Load() | |
filter.filter(data) | |
print 'Saving data' | |
# Write the result | |
if options.output is None: | |
data.WriteGoogleTransitFeed(feed_name) | |
else: | |
data.WriteGoogleTransitFeed(options.output) | |
if __name__ == '__main__': | |
util.RunWithCrashHandler(main) | |
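# The marking rule the filter applies -- a stop pattern is "unusual" when its
# share of the route's trips falls below the threshold -- reduces to a few
# lines. A standalone sketch (hypothetical names) mirroring the ratio test in
# filter_line:

```python
def classify_patterns(pattern_sizes, threshold=0.1):
    """Given trip counts per stop pattern for one route, return a parallel
    list of trip_type values: '1' (special) for patterns whose share of all
    trips is below threshold, else '0' (regular)."""
    total = float(sum(pattern_sizes))
    return ['1' if size / total < threshold else '0'
            for size in pattern_sizes]
```

# For example, a route with patterns of 45, 50, and 5 trips marks only the
# 5-trip pattern (ratio 0.05) as special at the default 0.1 threshold.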
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""Filter the unused stops out of a transit feed file.""" | |
import optparse | |
import sys | |
import transitfeed | |
def main(): | |
parser = optparse.OptionParser( | |
usage="usage: %prog [options] input_feed output_feed", | |
version="%prog "+transitfeed.__version__) | |
parser.add_option("-l", "--list_removed", dest="list_removed", | |
default=False, | |
action="store_true", | |
help="Print removed stops to stdout") | |
(options, args) = parser.parse_args() | |
if len(args) != 2: | |
print >>sys.stderr, parser.format_help() | |
print >>sys.stderr, "\n\nYou must provide input_feed and output_feed\n\n" | |
sys.exit(2) | |
input_path = args[0] | |
output_path = args[1] | |
loader = transitfeed.Loader(input_path) | |
schedule = loader.Load() | |
print "Removing unused stops..." | |
removed = 0 | |
for stop_id, stop in schedule.stops.items(): | |
if not stop.GetTrips(schedule): | |
removed += 1 | |
del schedule.stops[stop_id] | |
if options.list_removed: | |
print "Removing %s (%s)" % (stop_id, stop.stop_name) | |
if removed == 0: | |
print "No unused stops." | |
elif removed == 1: | |
print "Removed 1 stop" | |
else: | |
print "Removed %d stops" % removed | |
schedule.WriteGoogleTransitFeed(output_path) | |
if __name__ == "__main__": | |
main() | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""Output Google Transit URLs for queries near stops. | |
The output can be used to speed up manual testing. Load the output from this | |
file and then open many of the links in new tabs. In each result check that the | |
polyline looks okay (no unnecessary loops, no jumps to a far away location) and
look at the time of each leg. Also check the route names and headsigns are | |
formatted correctly and not redundant. | |
""" | |
from datetime import datetime | |
from datetime import timedelta | |
import math | |
import optparse | |
import os.path | |
import random | |
import sys | |
import transitfeed | |
import urllib | |
import urlparse | |
def Distance(lat0, lng0, lat1, lng1): | |
""" | |
Compute the geodesic distance in meters between two points on the | |
surface of the Earth. The latitude and longitude angles are in | |
degrees. | |
Approximate geodesic distance function (Haversine Formula) assuming | |
a perfect sphere of radius 6367 km (see "What are some algorithms | |
for calculating the distance between 2 points?" in the GIS Faq at | |
http://www.census.gov/geo/www/faq-index.html). The approximate | |
radius is adequate for our needs here, but a more sophisticated | |
geodesic function should be used if greater accuracy is required | |
(see "When is it NOT okay to assume the Earth is a sphere?" in the | |
same faq). | |
""" | |
deg2rad = math.pi / 180.0 | |
lat0 = lat0 * deg2rad | |
lng0 = lng0 * deg2rad | |
lat1 = lat1 * deg2rad | |
lng1 = lng1 * deg2rad | |
dlng = lng1 - lng0 | |
dlat = lat1 - lat0 | |
a = math.sin(dlat*0.5) | |
b = math.sin(dlng*0.5) | |
a = a * a + math.cos(lat0) * math.cos(lat1) * b * b | |
c = 2.0 * math.atan2(math.sqrt(a), math.sqrt(1.0 - a)) | |
return 6367000.0 * c | |
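# As a sanity check on the formula above, here is the same haversine
# computation in a compact standalone form. With the 6367 km radius, one
# degree of longitude at the equator should come out near 111.1 km.

```python
import math

def haversine_m(lat0, lng0, lat1, lng1, radius_m=6367000.0):
    """Haversine distance in meters between two (lat, lng) points given in
    degrees, assuming a spherical Earth of the given radius."""
    lat0, lng0, lat1, lng1 = map(math.radians, (lat0, lng0, lat1, lng1))
    a = (math.sin((lat1 - lat0) / 2) ** 2 +
         math.cos(lat0) * math.cos(lat1) * math.sin((lng1 - lng0) / 2) ** 2)
    return 2 * radius_m * math.atan2(math.sqrt(a), math.sqrt(1 - a))
```

# This matches Distance above term for term; only the radian conversion and
# half-angle terms are folded into single expressions.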
def AddNoiseToLatLng(lat, lng): | |
"""Add up to 500m of error to each coordinate of lat, lng.""" | |
m_per_tenth_lat = Distance(lat, lng, lat + 0.1, lng) | |
m_per_tenth_lng = Distance(lat, lng, lat, lng + 0.1) | |
lat_per_100m = 1 / m_per_tenth_lat * 10 | |
lng_per_100m = 1 / m_per_tenth_lng * 10 | |
return (lat + (lat_per_100m * 5 * (random.random() * 2 - 1)), | |
lng + (lng_per_100m * 5 * (random.random() * 2 - 1))) | |
def GetRandomLocationsNearStops(schedule): | |
"""Return a list of (lat, lng) tuples.""" | |
locations = [] | |
for s in schedule.GetStopList(): | |
locations.append(AddNoiseToLatLng(s.stop_lat, s.stop_lon)) | |
return locations | |
def GetRandomDatetime(): | |
"""Return a datetime in the next week.""" | |
seconds_offset = random.randint(0, 60 * 60 * 24 * 7) | |
dt = datetime.today() + timedelta(seconds=seconds_offset) | |
return dt.replace(second=0, microsecond=0) | |
def FormatLatLng(lat_lng): | |
"""Format a (lat, lng) tuple into a string for maps.google.com.""" | |
return "%0.6f,%0.6f" % lat_lng | |
def LatLngsToGoogleUrl(source, destination, dt): | |
"""Return a URL for routing between two (lat, lng) at a datetime.""" | |
params = {"saddr": FormatLatLng(source), | |
"daddr": FormatLatLng(destination), | |
"time": dt.strftime("%I:%M%p"), | |
"date": dt.strftime("%Y-%m-%d"), | |
"dirflg": "r", | |
"ie": "UTF8", | |
"oe": "UTF8"} | |
url = urlparse.urlunsplit(("http", "maps.google.com", "/maps", | |
urllib.urlencode(params), "")) | |
return url | |
def LatLngsToGoogleLink(source, destination): | |
"""Return a string "<a ..." for a trip at a random time.""" | |
dt = GetRandomDatetime() | |
return "<a href='%s'>from:%s to:%s on %s</a>" % ( | |
LatLngsToGoogleUrl(source, destination, dt), | |
FormatLatLng(source), FormatLatLng(destination), | |
dt.ctime()) | |
def WriteOutput(title, locations, limit, f): | |
"""Write html to f for up to limit trips between locations. | |
Args: | |
title: String used in html title | |
locations: list of (lat, lng) tuples | |
limit: maximum number of queries in the html | |
f: a file object | |
""" | |
output_prefix = """ | |
<html> | |
<head> | |
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> | |
<title>%(title)s</title> | |
</head> | |
<body> | |
Random queries for %(title)s<p> | |
This list of random queries should speed up important manual testing. Here are | |
some things to check when looking at the results of a query. | |
<ul> | |
<li> Check the agency attribution under the trip results: | |
<ul> | |
<li> has correct name and spelling of the agency | |
<li> opens a page with general information about the service | |
</ul> | |
<li> For each alternate trip check that each of these is reasonable: | |
<ul> | |
<li> the total time of the trip | |
<li> the time for each leg. Bad data frequently results in a leg going a long | |
way in a few minutes. | |
<li> the icons and mode names (Tram, Bus, etc) are correct for each leg | |
<li> the route names and headsigns are correctly formatted and not | |
redundant. | |
For a good example see <a | |
href="http://code.google.com/transit/spec/transit_feed_specification.html#transitScreenshots">the | |
screenshots in the Google Transit Feed Specification</a>. | |
<li> the shape line on the map looks correct. Make sure the polyline does | |
not zig-zag, loop, skip stops or jump far away unless the trip does the | |
same thing. | |
<li> the route is active on the day the trip planner returns | |
</ul> | |
</ul> | |
If you find a problem be sure to save the URL. This file is generated randomly. | |
<ol> | |
""" % locals() | |
output_suffix = """ | |
</ol> | |
</body> | |
</html> | |
""" % locals() | |
f.write(transitfeed.EncodeUnicode(output_prefix)) | |
for source, destination in zip(locations[0:limit], locations[1:limit + 1]): | |
f.write(transitfeed.EncodeUnicode("<li>%s\n" % | |
LatLngsToGoogleLink(source, destination))) | |
f.write(transitfeed.EncodeUnicode(output_suffix)) | |
def ParentAndBaseName(path): | |
"""Given a path return only the parent name and file name as a string.""" | |
dirname, basename = os.path.split(path) | |
dirname = dirname.rstrip(os.path.sep) | |
if os.path.altsep: | |
dirname = dirname.rstrip(os.path.altsep) | |
_, parentname = os.path.split(dirname) | |
return os.path.join(parentname, basename) | |
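# For example, ParentAndBaseName keeps just the last directory plus the file
# name. A simplified sketch of the same behavior using posixpath (so it is
# platform-independent; the real function uses os.path and os.path.altsep):

```python
import posixpath

def parent_and_base(path):
    """Return 'parent/base' for a POSIX-style path."""
    dirname, basename = posixpath.split(path)
    dirname = dirname.rstrip('/')
    _, parent = posixpath.split(dirname)
    return posixpath.join(parent, basename)
```

# So "/feeds/agency/gtfs.zip" becomes "agency/gtfs.zip", which keeps the
# generated page title short but still identifies the feed.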
def main(): | |
parser = optparse.OptionParser( | |
usage="usage: %prog [options] feed_filename output_filename", | |
version="%prog "+transitfeed.__version__) | |
parser.add_option("-l", "--limit", dest="limit", type="int", | |
help="Maximum number of URLs to generate") | |
parser.add_option('-o', '--output', dest='output', metavar='FILE', | |
help='write html output to FILE') | |
parser.set_defaults(output="google_random_queries.html", limit=50) | |
(options, args) = parser.parse_args() | |
if len(args) != 1: | |
print >>sys.stderr, parser.format_help() | |
print >>sys.stderr, "\n\nYou must provide the path of a single feed\n\n" | |
sys.exit(2) | |
feed_path = args[0] | |
# ProblemReporter prints problems on console. | |
loader = transitfeed.Loader(feed_path, problems=transitfeed.ProblemReporter(), | |
load_stop_times=False) | |
schedule = loader.Load() | |
locations = GetRandomLocationsNearStops(schedule) | |
random.shuffle(locations) | |
agencies = ", ".join([a.agency_name for a in schedule.GetAgencyList()]) | |
title = "%s (%s)" % (agencies, ParentAndBaseName(feed_path)) | |
WriteOutput(title, | |
locations, | |
options.limit, | |
open(options.output, "w")) | |
print ("Load %s in your web browser. It contains more instructions." % | |
options.output) | |
if __name__ == "__main__": | |
main() | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""Google has a homegrown database for managing the company shuttle. The | |
database dumps its contents in XML. This script converts the proprietary XML
format into a Google Transit Feed Specification file. | |
""" | |
import datetime | |
from optparse import OptionParser | |
import os.path | |
import re | |
import transitfeed | |
import urllib | |
try: | |
import xml.etree.ElementTree as ET # python 2.5 | |
except ImportError, e: | |
import elementtree.ElementTree as ET # older pythons | |
class NoUnusedStopExceptionProblemReporter( | |
transitfeed.ExceptionProblemReporter): | |
"""The company shuttle database has a few unused stops for reasons unrelated | |
to this script. Ignore them. | |
""" | |
def UnusedStop(self, stop_id, stop_name): | |
pass | |
def SaveFeed(input, output): | |
tree = ET.parse(urllib.urlopen(input)) | |
schedule = transitfeed.Schedule() | |
service_period = schedule.GetDefaultServicePeriod() | |
service_period.SetWeekdayService() | |
service_period.SetStartDate('20070314') | |
service_period.SetEndDate('20071231') | |
# Holidays for 2007 | |
service_period.SetDateHasService('20070528', has_service=False) | |
service_period.SetDateHasService('20070704', has_service=False) | |
service_period.SetDateHasService('20070903', has_service=False) | |
service_period.SetDateHasService('20071122', has_service=False) | |
service_period.SetDateHasService('20071123', has_service=False) | |
service_period.SetDateHasService('20071224', has_service=False) | |
service_period.SetDateHasService('20071225', has_service=False) | |
service_period.SetDateHasService('20071226', has_service=False) | |
service_period.SetDateHasService('20071231', has_service=False) | |
stops = {} # Map from xml stop id to python Stop object | |
agency = schedule.NewDefaultAgency(name='GBus', url='http://shuttle/', | |
timezone='America/Los_Angeles') | |
for xml_stop in tree.getiterator('stop'): | |
stop = schedule.AddStop(lat=float(xml_stop.attrib['lat']), | |
lng=float(xml_stop.attrib['lng']), | |
name=xml_stop.attrib['name']) | |
stops[xml_stop.attrib['id']] = stop | |
for xml_shuttleGroup in tree.getiterator('shuttleGroup'): | |
if xml_shuttleGroup.attrib['name'] == 'Test': | |
continue | |
r = schedule.AddRoute(short_name="", | |
long_name=xml_shuttleGroup.attrib['name'], route_type='Bus') | |
for xml_route in xml_shuttleGroup.getiterator('route'): | |
t = r.AddTrip(schedule=schedule, headsign=xml_route.attrib['name'], | |
trip_id=xml_route.attrib['id']) | |
trip_stops = [] # Build a list of (time, Stop) tuples | |
for xml_schedule in xml_route.getiterator('schedule'): | |
trip_stops.append( (int(xml_schedule.attrib['time']) / 1000, | |
stops[xml_schedule.attrib['stopId']]) ) | |
trip_stops.sort() # Sort by time | |
for (time, stop) in trip_stops: | |
t.AddStopTime(stop=stop, arrival_secs=time, departure_secs=time) | |
schedule.Validate(problems=NoUnusedStopExceptionProblemReporter()) | |
schedule.WriteGoogleTransitFeed(output) | |
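# The XML schedule 'time' attribute is milliseconds since midnight; SaveFeed
# divides by 1000 to get the seconds value used for GTFS stop times. A
# standalone conversion, assuming that interpretation of the field:

```python
def ms_to_hhmmss(ms_since_midnight):
    """Convert milliseconds since midnight (the shuttle XML 'time' field)
    to an HH:MM:SS string as used in GTFS stop_times."""
    secs = ms_since_midnight // 1000
    return '%02d:%02d:%02d' % (secs // 3600, (secs % 3600) // 60, secs % 60)
```

# For example, time="60300000" in the sample data below is 60300 seconds,
# i.e. a 16:45:00 departure.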
def main(): | |
parser = OptionParser() | |
parser.add_option('--input', dest='input', | |
help='Path or URL of input') | |
parser.add_option('--output', dest='output', | |
help='Path of output file. Should end in .zip and if it ' | |
'contains the substring YYYYMMDD it will be replaced with ' | |
'today\'s date. It is impossible to include the literal ' | |
'string YYYYMMDD in the path of the output file.')
parser.add_option('--execute', dest='execute', | |
help='Commands to run to copy the output. %(path)s is ' | |
'replaced with full path of the output and %(name)s is ' | |
'replaced with name part of the path. Try ' | |
'scp %(path)s myhost:www/%(name)s', | |
action='append') | |
parser.set_defaults(input=None, output=None, execute=[]) | |
(options, args) = parser.parse_args() | |
today = datetime.date.today().strftime('%Y%m%d') | |
options.output = re.sub(r'YYYYMMDD', today, options.output) | |
(_, name) = os.path.split(options.output) | |
path = options.output | |
SaveFeed(options.input, options.output) | |
for command in options.execute: | |
import subprocess | |
def check_call(cmd): | |
"""Convenience function that is in the docs for subprocess but not | |
installed on my system.""" | |
retcode = subprocess.call(cmd, shell=True) | |
if retcode < 0: | |
raise Exception("Child '%s' was terminated by signal %d" % (cmd, | |
-retcode)) | |
elif retcode != 0: | |
raise Exception("Child '%s' returned %d" % (cmd, retcode)) | |
# %(path)s and %(name)s in the command are filled in from locals()
check_call(command % locals()) | |
if __name__ == '__main__': | |
main() | |
<shuttle><office id="us-nye" name="US Nye County"> | |
<stops> | |
<stop id="1" name="Stagecoach Hotel and Casino" shortName="Stagecoach" lat="36.915682" lng="-116.751677" /> | |
<stop id="2" name="North Ave / N A Ave" shortName="N Ave / A Ave N" lat="36.914944" lng="-116.761472" /> | |
<stop id="3" name="North Ave / D Ave N" shortName="N Ave / D Ave N" lat="36.914893" lng="-116.76821" /> | |
<stop id="4" name="Doing Ave / D Ave N" shortName="Doing / D Ave N" lat="36.909489" lng="-116.768242" /> | |
<stop id="5" name="E Main St / S Irving St" shortName="E Main / S Irving" lat="36.905697" lng="-116.76218" /> | |
</stops> | |
<shuttleGroups> | |
<shuttleGroup id="4" name="Bar Circle Loop" > | |
<routes> | |
<route id="1" name="Outbound"> | |
<schedules> | |
<schedule id="164" stopId="1" time="60300000"/> | |
<schedule id="165" stopId="2" time="60600000"/> | |
<schedule id="166" stopId="3" time="60720000"/> | |
<schedule id="167" stopId="4" time="60780000"/> | |
<schedule id="168" stopId="5" time="60900000"/> | |
</schedules><meta></meta></route> | |
<route id="2" name="Inbound"> | |
<schedules> | |
<schedule id="260" stopId="5" time="30000000"/> | |
<schedule id="261" stopId="4" time="30120000"/> | |
<schedule id="262" stopId="3" time="30180000"/> | |
<schedule id="263" stopId="2" time="30300000"/> | |
<schedule id="264" stopId="1" time="30600000"/> | |
</schedules><meta></meta></route></routes> | |
</shuttleGroup> | |
</shuttleGroups></office></shuttle> | |
#!/usr/bin/python2.5 | |
# A really simple example of using transitfeed to build a Google Transit | |
# Feed Specification file. | |
import transitfeed | |
from optparse import OptionParser | |
parser = OptionParser() | |
parser.add_option('--output', dest='output', | |
help='Path of output file. Should end in .zip') | |
parser.set_defaults(output='google_transit.zip') | |
(options, args) = parser.parse_args() | |
schedule = transitfeed.Schedule() | |
schedule.AddAgency("Fly Agency", "http://iflyagency.com", | |
"America/Los_Angeles") | |
service_period = schedule.GetDefaultServicePeriod() | |
service_period.SetWeekdayService(True) | |
service_period.SetDateHasService('20070704') | |
stop1 = schedule.AddStop(lng=-122, lat=37.2, name="Suburbia") | |
stop2 = schedule.AddStop(lng=-122.001, lat=37.201, name="Civic Center") | |
route = schedule.AddRoute(short_name="22", long_name="Civic Center Express", | |
route_type="Bus") | |
trip = route.AddTrip(schedule, headsign="To Downtown") | |
trip.AddStopTime(stop1, stop_time='09:00:00') | |
trip.AddStopTime(stop2, stop_time='09:15:00') | |
trip = route.AddTrip(schedule, headsign="To Suburbia") | |
trip.AddStopTime(stop1, stop_time='17:30:00') | |
trip.AddStopTime(stop2, stop_time='17:45:00') | |
schedule.Validate() | |
schedule.WriteGoogleTransitFeed(options.output) | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
# An example script that demonstrates converting a proprietary format to a | |
# Google Transit Feed Specification file. | |
# | |
# You can load table.txt, the example input, in Excel. It contains three | |
# sections: | |
# 1) A list of global options, starting with a line containing the word | |
# 'options'. Each option has a name in the first column and most options
# have a value in the second column. | |
# 2) A table of stops, starting with a line containing the word 'stops'. Each | |
# row of the table has 3 columns: name, latitude, longitude | |
# 3) A list of routes. There is an empty row between each route. The first row | |
# for a route lists the short_name and long_name. After the first row the | |
# left-most column lists the stop names visited by the route. Each column | |
# contains the times a single trip visits the stops. | |
# | |
# This is a very simple example which you could use as a base for your own
# transit feed builder. | |
import transitfeed | |
from optparse import OptionParser | |
import re | |
stops = {} | |
# table is a list of lists in this form | |
# [ ['Short Name', 'Long Name'], | |
# ['Stop 1', 'Stop 2', ...] | |
# [time_at_1, time_at_2, ...] # times for trip 1 | |
# [time_at_1, time_at_2, ...] # times for trip 2 | |
# ... ] | |
def AddRouteToSchedule(schedule, table): | |
if len(table) >= 2: | |
r = schedule.AddRoute(short_name=table[0][0], long_name=table[0][1], route_type='Bus') | |
for trip in table[2:]: | |
if len(trip) > len(table[1]): | |
print "ignoring %s" % trip[len(table[1]):] | |
trip = trip[0:len(table[1])] | |
t = r.AddTrip(schedule, headsign='My headsign') | |
trip_stops = [] # Build a list of (time, stopname) tuples | |
for i in range(0, len(trip)): | |
if re.search(r'\S', trip[i]): | |
trip_stops.append( (transitfeed.TimeToSecondsSinceMidnight(trip[i]), table[1][i]) ) | |
trip_stops.sort() # Sort by time | |
for (time, stopname) in trip_stops: | |
t.AddStopTime(stop=stops[stopname.lower()], arrival_secs=time, | |
departure_secs=time) | |
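# AddRouteToSchedule relies on transitfeed.TimeToSecondsSinceMidnight to turn
# the table's HH:MM:SS cells into sortable integers. The conversion itself is
# just this (a re-implementation for illustration, not the library function):

```python
def time_to_seconds(hhmmss):
    """Parse an 'HH:MM:SS' string into seconds since midnight."""
    h, m, s = (int(part) for part in hhmmss.split(':'))
    return h * 3600 + m * 60 + s
```

# Sorting the resulting (seconds, stop name) tuples puts each trip's stop
# times in order even if the spreadsheet columns listed them out of order.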
def TransposeTable(table): | |
"""Transpose a list of lists, using None to extend all input lists to the | |
same length. | |
For example: | |
>>> TransposeTable( | |
[ [11, 12, 13], | |
[21, 22], | |
[31, 32, 33, 34]]) | |
[ [11, 21, 31], | |
[12, 22, 32], | |
[13, None, 33], | |
[None, None, 34]] | |
""" | |
transposed = [] | |
rows = len(table) | |
cols = max(len(row) for row in table) | |
for x in range(cols): | |
transposed.append([]) | |
for y in range(rows): | |
if x < len(table[y]): | |
transposed[x].append(table[y][x]) | |
else: | |
transposed[x].append(None) | |
return transposed | |
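The padding transpose above can also be expressed with the standard library; the sketch below uses `itertools.zip_longest` (named `izip_longest` on Python 2.6+) and reproduces the docstring example:

```python
from itertools import zip_longest  # izip_longest on Python 2.6+

def transpose_table(table):
    # zip_longest pads shorter rows with None while swapping rows and
    # columns, matching TransposeTable above.
    return [list(column) for column in zip_longest(*table)]

print(transpose_table([[11, 12, 13],
                       [21, 22],
                       [31, 32, 33, 34]]))
# [[11, 21, 31], [12, 22, 32], [13, None, 33], [None, None, 34]]
```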
def ProcessOptions(schedule, table): | |
service_period = schedule.GetDefaultServicePeriod() | |
agency_name, agency_url, agency_timezone = (None, None, None) | |
for row in table[1:]: | |
command = row[0].lower() | |
if command == 'weekday': | |
service_period.SetWeekdayService() | |
elif command == 'start_date': | |
service_period.SetStartDate(row[1]) | |
elif command == 'end_date': | |
service_period.SetEndDate(row[1]) | |
elif command == 'add_date': | |
service_period.SetDateHasService(date=row[1]) | |
elif command == 'remove_date': | |
service_period.SetDateHasService(date=row[1], has_service=False) | |
elif command == 'agency_name': | |
agency_name = row[1] | |
elif command == 'agency_url': | |
agency_url = row[1] | |
elif command == 'agency_timezone': | |
agency_timezone = row[1] | |
if not (agency_name and agency_url and agency_timezone): | |
print "You must provide agency information" | |
schedule.NewDefaultAgency(agency_name=agency_name, agency_url=agency_url, | |
agency_timezone=agency_timezone) | |
def AddStops(schedule, table): | |
for name, lat_str, lng_str in table[1:]: | |
stop = schedule.AddStop(lat=float(lat_str), lng=float(lng_str), name=name) | |
stops[name.lower()] = stop | |
def ProcessTable(schedule, table): | |
if table[0][0].lower() == 'options': | |
ProcessOptions(schedule, table) | |
elif table[0][0].lower() == 'stops': | |
AddStops(schedule, table) | |
else: | |
transposed = [table[0]] # Keep route_short_name and route_long_name on first row | |
# Transpose rest of table. Input contains the stop names in table[x][0], x | |
# >= 1 with trips found in columns, so we need to transpose table[1:]. | |
# As a diagram Transpose from | |
# [['stop 1', '10:00', '11:00', '12:00'], | |
# ['stop 2', '10:10', '11:10', '12:10'], | |
# ['stop 3', '10:20', '11:20', '12:20']] | |
# to | |
# [['stop 1', 'stop 2', 'stop 3'], | |
# ['10:00', '10:10', '10:20'], | |
# ['11:00', '11:10', '11:20'],
# ['12:00', '12:10', '12:20']]
transposed.extend(TransposeTable(table[1:])) | |
AddRouteToSchedule(schedule, transposed) | |
def main(): | |
parser = OptionParser() | |
parser.add_option('--input', dest='input', | |
help='Path of input file') | |
parser.add_option('--output', dest='output', | |
help='Path of output file, should end in .zip') | |
parser.set_defaults(output='feed.zip') | |
(options, args) = parser.parse_args() | |
schedule = transitfeed.Schedule() | |
table = [] | |
for line in open(options.input): | |
line = line.rstrip() | |
if not line: | |
ProcessTable(schedule, table) | |
table = [] | |
else: | |
table.append(line.split('\t')) | |
ProcessTable(schedule, table) | |
schedule.WriteGoogleTransitFeed(options.output) | |
if __name__ == '__main__': | |
main() | |
options | |
weekday | |
start_date 20070315 | |
end_date 20071215 | |
remove_date 20070704 | |
agency_name Gbus | |
agency_url http://shuttle/ | |
agency_timezone America/Los_Angeles | |
stops | |
Stagecoach 36.915682 -116.751677 | |
N Ave / A Ave N 36.914944 -116.761472 | |
N Ave / D Ave N 36.914893 -116.76821 | |
Doing / D Ave N 36.909489 -116.768242 | |
E Main / S Irving 36.905697 -116.76218 | |
O in Bar Circle Inbound | |
Stagecoach 9:00:00 9:30:00 10:00:00 12:00:00 | |
N Ave / A Ave N 9:05:00 9:35:00 10:05:00 12:05:00 | |
N Ave / D Ave N 9:07:00 9:37:00 10:07:00 12:07:00 | |
Doing / D Ave N 9:09:00 9:39:00 10:09:00 12:09:00 | |
E Main / S Irving 9:11:00 9:41:00 10:11:00 12:11:00 | |
O out Bar Circle Outbound | |
E Main / S Irving 15:00:00 15:30:00 16:00:00 18:00:00 | |
Doing / D Ave N 15:05:00 15:35:00 16:05:00 18:05:00 | |
N Ave / D Ave N 15:07:00 15:37:00 16:07:00 18:07:00 | |
N Ave / A Ave N 15:09:00 15:39:00 16:09:00 18:09:00 | |
Stagecoach 15:11:00 15:41:00 16:11:00 18:11:00 | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""Validates a GTFS file. | |
For usage information run feedvalidator.py --help | |
""" | |
import bisect | |
import codecs | |
import datetime | |
from transitfeed.util import defaultdict | |
import optparse | |
import os | |
import os.path | |
import re | |
import socket | |
import sys | |
import time | |
import transitfeed | |
from transitfeed import TYPE_ERROR, TYPE_WARNING | |
from urllib2 import Request, urlopen, HTTPError, URLError | |
from transitfeed import util | |
import webbrowser | |
SVN_TAG_URL = 'http://googletransitdatafeed.googlecode.com/svn/tags/' | |
def MaybePluralizeWord(count, word): | |
if count == 1: | |
return word | |
else: | |
return word + 's' | |
def PrettyNumberWord(count, word): | |
return '%d %s' % (count, MaybePluralizeWord(count, word)) | |
def UnCamelCase(camel): | |
return re.sub(r'([a-z])([A-Z])', r'\1 \2', camel) | |
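Note that the regex only splits at a lowercase-to-uppercase boundary, so runs of capitals stay together; a quick standalone check (input strings chosen for illustration):

```python
import re

def un_camel_case(camel):
    # Insert a space at each lowercase-to-uppercase boundary.
    return re.sub(r'([a-z])([A-Z])', r'\1 \2', camel)

print(un_camel_case('StopTooFarFromParentStation'))
# Stop Too Far From Parent Station
print(un_camel_case('GTFSFeed'))  # unchanged: no lower-to-upper boundary
# GTFSFeed
```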
def ProblemCountText(error_count, warning_count): | |
results = [] | |
if error_count: | |
results.append(PrettyNumberWord(error_count, 'error')) | |
if warning_count: | |
results.append(PrettyNumberWord(warning_count, 'warning')) | |
return ' and '.join(results) | |
def CalendarSummary(schedule): | |
today = datetime.date.today() | |
summary_end_date = today + datetime.timedelta(days=60) | |
start_date, end_date = schedule.GetDateRange() | |
if not start_date or not end_date: | |
return {} | |
try: | |
start_date_object = transitfeed.DateStringToDateObject(start_date) | |
end_date_object = transitfeed.DateStringToDateObject(end_date) | |
except ValueError: | |
return {} | |
# Get the list of trips only during the period the feed is active. | |
# As such we have to check whether it starts in the future and/or
# whether it ends in less than 60 days.
date_trips_departures = schedule.GenerateDateTripsDeparturesList( | |
max(today, start_date_object), | |
min(summary_end_date, end_date_object)) | |
if not date_trips_departures: | |
return {} | |
# Check that the dates which will be shown in the summary agree with these
# calculations. Failure implies a bug which should be fixed. It isn't good
# for users to discover assertion failures, but it does mean the bug will
# likely get fixed.
assert start_date <= date_trips_departures[0][0].strftime("%Y%m%d") | |
assert end_date >= date_trips_departures[-1][0].strftime("%Y%m%d") | |
# Generate a map from int number of trips in a day to a list of date objects | |
# with that many trips. The list of dates is sorted. | |
trips_dates = defaultdict(lambda: []) | |
trips = 0 | |
for date, day_trips, day_departures in date_trips_departures: | |
trips += day_trips | |
trips_dates[day_trips].append(date) | |
mean_trips = trips / len(date_trips_departures) | |
max_trips = max(trips_dates.keys()) | |
min_trips = min(trips_dates.keys()) | |
calendar_summary = {} | |
calendar_summary['mean_trips'] = mean_trips | |
calendar_summary['max_trips'] = max_trips | |
calendar_summary['max_trips_dates'] = FormatDateList(trips_dates[max_trips]) | |
calendar_summary['min_trips'] = min_trips | |
calendar_summary['min_trips_dates'] = FormatDateList(trips_dates[min_trips]) | |
calendar_summary['date_trips_departures'] = date_trips_departures | |
calendar_summary['date_summary_range'] = "%s to %s" % ( | |
date_trips_departures[0][0].strftime("%a %b %d"), | |
date_trips_departures[-1][0].strftime("%a %b %d")) | |
return calendar_summary | |
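The `trips_dates` grouping above builds a map from daily trip count to the dates with that count; `defaultdict(list)` is equivalent to the `defaultdict(lambda: [])` used in CalendarSummary. A sketch with fabricated counts (dates borrowed from the sample feed's service period):

```python
from collections import defaultdict
import datetime

# Fabricated (date, day_trips) pairs standing in for the output of
# GenerateDateTripsDeparturesList.
date_trips = [(datetime.date(2007, 3, 15), 10),
              (datetime.date(2007, 3, 16), 12),
              (datetime.date(2007, 3, 17), 10)]

trips_dates = defaultdict(list)
for date, day_trips in date_trips:
    trips_dates[day_trips].append(date)

print(max(trips_dates))   # 12 -- the busiest day's trip count
print(trips_dates[10])    # the (sorted) dates with exactly 10 trips
```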
def FormatDateList(dates): | |
if not dates: | |
return "0 service dates" | |
formatted = [d.strftime("%a %b %d") for d in dates[0:3]] | |
if len(dates) > 3: | |
formatted.append("...") | |
return "%s (%s)" % (PrettyNumberWord(len(dates), "service date"), | |
", ".join(formatted)) | |
def MaxVersion(versions): | |
versions = filter(None, versions) | |
versions.sort(lambda x,y: -cmp([int(item) for item in x.split('.')], | |
[int(item) for item in y.split('.')])) | |
if len(versions) > 0: | |
return versions[0] | |
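MaxVersion relies on Python 2's `cmp` and `list.sort(cmp=...)`. The numeric comparison it performs (so that '1.10' beats '1.9', which plain string comparison gets wrong) can be sketched with a key function instead:

```python
def max_version(versions):
    # Drop empty entries, then compare dotted versions numerically:
    # '1.10.0' > '1.9.1' even though it sorts lower as a plain string.
    versions = [v for v in versions if v]
    if versions:
        return max(versions,
                   key=lambda v: [int(part) for part in v.split('.')])

print(max_version(['1.2.5', '1.10.0', None, '1.9.1']))  # 1.10.0
```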
class CountingConsoleProblemReporter(transitfeed.ProblemReporter): | |
def __init__(self): | |
transitfeed.ProblemReporter.__init__(self) | |
self._error_count = 0 | |
self._warning_count = 0 | |
def _Report(self, e): | |
transitfeed.ProblemReporter._Report(self, e) | |
if e.IsError(): | |
self._error_count += 1 | |
else: | |
self._warning_count += 1 | |
def ErrorCount(self): | |
return self._error_count | |
def WarningCount(self): | |
return self._warning_count | |
def FormatCount(self): | |
return ProblemCountText(self.ErrorCount(), self.WarningCount()) | |
def HasIssues(self): | |
return self.ErrorCount() or self.WarningCount() | |
class BoundedProblemList(object): | |
"""A list of one type of ExceptionWithContext objects with bounded size.""" | |
def __init__(self, size_bound): | |
self._count = 0 | |
self._exceptions = [] | |
self._size_bound = size_bound | |
def Add(self, e): | |
self._count += 1 | |
try: | |
bisect.insort(self._exceptions, e) | |
except TypeError: | |
# The base class ExceptionWithContext raises this exception in __cmp__ | |
# to signal that an object is not comparable. Instead of keeping the most
# significant issues, keep those reported first.
if self._count <= self._size_bound: | |
self._exceptions.append(e) | |
else: | |
# self._exceptions is in order. Drop the least significant if the list is | |
# now too long. | |
if self._count > self._size_bound: | |
del self._exceptions[-1] | |
def _GetDroppedCount(self): | |
return self._count - len(self._exceptions) | |
def __repr__(self): | |
return "<BoundedProblemList %s>" % repr(self._exceptions) | |
count = property(lambda s: s._count) | |
dropped_count = property(_GetDroppedCount) | |
problems = property(lambda s: s._exceptions) | |
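The `bisect.insort` path above keeps the list sorted so the least significant item is always last; a minimal standalone sketch of that bounding policy, using plain integers as stand-ins for comparable problem objects:

```python
import bisect

def add_bounded(sorted_items, item, size_bound):
    # Insert keeping sort order, then drop the last (least significant)
    # entry if the bound is exceeded. Total additions can still be counted
    # separately, as BoundedProblemList does with _count.
    bisect.insort(sorted_items, item)
    if len(sorted_items) > size_bound:
        del sorted_items[-1]

kept = []
for severity in [5, 1, 4, 2, 3]:
    add_bounded(kept, severity, 3)
print(kept)  # [1, 2, 3]
```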
class LimitPerTypeProblemReporter(transitfeed.ProblemReporter): | |
def __init__(self, limit_per_type): | |
transitfeed.ProblemReporter.__init__(self) | |
# {TYPE_WARNING: {"ClassName": BoundedProblemList()}} | |
self._type_to_name_to_problist = { | |
TYPE_WARNING: defaultdict(lambda: BoundedProblemList(limit_per_type)), | |
TYPE_ERROR: defaultdict(lambda: BoundedProblemList(limit_per_type)) | |
} | |
def HasIssues(self): | |
return (self._type_to_name_to_problist[TYPE_ERROR] or | |
self._type_to_name_to_problist[TYPE_WARNING]) | |
def _Report(self, e): | |
self._type_to_name_to_problist[e.GetType()][e.__class__.__name__].Add(e) | |
def ErrorCount(self): | |
error_sets = self._type_to_name_to_problist[TYPE_ERROR].values() | |
return sum(map(lambda v: v.count, error_sets)) | |
def WarningCount(self): | |
warning_sets = self._type_to_name_to_problist[TYPE_WARNING].values() | |
return sum(map(lambda v: v.count, warning_sets)) | |
def ProblemList(self, problem_type, class_name): | |
"""Return the BoundedProblemList object for given type and class.""" | |
return self._type_to_name_to_problist[problem_type][class_name] | |
def ProblemListMap(self, problem_type): | |
"""Return the map from class name to BoundedProblemList object.""" | |
return self._type_to_name_to_problist[problem_type] | |
class HTMLCountingProblemReporter(LimitPerTypeProblemReporter): | |
def FormatType(self, f, level_name, class_problist): | |
"""Write the HTML dumping all problems of one type. | |
Args: | |
f: file object open for writing | |
level_name: string such as "Error" or "Warning" | |
class_problist: sequence of tuples (class name, | |
BoundedProblemList object) | |
""" | |
class_problist.sort() | |
output = [] | |
for classname, problist in class_problist: | |
output.append('<h4 class="issueHeader"><a name="%s%s">%s</a></h4><ul>\n' % | |
(level_name, classname, UnCamelCase(classname))) | |
for e in problist.problems: | |
self.FormatException(e, output) | |
if problist.dropped_count: | |
output.append('<li>and %d more of this type.' % | |
(problist.dropped_count)) | |
output.append('</ul>\n') | |
f.write(''.join(output)) | |
def FormatTypeSummaryTable(self, level_name, name_to_problist): | |
"""Return an HTML table listing the number of problems by class name. | |
Args: | |
level_name: string such as "Error" or "Warning" | |
name_to_problist: dict mapping class name to a BoundedProblemList object
Returns: | |
HTML in a string | |
""" | |
output = [] | |
output.append('<table>') | |
for classname in sorted(name_to_problist.keys()): | |
problist = name_to_problist[classname] | |
human_name = MaybePluralizeWord(problist.count, UnCamelCase(classname)) | |
output.append('<tr><td>%d</td><td><a href="#%s%s">%s</a></td></tr>\n' % | |
(problist.count, level_name, classname, human_name)) | |
output.append('</table>\n') | |
return ''.join(output) | |
def FormatException(self, e, output): | |
"""Append HTML version of e to list output.""" | |
d = e.GetDictToFormat() | |
for k in ('file_name', 'feedname', 'column_name'): | |
if k in d.keys(): | |
d[k] = '<code>%s</code>' % d[k] | |
problem_text = e.FormatProblem(d).replace('\n', '<br>') | |
output.append('<li>') | |
output.append('<div class="problem">%s</div>' % | |
transitfeed.EncodeUnicode(problem_text)) | |
try: | |
if hasattr(e, 'row_num'): | |
line_str = 'line %d of ' % e.row_num | |
else: | |
line_str = '' | |
output.append('in %s<code>%s</code><br>\n' % | |
(line_str, e.file_name)) | |
row = e.row | |
headers = e.headers | |
column_name = e.column_name | |
table_header = '' # HTML | |
table_data = '' # HTML | |
for header, value in zip(headers, row): | |
attributes = '' | |
if header == column_name: | |
attributes = ' class="problem"' | |
table_header += '<th%s>%s</th>' % (attributes, header) | |
table_data += '<td%s>%s</td>' % (attributes, value) | |
# Make sure output is encoded into UTF-8 | |
output.append('<table class="dump"><tr>%s</tr>\n' % | |
transitfeed.EncodeUnicode(table_header)) | |
output.append('<tr>%s</tr></table>\n' % | |
transitfeed.EncodeUnicode(table_data)) | |
except AttributeError:
# Don't bind the exception to a name; that would shadow the e above.
pass # Hope this was getting an attribute from e ;-)
output.append('<br></li>\n') | |
def FormatCount(self): | |
return ProblemCountText(self.ErrorCount(), self.WarningCount()) | |
def CountTable(self): | |
output = [] | |
output.append('<table class="count_outside">\n') | |
output.append('<tr>') | |
if self.ProblemListMap(TYPE_ERROR): | |
output.append('<td><span class="fail">%s</span></td>' % | |
PrettyNumberWord(self.ErrorCount(), "error")) | |
if self.ProblemListMap(TYPE_WARNING): | |
output.append('<td><span class="fail">%s</span></td>' % | |
PrettyNumberWord(self.WarningCount(), "warning")) | |
output.append('</tr>\n<tr>') | |
if self.ProblemListMap(TYPE_ERROR): | |
output.append('<td>\n') | |
output.append(self.FormatTypeSummaryTable("Error", | |
self.ProblemListMap(TYPE_ERROR))) | |
output.append('</td>\n') | |
if self.ProblemListMap(TYPE_WARNING): | |
output.append('<td>\n') | |
output.append(self.FormatTypeSummaryTable("Warning", | |
self.ProblemListMap(TYPE_WARNING))) | |
output.append('</td>\n') | |
output.append('</table>') | |
return ''.join(output) | |
def WriteOutput(self, feed_location, f, schedule, other_problems): | |
"""Write the html output to f.""" | |
if self.HasIssues(): | |
if self.ErrorCount() + self.WarningCount() == 1: | |
summary = ('<span class="fail">Found this problem:</span>\n%s' % | |
self.CountTable()) | |
else: | |
summary = ('<span class="fail">Found these problems:</span>\n%s' % | |
self.CountTable()) | |
else: | |
summary = '<span class="pass">feed validated successfully</span>' | |
if other_problems is not None: | |
summary = ('<span class="fail">\n%s</span><br><br>' % | |
other_problems) + summary | |
basename = os.path.basename(feed_location) | |
feed_path = (feed_location[:feed_location.rfind(basename)], basename) | |
agencies = ', '.join(['<a href="%s">%s</a>' % (a.agency_url, a.agency_name) | |
for a in schedule.GetAgencyList()]) | |
if not agencies: | |
agencies = '?' | |
dates = "No valid service dates found" | |
(start, end) = schedule.GetDateRange() | |
if start and end: | |
def FormatDate(yyyymmdd): | |
src_format = "%Y%m%d" | |
dst_format = "%B %d, %Y" | |
try: | |
return time.strftime(dst_format, | |
time.strptime(yyyymmdd, src_format)) | |
except ValueError: | |
return yyyymmdd | |
formatted_start = FormatDate(start) | |
formatted_end = FormatDate(end) | |
dates = "%s to %s" % (formatted_start, formatted_end) | |
calendar_summary = CalendarSummary(schedule) | |
if calendar_summary: | |
calendar_summary_html = """<br> | |
During the upcoming service dates %(date_summary_range)s: | |
<table> | |
<tr><th class="header">Average trips per date:</th><td class="header">%(mean_trips)s</td></tr> | |
<tr><th class="header">Most trips on a date:</th><td class="header">%(max_trips)s, on %(max_trips_dates)s</td></tr> | |
<tr><th class="header">Least trips on a date:</th><td class="header">%(min_trips)s, on %(min_trips_dates)s</td></tr> | |
</table>""" % calendar_summary | |
else: | |
calendar_summary_html = "" | |
output_prefix = """ | |
<html> | |
<head> | |
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> | |
<title>FeedValidator: %(feed_file)s</title> | |
<style> | |
body {font-family: Georgia, serif; background-color: white} | |
.path {color: gray} | |
div.problem {max-width: 500px} | |
table.dump td,th {background-color: khaki; padding: 2px; font-family:monospace} | |
table.dump td.problem,th.problem {background-color: #dc143c; color: white; padding: 2px; font-family:monospace}
table.count_outside td {vertical-align: top} | |
table.count_outside {border-spacing: 0px; } | |
table {border-spacing: 5px 0px; margin-top: 3px} | |
h3.issueHeader {padding-left: 0.5em} | |
h4.issueHeader {padding-left: 1em} | |
.pass {background-color: lightgreen} | |
.fail {background-color: yellow} | |
.pass, .fail {font-size: 16pt} | |
.header {background-color: white; font-family: Georgia, serif; padding: 0px} | |
th.header {text-align: right; font-weight: normal; color: gray} | |
.footer {font-size: 10pt} | |
</style> | |
</head> | |
<body> | |
GTFS validation results for feed:<br> | |
<code><span class="path">%(feed_dir)s</span><b>%(feed_file)s</b></code> | |
<br><br> | |
<table> | |
<tr><th class="header">Agencies:</th><td class="header">%(agencies)s</td></tr> | |
<tr><th class="header">Routes:</th><td class="header">%(routes)s</td></tr> | |
<tr><th class="header">Stops:</th><td class="header">%(stops)s</td></tr> | |
<tr><th class="header">Trips:</th><td class="header">%(trips)s</td></tr> | |
<tr><th class="header">Shapes:</th><td class="header">%(shapes)s</td></tr> | |
<tr><th class="header">Effective:</th><td class="header">%(dates)s</td></tr> | |
</table> | |
%(calendar_summary)s | |
<br> | |
%(problem_summary)s | |
<br><br> | |
""" % { "feed_file": feed_path[1], | |
"feed_dir": feed_path[0], | |
"agencies": agencies, | |
"routes": len(schedule.GetRouteList()), | |
"stops": len(schedule.GetStopList()), | |
"trips": len(schedule.GetTripList()), | |
"shapes": len(schedule.GetShapeList()), | |
"dates": dates, | |
"problem_summary": summary, | |
"calendar_summary": calendar_summary_html} | |
# In the output_suffix string below,
# time.strftime() returns a regular local time string (not a Unicode one) in
# the default system encoding. decode() then converts this byte string back
# into a Unicode string, so the system encoding cannot mangle the result if
# it contains non-English characters. Therefore we decode it back to its
# original Unicode code points.
time_unicode = (time.strftime('%B %d, %Y at %I:%M %p %Z'). | |
decode(sys.getfilesystemencoding())) | |
output_suffix = """ | |
<div class="footer"> | |
Generated by <a href="http://code.google.com/p/googletransitdatafeed/wiki/FeedValidator"> | |
FeedValidator</a> version %s on %s. | |
</div> | |
</body> | |
</html>""" % (transitfeed.__version__, time_unicode) | |
f.write(transitfeed.EncodeUnicode(output_prefix)) | |
if self.ProblemListMap(TYPE_ERROR): | |
f.write('<h3 class="issueHeader">Errors:</h3>') | |
self.FormatType(f, "Error", | |
self.ProblemListMap(TYPE_ERROR).items()) | |
if self.ProblemListMap(TYPE_WARNING): | |
f.write('<h3 class="issueHeader">Warnings:</h3>') | |
self.FormatType(f, "Warning", | |
self.ProblemListMap(TYPE_WARNING).items()) | |
f.write(transitfeed.EncodeUnicode(output_suffix)) | |
def RunValidationOutputFromOptions(feed, options): | |
"""Validate feed, output results per options and return an exit code.""" | |
if options.output.upper() == "CONSOLE": | |
return RunValidationOutputToConsole(feed, options) | |
else: | |
return RunValidationOutputToFilename(feed, options, options.output) | |
def RunValidationOutputToFilename(feed, options, output_filename): | |
"""Validate feed, save HTML at output_filename and return an exit code.""" | |
try: | |
output_file = open(output_filename, 'w') | |
exit_code = RunValidationOutputToFile(feed, options, output_file) | |
output_file.close() | |
except IOError, e: | |
print 'Error while writing %s: %s' % (output_filename, e) | |
output_filename = None | |
exit_code = 2 | |
if options.manual_entry and output_filename: | |
webbrowser.open('file://%s' % os.path.abspath(output_filename)) | |
return exit_code | |
def RunValidationOutputToFile(feed, options, output_file): | |
"""Validate feed, write HTML to output_file and return an exit code.""" | |
problems = HTMLCountingProblemReporter(options.limit_per_type) | |
schedule, exit_code, other_problems_string = RunValidation(feed, options, | |
problems) | |
if isinstance(feed, basestring): | |
feed_location = feed | |
else: | |
feed_location = getattr(feed, 'name', repr(feed)) | |
problems.WriteOutput(feed_location, output_file, schedule, | |
other_problems_string) | |
return exit_code | |
def RunValidationOutputToConsole(feed, options): | |
"""Validate feed, print reports and return an exit code.""" | |
problems = CountingConsoleProblemReporter() | |
_, exit_code, _ = RunValidation(feed, options, problems) | |
return exit_code | |
def RunValidation(feed, options, problems): | |
"""Validate feed, returning the loaded Schedule and exit code. | |
Args: | |
feed: GTFS file, either path of the file as a string or a file object | |
options: options object returned by optparse | |
problems: transitfeed.ProblemReporter instance | |
Returns: | |
a transitfeed.Schedule object, exit code and plain text string of other | |
problems | |
Exit code is 1 if problems are found and 0 if the Schedule is problem free. | |
plain text string is '' if no other problems are found. | |
""" | |
other_problems_string = CheckVersion(latest_version=options.latest_version) | |
print 'validating %s' % feed | |
loader = transitfeed.Loader(feed, problems=problems, extra_validation=False, | |
memory_db=options.memory_db, | |
check_duplicate_trips=\ | |
options.check_duplicate_trips) | |
schedule = loader.Load() | |
schedule.Validate(service_gap_interval=options.service_gap_interval) | |
if feed == 'IWantMyvalidation-crash.txt': | |
# See test/testfeedvalidator.py | |
raise Exception('For testing the feed validator crash handler.') | |
if other_problems_string: | |
print other_problems_string | |
if problems.HasIssues(): | |
print 'ERROR: %s found' % problems.FormatCount() | |
return schedule, 1, other_problems_string | |
else: | |
print 'feed validated successfully' | |
return schedule, 0, other_problems_string | |
def CheckVersion(latest_version=''): | |
""" | |
Check whether there is a newer version of this project.
Codes are based on http://www.voidspace.org.uk/python/articles/urllib2.shtml | |
Already got permission from the copyright holder. | |
""" | |
current_version = transitfeed.__version__ | |
if not latest_version: | |
timeout = 20 | |
socket.setdefaulttimeout(timeout) | |
request = Request(SVN_TAG_URL) | |
try: | |
response = urlopen(request) | |
content = response.read() | |
versions = re.findall(r'>transitfeed-([\d\.]+)\/<\/a>', content) | |
latest_version = MaxVersion(versions) | |
except HTTPError, e: | |
return('The server couldn\'t fulfill the request. Error code: %s.' | |
% e.code) | |
except URLError, e: | |
return('We failed to reach the transitfeed server. Reason: %s.' % e.reason)
if not latest_version: | |
return('We had trouble parsing the contents of %s.' % SVN_TAG_URL) | |
newest_version = MaxVersion([latest_version, current_version]) | |
if current_version != newest_version: | |
return('A new version %s of transitfeed is available. Please visit ' | |
'http://code.google.com/p/googletransitdatafeed and download.' | |
% newest_version) | |
def main(): | |
usage = \ | |
'''%prog [options] [<input GTFS.zip>] | |
Validates GTFS file (or directory) <input GTFS.zip> and writes an HTML
report of the results to validation-results.html.
If <input GTFS.zip> is omitted the filename is read from the console. Dragging
a file into the console may enter the filename.
For more information see | |
http://code.google.com/p/googletransitdatafeed/wiki/FeedValidator | |
''' | |
parser = util.OptionParserLongError( | |
usage=usage, version='%prog '+transitfeed.__version__) | |
parser.add_option('-n', '--noprompt', action='store_false', | |
dest='manual_entry', | |
help='do not prompt for feed location or load output in ' | |
'browser') | |
parser.add_option('-o', '--output', dest='output', metavar='FILE', | |
help='write html output to FILE or --output=CONSOLE to ' | |
'print all errors and warnings to the command console') | |
parser.add_option('-p', '--performance', action='store_true', | |
dest='performance', | |
help='output memory and time performance (Availability: ' | |
'Unix)')
parser.add_option('-m', '--memory_db', dest='memory_db', action='store_true', | |
help='Use in-memory sqlite db instead of a temporary file. ' | |
'It is faster but uses more RAM.') | |
parser.add_option('-d', '--duplicate_trip_check', | |
dest='check_duplicate_trips', action='store_true', | |
help='Check for duplicate trips which go through the same ' | |
'stops with same service and start times') | |
parser.add_option('-l', '--limit_per_type', | |
dest='limit_per_type', action='store', type='int', | |
help='Maximum number of errors and warnings to keep of ' | |
'each type') | |
parser.add_option('--latest_version', dest='latest_version', | |
action='store', | |
help='a version number such as 1.2.1 or None to get the ' | |
'latest version from code.google.com. Output a warning if ' | |
'transitfeed.py is older than this version.') | |
parser.add_option('--service_gap_interval', | |
dest='service_gap_interval', | |
action='store', | |
type='int', | |
help='the number of consecutive days to search for with no ' | |
'scheduled service. For each interval with no service ' | |
'having this number of days or more a warning will be ' | |
'issued') | |
parser.set_defaults(manual_entry=True, output='validation-results.html', | |
memory_db=False, check_duplicate_trips=False, | |
limit_per_type=5, latest_version='', | |
service_gap_interval=13) | |
(options, args) = parser.parse_args() | |
if not len(args) == 1: | |
if options.manual_entry: | |
feed = raw_input('Enter Feed Location: ') | |
else: | |
parser.error('You must provide the path of a single feed') | |
else: | |
feed = args[0] | |
feed = feed.strip('"') | |
if options.performance: | |
return ProfileRunValidationOutputFromOptions(feed, options) | |
else: | |
return RunValidationOutputFromOptions(feed, options) | |
def ProfileRunValidationOutputFromOptions(feed, options): | |
"""Run RunValidationOutputFromOptions, print profile and return exit code.""" | |
import cProfile | |
import pstats | |
# runctx will modify a dict, but not locals(). We need a way to get rv back. | |
locals_for_exec = locals() | |
cProfile.runctx('rv = RunValidationOutputFromOptions(feed, options)', | |
globals(), locals_for_exec, 'validate-stats') | |
# Only available on Unix, http://docs.python.org/lib/module-resource.html | |
import resource | |
print "Time: %d seconds" % ( | |
resource.getrusage(resource.RUSAGE_SELF).ru_utime + | |
resource.getrusage(resource.RUSAGE_SELF).ru_stime) | |
# http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/286222 | |
# http://aspn.activestate.com/ASPN/Cookbook/ "The recipes are freely | |
# available for review and use." | |
def _VmB(VmKey): | |
"""Return size from proc status in bytes.""" | |
_proc_status = '/proc/%d/status' % os.getpid() | |
_scale = {'kB': 1024.0, 'mB': 1024.0*1024.0, | |
'KB': 1024.0, 'MB': 1024.0*1024.0} | |
# get pseudo file /proc/<pid>/status | |
try: | |
t = open(_proc_status) | |
v = t.read() | |
t.close() | |
except IOError:
raise Exception("no proc file %s" % _proc_status)
# get VmKey line e.g. 'VmRSS: 9999 kB\n ...' | |
i = v.index(VmKey) | |
v = v[i:].split(None, 3) # whitespace | |
if len(v) < 3:
raise Exception("%s" % v)
# convert Vm value to bytes | |
return int(float(v[1]) * _scale[v[2]]) | |
# I ran this on over a hundred GTFS files, comparing VmSize to VmRSS | |
# (resident set size). The difference was always under 2% or 3MB. | |
print "Virtual Memory Size: %d bytes" % _VmB('VmSize:') | |
# Output report of where CPU time was spent. | |
p = pstats.Stats('validate-stats') | |
p.strip_dirs() | |
p.sort_stats('cumulative').print_stats(30) | |
p.sort_stats('cumulative').print_callers(30) | |
return locals_for_exec['rv'] | |
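The /proc parsing in _VmB can be exercised without a live /proc file; a sketch with a fabricated status snippet (field names mirror /proc/&lt;pid&gt;/status on Linux):

```python
def parse_vm_bytes(status_text, key):
    # Find a 'VmRSS:    9999 kB' style line in /proc/<pid>/status text and
    # convert the value to bytes, mirroring the slicing and scaling in _VmB.
    scale = {'kB': 1024.0, 'KB': 1024.0, 'mB': 1024.0 ** 2, 'MB': 1024.0 ** 2}
    i = status_text.index(key)
    fields = status_text[i:].split(None, 3)  # e.g. ['VmRSS:', '9999', 'kB']
    return int(float(fields[1]) * scale[fields[2]])

sample = 'VmPeak:\t   10000 kB\nVmRSS:\t    9999 kB\n'
print(parse_vm_bytes(sample, 'VmRSS:'))  # 10238976 (9999 * 1024)
```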
if __name__ == '__main__': | |
util.RunWithCrashHandler(main) | |
__doc__ = """ | |
Package holding files for Google Transit Feed Specification Schedule Viewer. | |
""" | |
# This package contains the data files for schedule_viewer.py, a script that | |
# comes with the transitfeed distribution. According to the thread | |
# "[Distutils] distutils data_files and setuptools.pkg_resources are driving | |
# me crazy" this is the easiest way to include data files. My experience | |
# agrees. - Tom 2007-05-29 | |
Binary files a/origin-src/transitfeed-1.2.5/gtfsscheduleviewer/__init__.pyc and /dev/null differ
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" | |
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> | |
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:v="urn:schemas-microsoft-com:vml"> | |
<head> | |
<meta http-equiv="content-type" content="text/html; charset=utf-8"/> | |
<title>[agency]</title> | |
<link href="file/style.css" rel="stylesheet" type="text/css" /> | |
<style type="text/css"> | |
v\:* { | |
behavior:url(#default#VML); | |
} | |
</style> | |
<script src="http://[host]/maps?file=api&v=2&key=[key]" type="text/javascript"></script> | |
<script src="/file/labeled_marker.js" type="text/javascript"></script> | |
<script language="VBScript" src="/file/svgcheck.vbs"></script> | |
<script type="text/javascript"> | |
//<![CDATA[ | |
var map; | |
// Set to true when debugging for log statements about HTTP requests. | |
var log = false; | |
var twelveHourTime = false; // set to true to see AM/PM | |
var selectedRoute = null; | |
var forbid_editing = [forbid_editing]; | |
function load() { | |
if (GBrowserIsCompatible()) { | |
sizeRouteList(); | |
var map_dom = document.getElementById("map"); | |
map = new GMap2(map_dom); | |
map.addControl(new GLargeMapControl()); | |
map.addControl(new GMapTypeControl()); | |
map.addControl(new GOverviewMapControl()); | |
map.enableScrollWheelZoom(); | |
var bb = new GLatLngBounds(new GLatLng([min_lat], [min_lon]),new GLatLng([max_lat], [max_lon])); | |
map.setCenter(bb.getCenter(), map.getBoundsZoomLevel(bb)); | |
map.enableDoubleClickZoom(); | |
initIcons(); | |
GEvent.addListener(map, "moveend", callbackMoveEnd); | |
GEvent.addListener(map, "zoomend", callbackZoomEnd); | |
callbackMoveEnd(); // Pretend we just moved to current center | |
fetchRoutes(); | |
} | |
} | |
function callbackZoomEnd() { | |
} | |
function callbackMoveEnd() { | |
// Map moved, search for stops near the center | |
fetchStopsInBounds(map.getBounds()); | |
} | |
/** | |
* Fetch a sample of stops in the bounding box. | |
*/ | |
function fetchStopsInBounds(bounds) { | |
var url = "/json/boundboxstops?n=" + bounds.getNorthEast().lat() | |
+ "&e=" + bounds.getNorthEast().lng() | |
+ "&s=" + bounds.getSouthWest().lat() | |
+ "&w=" + bounds.getSouthWest().lng() | |
+ "&limit=50"; | |
if (log) | |
GLog.writeUrl(url); | |
GDownloadUrl(url, callbackDisplayStopsBackground); | |
} | |
/** | |
* Displays stops returned by the server on the map. Expected to be called | |
* when GDownloadUrl finishes. | |
* | |
* @param {String} data JSON encoded list of list, each | |
* containing a row of stops.txt | |
* @param {Number} responseCode Response code from server | |
*/ | |
function callbackDisplayStops(data, responseCode) { | |
if (responseCode != 200) { | |
return; | |
} | |
clearMap(); | |
var stops = eval(data); | |
if (stops.length == 1) { | |
var marker = addStopMarkerFromList(stops[0], true); | |
fetchStopInfoWindow(marker); | |
} else { | |
for (var i=0; i<stops.length; ++i) { | |
addStopMarkerFromList(stops[i], true); | |
} | |
} | |
} | |
function stopTextSearchSubmit() { | |
var text = document.getElementById("stopTextSearchInput").value; | |
var url = "/json/stopsearch?q=" + text; // TODO URI escape | |
if (log) | |
GLog.writeUrl(url); | |
GDownloadUrl(url, callbackDisplayStops); | |
} | |
function tripTextSearchSubmit() { | |
var text = document.getElementById("tripTextSearchInput").value; | |
selectTrip(text); | |
} | |
/** | |
* Add stops markers to the map and remove stops no longer in the | |
* background. | |
*/ | |
function callbackDisplayStopsBackground(data, responseCode) { | |
if (responseCode != 200) { | |
return; | |
} | |
var stops = eval(data); | |
// Make a list of all background markers | |
var oldStopMarkers = {}; | |
for (var stopId in stopMarkersBackground) { | |
oldStopMarkers[stopId] = 1; | |
} | |
// Add new markers to the map and remove from oldStopMarkers | |
for (var i=0; i<stops.length; ++i) { | |
var marker = addStopMarkerFromList(stops[i], false); | |
if (oldStopMarkers[marker.stopId]) { | |
delete oldStopMarkers[marker.stopId]; | |
} | |
} | |
// Delete all markers that remain in oldStopMarkers | |
for (var stopId in oldStopMarkers) { | |
GEvent.removeListener(stopMarkersBackground[stopId].clickListener); | |
map.removeOverlay(stopMarkersBackground[stopId]); | |
delete stopMarkersBackground[stopId]; | |
} | |
} | |
/** | |
* Remove all overlays from the map | |
*/ | |
function clearMap() { | |
boundsOfPolyLine = null; | |
for (var stopId in stopMarkersSelected) { | |
GEvent.removeListener(stopMarkersSelected[stopId].clickListener); | |
} | |
for (var stopId in stopMarkersBackground) { | |
GEvent.removeListener(stopMarkersBackground[stopId].clickListener); | |
} | |
stopMarkersSelected = {}; | |
stopMarkersBackground = {}; | |
map.clearOverlays(); | |
} | |
/** | |
* Return a new GIcon used for stops | |
*/ | |
function makeStopIcon() { | |
var icon = new GIcon(); | |
icon.iconSize = new GSize(12, 20); | |
icon.shadowSize = new GSize(22, 20); | |
icon.iconAnchor = new GPoint(6, 20); | |
icon.infoWindowAnchor = new GPoint(5, 1); | |
return icon; | |
} | |
/** | |
* Initialize icons. Call once during load. | |
*/ | |
function initIcons() { | |
iconSelected = makeStopIcon(); | |
iconSelected.image = "/file/mm_20_yellow.png"; | |
iconSelected.shadow = "/file/mm_20_shadow.png"; | |
iconBackground = makeStopIcon(); | |
iconBackground.image = "/file/mm_20_blue_trans.png"; | |
iconBackground.shadow = "/file/mm_20_shadow_trans.png"; | |
iconBackgroundStation = makeStopIcon(); | |
iconBackgroundStation.image = "/file/mm_20_red_trans.png"; | |
iconBackgroundStation.shadow = "/file/mm_20_shadow_trans.png"; | |
} | |
var iconSelected; | |
var iconBackground; | |
var iconBackgroundStation; | |
// Map from stopId to GMarker object for stops selected because they are | |
// part of a trip, etc | |
var stopMarkersSelected = {}; | |
// Map from stopId to GMarker object for stops found by the background | |
// passive search | |
var stopMarkersBackground = {}; | |
/** | |
* Add a stop to the map, given a row from stops.txt. | |
*/ | |
function addStopMarkerFromList(list, selected, text) { | |
return addStopMarker(list[0], list[1], list[2], list[3], list[4], selected, text); | |
} | |
/** | |
* Add a stop to the map, returning the new marker | |
*/ | |
function addStopMarker(stopId, stopName, stopLat, stopLon, locationType, selected, text) { | |
if (stopMarkersSelected[stopId]) { | |
// stop was selected | |
var marker = stopMarkersSelected[stopId]; | |
if (text) { | |
var oldText = marker.getText(); | |
if (oldText) { | |
oldText = oldText + "<br>"; | |
} | |
marker.setText(oldText + text); | |
} | |
return marker; | |
} | |
if (stopMarkersBackground[stopId]) { | |
// Stop was in the background. Either delete it from the background or | |
// leave it where it is. | |
if (selected) { | |
map.removeOverlay(stopMarkersBackground[stopId]); | |
delete stopMarkersBackground[stopId]; | |
} else { | |
return stopMarkersBackground[stopId]; | |
} | |
} | |
var icon; | |
if (selected) { | |
icon = iconSelected; | |
} else if (locationType == 1) { | |
icon = iconBackgroundStation; | |
} else { | |
icon = iconBackground; | |
} | |
var ll = new GLatLng(stopLat,stopLon); | |
var marker; | |
if (selected || text) { | |
if (!text) { | |
text = ""; // Make sure every selected icon has a text box, even if empty | |
} | |
var markerOpts = new Object(); | |
markerOpts.icon = icon; | |
markerOpts.labelText = text; | |
markerOpts.labelClass = "tooltip"; | |
markerOpts.labelOffset = new GSize(6, -20); | |
marker = new LabeledMarker(ll, markerOpts); | |
} else { | |
marker = new GMarker(ll, {icon: icon, draggable: !forbid_editing}); | |
} | |
marker.stopName = stopName; | |
marker.stopId = stopId; | |
if (selected) { | |
stopMarkersSelected[stopId] = marker; | |
} else { | |
stopMarkersBackground[stopId] = marker; | |
} | |
map.addOverlay(marker); | |
marker.clickListener = GEvent.addListener(marker, "click", function() {fetchStopInfoWindow(marker);}); | |
GEvent.addListener(marker, "dragend", function() { | |
document.getElementById("edit").style.visibility = "visible"; | |
document.getElementById("edit_status").innerHTML = "updating..."; | |
changeStopLocation(marker); | |
}); | |
return marker; | |
} | |
/** | |
* Sends new location of a stop to server. | |
*/ | |
function changeStopLocation(marker) { | |
var url = "/json/setstoplocation?id=" + | |
encodeURIComponent(marker.stopId) + | |
"&lat=" + encodeURIComponent(marker.getLatLng().lat()) + | |
"&lng=" + encodeURIComponent(marker.getLatLng().lng()); | |
GDownloadUrl(url, function(data, responseCode) { | |
document.getElementById("edit_status").innerHTML = unescape(data); | |
} ); | |
if (log) | |
GLog.writeUrl(url); | |
} | |
/** | |
* Saves the current state of the data file opened at server side to file. | |
*/ | |
function saveData() { | |
var url = "/json/savedata"; | |
GDownloadUrl(url, function(data, responseCode) { | |
document.getElementById("edit_status").innerHTML = data;} ); | |
if (log) | |
GLog.writeUrl(url); | |
} | |
/** | |
* Fetch the next departing trips from the stop for display in an info | |
* window. | |
*/ | |
function fetchStopInfoWindow(marker) { | |
var url = "/json/stoptrips?stop=" + encodeURIComponent(marker.stopId) + "&time=" + parseTimeInput(); | |
GDownloadUrl(url, function(data, responseCode) { | |
callbackDisplayStopInfoWindow(marker, data, responseCode); } ); | |
if (log) | |
GLog.writeUrl(url); | |
} | |
function callbackDisplayStopInfoWindow(marker, data, responseCode) { | |
if (responseCode != 200) { | |
return; | |
} | |
var timeTrips = eval(data); | |
var html = "<b>" + marker.stopName + "</b> (" + marker.stopId + ")<br>"; | |
var latLng = marker.getLatLng(); | |
html = html + "(" + latLng.lat() + ", " + latLng.lng() + ")<br>"; | |
html = html + "<table><tr><th>service_id<th>time<th>name</tr>"; | |
for (var i=0; i < timeTrips.length; ++i) { | |
var time = timeTrips[i][0]; | |
var tripid = timeTrips[i][1][0]; | |
var tripname = timeTrips[i][1][1]; | |
var service_id = timeTrips[i][1][2]; | |
var timepoint = timeTrips[i][2]; | |
html = html + "<tr onClick='map.closeInfoWindow();selectTrip(\"" + | |
tripid + "\")'>" + | |
"<td>" + service_id + | |
"<td align='right'>" + (timepoint ? "" : "~") + | |
formatTime(time) + "<td>" + tripname + "</tr>"; | |
} | |
html = html + "</table>"; | |
marker.openInfoWindowHtml(html); | |
} | |
function leadingZero(digit) { | |
if (digit < 10) | |
return "0" + digit; | |
else | |
return "" + digit; | |
} | |
function formatTime(secSinceMidnight) { | |
var hours = Math.floor(secSinceMidnight / 3600); | |
var suffix = ""; | |
if (twelveHourTime) { | |
// Use hours % 24 so times past midnight (e.g. hour 25 = 1:00) get the | |
// right am/pm marker; the original (hours >= 12) test marked them "p". | |
suffix = (hours % 24 >= 12) ? "p" : "a"; | |
suffix += (hours >= 24) ? " next day" : ""; | |
hours = hours % 12; | |
if (hours == 0) | |
hours = 12; | |
} | |
var minutes = Math.floor(secSinceMidnight / 60) % 60; | |
var seconds = secSinceMidnight % 60; | |
if (seconds == 0) { | |
return hours + ":" + leadingZero(minutes) + suffix; | |
} else { | |
return hours + ":" + leadingZero(minutes) + ":" + leadingZero(seconds) + suffix; | |
} | |
} | |
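// Worked examples (24-hour mode): formatTime(30600) is 8*3600 + 30*60 | |
// seconds, so it returns "8:30"; formatTime(3661) returns "1:01:01"; | |
// formatTime(90000) returns "25:00", the GTFS way of writing 1:00 on | |
// the next service day. | |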
function parseTimeInput() { | |
var text = document.getElementById("timeInput").value; | |
var m = text.match(/([012]?\d):([012345]?\d)(:([012345]?\d))?/); | |
if (m) { | |
var seconds = parseInt(m[1], 10) * 3600; | |
seconds += parseInt(m[2], 10) * 60; | |
if (m[4]) { | |
seconds += parseInt(m[4], 10); | |
} | |
return seconds; | |
} else { | |
if (log) | |
GLog.write("Couldn't match " + text); | |
} | |
} | |
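// For example, "8:30" in the time box parses to 8*3600 + 30*60 = 30600 | |
// seconds since midnight. The hour pattern [012]?\d deliberately accepts | |
// values up to 29, since GTFS uses hours >= 24 for trips that run past | |
// midnight on the previous service day. | |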
/** | |
* Create a string of dots that gets longer with the log of count. | |
*/ | |
function countToRepeatedDots(count) { | |
// Find log2(count) + 1 | |
var logCount = Math.ceil(Math.log(count) / Math.LN2) + 1; | |
return new Array(logCount + 1).join("."); | |
} | |
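// For example, countToRepeatedDots(1) yields "." and countToRepeatedDots(8) | |
// yields "...." (ceil(log2(8)) + 1 = 4 dots), so the dot string grows only | |
// slowly even for routes with hundreds of off-screen trips. | |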
function fetchRoutes() { | |
var url = "/json/routes"; | |
if (log) | |
GLog.writeUrl(url); | |
GDownloadUrl(url, callbackDisplayRoutes); | |
} | |
function callbackDisplayRoutes(data, responseCode) { | |
if (responseCode != 200) { | |
return; | |
} | |
var routes = eval(data); | |
var routesList = document.getElementById("routeList"); | |
while (routesList.hasChildNodes()) { | |
routesList.removeChild(routesList.firstChild); | |
} | |
for (var i = 0; i < routes.length; ++i) { | |
var routeId = routes[i][0]; | |
var shortName = document.createElement("span"); | |
shortName.className = "shortName"; | |
shortName.appendChild(document.createTextNode(routes[i][1] + " ")); | |
var routeName = routes[i][2]; | |
var elem = document.createElement("div"); | |
elem.appendChild(shortName); | |
elem.appendChild(document.createTextNode(routeName)); | |
elem.id = "route_" + routeId; | |
elem.className = "routeChoice"; | |
elem.title = routeName; | |
GEvent.addDomListener(elem, "click", makeClosure(selectRoute, routeId)); | |
var routeContainer = document.createElement("div"); | |
routeContainer.id = "route_container_" + routeId; | |
routeContainer.className = "routeContainer"; | |
routeContainer.appendChild(elem); | |
routesList.appendChild(routeContainer); | |
} | |
} | |
function selectRoute(routeId) { | |
var routesList = document.getElementById("routeList"); | |
var routeSpans = routesList.getElementsByTagName("div"); | |
for (var i = 0; i < routeSpans.length; ++i) { | |
if (routeSpans[i].className == "routeChoiceSelected") { | |
routeSpans[i].className = "routeChoice"; | |
} | |
} | |
// remove any previously-expanded route | |
var tripInfo = document.getElementById("tripInfo"); | |
if (tripInfo) | |
tripInfo.parentNode.removeChild(tripInfo); | |
selectedRoute = routeId; | |
var span = document.getElementById("route_" + routeId); | |
span.className = "routeChoiceSelected"; | |
fetchPatterns(routeId); | |
} | |
function fetchPatterns(routeId) { | |
var url = "/json/routepatterns?route=" + encodeURIComponent(routeId) + "&time=" + parseTimeInput(); | |
if (log) | |
GLog.writeUrl(url); | |
GDownloadUrl(url, callbackDisplayPatterns); | |
} | |
function callbackDisplayPatterns(data, responseCode) { | |
if (responseCode != 200) { | |
return; | |
} | |
var div = document.createElement("div"); | |
div.className = "tripSection"; | |
div.id = "tripInfo"; | |
var firstTrip = null; | |
var patterns = eval(data); | |
clearMap(); | |
for (var i = 0; i < patterns.length; ++i) { | |
var patternDiv = document.createElement("div"); | |
patternDiv.className = 'patternSection'; | |
div.appendChild(patternDiv); | |
var pat = patterns[i]; // [patName, patId, len(early trips), trips, len(later trips), has_non_zero_trip_type] | |
if (pat[5] == '1') { | |
patternDiv.className += " unusualPattern"; | |
} | |
patternDiv.appendChild(document.createTextNode(pat[0])); | |
patternDiv.appendChild(document.createTextNode(", " + (pat[2] + pat[3].length + pat[4]) + " trips: ")); | |
if (pat[2] > 0) { | |
patternDiv.appendChild(document.createTextNode(countToRepeatedDots(pat[2]) + " ")); | |
} | |
for (var j = 0; j < pat[3].length; ++j) { | |
var trip = pat[3][j]; | |
var tripId = trip[1]; | |
if ((i == 0) && (j == 0)) | |
firstTrip = tripId; | |
patternDiv.appendChild(document.createTextNode(" ")); | |
var span = document.createElement("span"); | |
span.appendChild(document.createTextNode(formatTime(trip[0]))); | |
span.id = "trip_" + tripId; | |
GEvent.addDomListener(span, "click", makeClosure(selectTrip, tripId)); | |
patternDiv.appendChild(span); | |
span.className = "tripChoice"; | |
} | |
if (pat[4] > 0) { | |
patternDiv.appendChild(document.createTextNode(" " + countToRepeatedDots(pat[4]))); | |
} | |
patternDiv.appendChild(document.createElement("br")); | |
} | |
var route = document.getElementById("route_container_" + selectedRoute); | |
route.appendChild(div); | |
if (firstTrip != null) | |
selectTrip(firstTrip); | |
} | |
// Needed to get around limitation in javascript scope rules. | |
// See http://calculist.blogspot.com/2005/12/gotcha-gotcha.html | |
function makeClosure(f, a, b, c) { | |
return function() { f(a, b, c); }; | |
} | |
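// Example of the gotcha makeClosure works around: in callbackDisplayRoutes, | |
// writing | |
//   GEvent.addDomListener(elem, "click", function() { selectRoute(routeId); }); | |
// would make every row select the routeId of the *last* loop iteration, | |
// because all the inline functions share a single function-scoped routeId | |
// binding. Calling makeClosure(selectRoute, routeId) evaluates routeId | |
// immediately, freezing the current value for that listener. | |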
function make1ArgClosure(f, a, b, c) { | |
return function(x) { f(x, a, b, c); }; | |
} | |
function make2ArgClosure(f, a, b, c) { | |
return function(x, y) { f(x, y, a, b, c); }; | |
} | |
function selectTrip(tripId) { | |
var tripInfo = document.getElementById("tripInfo"); | |
if (tripInfo) { | |
var tripSpans = tripInfo.getElementsByTagName('span'); | |
for (var i = 0; i < tripSpans.length; ++i) { | |
tripSpans[i].className = 'tripChoice'; | |
} | |
} | |
var span = document.getElementById("trip_" + tripId); | |
// Won't find the span if a different route is selected | |
if (span) { | |
span.className = 'tripChoiceSelected'; | |
} | |
clearMap(); | |
var url = "/json/tripstoptimes?trip=" + encodeURIComponent(tripId); | |
if (log) | |
GLog.writeUrl(url); | |
GDownloadUrl(url, callbackDisplayTripStopTimes); | |
fetchTripPolyLine(tripId); | |
fetchTripRows(tripId); | |
} | |
function callbackDisplayTripStopTimes(data, responseCode) { | |
if (responseCode != 200) { | |
return; | |
} | |
var stopsTimes = eval(data); | |
if (!stopsTimes) return; | |
displayTripStopTimes(stopsTimes[0], stopsTimes[1]); | |
} | |
function fetchTripPolyLine(tripId) { | |
var url = "/json/tripshape?trip=" + encodeURIComponent(tripId); | |
if (log) | |
GLog.writeUrl(url); | |
GDownloadUrl(url, callbackDisplayTripPolyLine); | |
} | |
function callbackDisplayTripPolyLine(data, responseCode) { | |
if (responseCode != 200) { | |
return; | |
} | |
var points = eval(data); | |
if (!points) return; | |
displayPolyLine(points); | |
} | |
var boundsOfPolyLine = null; | |
function expandBoundingBox(latLng) { | |
if (boundsOfPolyLine == null) { | |
boundsOfPolyLine = new GLatLngBounds(latLng, latLng); | |
} else { | |
boundsOfPolyLine.extend(latLng); | |
} | |
} | |
/** | |
* Display a line given a list of points | |
* | |
* @param {Array} List of lat,lng pairs | |
*/ | |
function displayPolyLine(points) { | |
var linePoints = new Array(); | |
for (var i = 0; i < points.length; ++i) { | |
var ll = new GLatLng(points[i][0], points[i][1]); | |
expandBoundingBox(ll); | |
linePoints[linePoints.length] = ll; | |
} | |
var polyline = new GPolyline(linePoints, "#FF0000", 4); | |
map.addOverlay(polyline); | |
map.setCenter(boundsOfPolyLine.getCenter(), map.getBoundsZoomLevel(boundsOfPolyLine)); | |
} | |
function displayTripStopTimes(stops, times) { | |
for (var i = 0; i < stops.length; ++i) { | |
var marker; | |
if (times && times[i] != null) { | |
marker = addStopMarkerFromList(stops[i], true, formatTime(times[i])); | |
} else { | |
marker = addStopMarkerFromList(stops[i], true); | |
} | |
expandBoundingBox(marker.getPoint()); | |
} | |
map.setCenter(boundsOfPolyLine.getCenter(), map.getBoundsZoomLevel(boundsOfPolyLine)); | |
} | |
function fetchTripRows(tripId) { | |
var url = "/json/triprows?trip=" + encodeURIComponent(tripId); | |
if (log) | |
GLog.writeUrl(url); | |
GDownloadUrl(url, make2ArgClosure(callbackDisplayTripRows, tripId)); | |
} | |
function callbackDisplayTripRows(data, responseCode, tripId) { | |
if (responseCode != 200) { | |
return; | |
} | |
var rows = eval(data); | |
if (!rows) return; | |
var html = ""; | |
for (var i = 0; i < rows.length; ++i) { | |
var filename = rows[i][0]; | |
var row = rows[i][1]; | |
html += "<b>" + filename + "</b>: " + formatDictionary(row) + "<br>"; | |
} | |
html += svgTag("/ttablegraph?height=100&trip=" + tripId, "height='115' width='100%'"); | |
var bottombarDiv = document.getElementById("bottombar"); | |
bottombarDiv.style.display = "block"; | |
bottombarDiv.style.height = "175px"; | |
bottombarDiv.innerHTML = html; | |
sizeRouteList(); | |
} | |
/** | |
* Return HTML to embed a SVG object in this page. src is the location of | |
* the SVG and attributes is inserted directly into the object or embed | |
* tag. | |
*/ | |
function svgTag(src, attributes) { | |
if (navigator.userAgent.toLowerCase().indexOf("msie") != -1) { | |
if (isSVGControlInstalled()) { | |
return "<embed pluginspage='http://www.adobe.com/svg/viewer/install/' src='" + src + "' " + attributes +"></embed>"; | |
} else { | |
return "<p>Please install the <a href='http://www.adobe.com/svg/viewer/install/'>Adobe SVG Viewer</a> to get SVG support in IE</p>"; | |
} | |
} else { | |
return "<object data='" + src + "' type='image/svg+xml' " + attributes + "><p>No SVG support in your browser. Try Firefox 1.5 or newer or install the <a href='http://www.adobe.com/svg/viewer/install/'>Adobe SVG Viewer</a></p></object>"; | |
} | |
} | |
/** | |
* Format an Array object containing key-value pairs into a human readable | |
* string. | |
*/ | |
function formatDictionary(d) { | |
var output = ""; | |
var first = 1; | |
for (var k in d) { | |
if (first) { | |
first = 0; | |
} else { | |
output += " "; | |
} | |
output += "<b>" + k + "</b>=" + d[k]; | |
} | |
return output; | |
} | |
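// For example, formatDictionary({trip_id: "T1", route_id: "R5"}) returns | |
// "<b>trip_id</b>=T1 <b>route_id</b>=R5" (key order follows the object's | |
// enumeration order). | |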
function windowHeight() { | |
// Standard browsers (Mozilla, Safari, etc.) | |
if (self.innerHeight) | |
return self.innerHeight; | |
// IE 6 | |
if (document.documentElement && document.documentElement.clientHeight) | |
return document.documentElement.clientHeight; | |
// IE 5 | |
if (document.body) | |
return document.body.clientHeight; | |
// Just in case. | |
return 0; | |
} | |
function sizeRouteList() { | |
var bottombarHeight = 0; | |
var bottombarDiv = document.getElementById('bottombar'); | |
if (bottombarDiv.style.display != 'none') { | |
bottombarHeight = document.getElementById('bottombar').offsetHeight | |
+ document.getElementById('bottombar').style.marginTop; | |
} | |
var height = windowHeight() - document.getElementById('topbar').offsetHeight - 15 - bottombarHeight; | |
document.getElementById('content').style.height = height + 'px'; | |
if (map) { | |
// Without this displayPolyLine does not use the correct map size | |
map.checkResize(); | |
} | |
} | |
//]]> | |
</script> | |
</head> | |
<body class='sidebar-left' onload="load();" onunload="GUnload()" onresize="sizeRouteList()"> | |
<div id='topbar'> | |
<div id="edit"> | |
<span id="edit_status">...</span> | |
<form onSubmit="saveData(); return false;"><input value="Save" type="submit"></form> | |
</div> | |
<div id="agencyHeader">[agency]</div> | |
</div> | |
<div id='content'> | |
<div id='sidebar-wrapper'><div id='sidebar'> | |
Time: <input type="text" value="8:00" width="9" id="timeInput"><br> | |
<form onSubmit="stopTextSearchSubmit(); return false;"> | |
Find Station: <input type="text" id="stopTextSearchInput"><input value="Search" type="submit"></form><br> | |
<form onSubmit="tripTextSearchSubmit(); return false;"> | |
Find Trip ID: <input type="text" id="tripTextSearchInput"><input value="Search" type="submit"></form><br> | |
<div id="routeList">routelist</div> | |
</div></div> | |
<div id='map-wrapper'> <div id='map'></div> </div> | |
</div> | |
<div id='bottombar'>bottom bar</div> | |
</body> | |
</html> | |
/* | |
* LabeledMarker Class | |
* | |
* Copyright 2007 Mike Purvis (http://uwmike.com) | |
* | |
* Licensed under the Apache License, Version 2.0 (the "License"); | |
* you may not use this file except in compliance with the License. | |
* You may obtain a copy of the License at | |
* | |
* http://www.apache.org/licenses/LICENSE-2.0 | |
* | |
* Unless required by applicable law or agreed to in writing, software | |
* distributed under the License is distributed on an "AS IS" BASIS, | |
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
* See the License for the specific language governing permissions and | |
* limitations under the License. | |
* | |
* This class extends the Maps API's standard GMarker class with the ability | |
* to support markers with textual labels. Please see articles here: | |
* | |
* http://googlemapsbook.com/2007/01/22/extending-gmarker/ | |
* http://googlemapsbook.com/2007/03/06/clickable-labeledmarker/ | |
*/ | |
/** | |
* Constructor for LabeledMarker, which picks up on strings from the GMarker | |
* options array, and then calls the GMarker constructor. | |
* | |
* @param {GLatLng} latlng | |
* @param {GMarkerOptions} Named optional arguments: | |
* opt_opts.labelText {String} text to place in the overlay div. | |
* opt_opts.labelClass {String} class to use for the overlay div. | |
* (default "markerLabel") | |
* opt_opts.labelOffset {GSize} label offset, the x- and y-distance between | |
* the marker's latlng and the upper-left corner of the text div. | |
*/ | |
function LabeledMarker(latlng, opt_opts){ | |
this.latlng_ = latlng; | |
this.opts_ = opt_opts; | |
this.initText_ = opt_opts.labelText || ""; | |
this.labelClass_ = opt_opts.labelClass || "markerLabel"; | |
this.labelOffset_ = opt_opts.labelOffset || new GSize(0, 0); | |
this.clickable_ = (opt_opts.clickable !== false); // default true; "|| true" would ignore an explicit clickable: false | |
if (opt_opts.draggable) { | |
// This version of LabeledMarker doesn't support dragging. | |
opt_opts.draggable = false; | |
} | |
GMarker.apply(this, arguments); | |
} | |
// It's a limitation of JavaScript inheritance that we can't conveniently | |
// inherit from GMarker without having to run its constructor. In order for | |
// the constructor to run, it requires some dummy GLatLng. | |
LabeledMarker.prototype = new GMarker(new GLatLng(0, 0)); | |
/** | |
* Is called by GMap2's addOverlay method. Creates the text div and adds it | |
* to the relevant parent div. | |
* | |
* @param {GMap2} map the map that has had this labeledmarker added to it. | |
*/ | |
LabeledMarker.prototype.initialize = function(map) { | |
// Do the GMarker constructor first. | |
GMarker.prototype.initialize.apply(this, arguments); | |
this.map_ = map; | |
this.setText(this.initText_); | |
} | |
/** | |
* Create a new div for this label. | |
*/ | |
LabeledMarker.prototype.makeDiv_ = function(map) { | |
if (this.div_) { | |
return; | |
} | |
this.div_ = document.createElement("div"); | |
this.div_.className = this.labelClass_; | |
this.div_.style.position = "absolute"; | |
this.div_.style.cursor = "pointer"; | |
this.map_.getPane(G_MAP_MARKER_PANE).appendChild(this.div_); | |
if (this.clickable_) { | |
/** | |
* Creates a closure for passing events through to the source marker | |
* This is located in here to avoid cluttering the global namespace. | |
* The downside is that the local variables from initialize() continue | |
* to occupy space on the stack. | |
* | |
* @param {Object} object to receive event trigger. | |
* @param {GEventListener} event to be triggered. | |
*/ | |
function newEventPassthru(obj, event) { | |
return function() { | |
GEvent.trigger(obj, event); | |
}; | |
} | |
// Pass through events fired on the text div to the marker. | |
var eventPassthrus = ['click', 'dblclick', 'mousedown', 'mouseup', 'mouseover', 'mouseout']; | |
for(var i = 0; i < eventPassthrus.length; i++) { | |
var name = eventPassthrus[i]; | |
GEvent.addDomListener(this.div_, name, newEventPassthru(this, name)); | |
} | |
} | |
} | |
/** | |
* Return the html in the div of this label, or "" if none is set | |
*/ | |
LabeledMarker.prototype.getText = function() { | |
if (this.div_) { | |
return this.div_.innerHTML; | |
} else { | |
return ""; | |
} | |
} | |
/** | |
* Set the html in the div of this label to text. If text is "" or null remove | |
* the div. | |
*/ | |
LabeledMarker.prototype.setText = function(text) { | |
if (this.div_) { | |
if (text) { | |
this.div_.innerHTML = text; | |
} else { | |
// remove div | |
GEvent.clearInstanceListeners(this.div_); | |
this.div_.parentNode.removeChild(this.div_); | |
this.div_ = null; | |
} | |
} else { | |
if (text) { | |
this.makeDiv_(); | |
this.div_.innerHTML = text; | |
this.redraw(); | |
} | |
} | |
} | |
/** | |
* Move the text div based on current projection and zoom level, call the redraw() | |
* handler in GMarker. | |
* | |
* @param {Boolean} force will be true when pixel coordinates need to be recomputed. | |
*/ | |
LabeledMarker.prototype.redraw = function(force) { | |
GMarker.prototype.redraw.apply(this, arguments); | |
if (this.div_) { | |
// Calculate the DIV coordinates of two opposite corners of our bounds to | |
// get the size and position of our rectangle | |
var p = this.map_.fromLatLngToDivPixel(this.latlng_); | |
var z = GOverlay.getZIndex(this.latlng_.lat()); | |
// Now position our div based on the div coordinates of our bounds | |
this.div_.style.left = (p.x + this.labelOffset_.width) + "px"; | |
this.div_.style.top = (p.y + this.labelOffset_.height) + "px"; | |
this.div_.style.zIndex = z; // in front of the marker | |
} | |
} | |
/** | |
* Remove the text div from the map pane, destroy event passthrus, and calls the | |
* default remove() handler in GMarker. | |
*/ | |
LabeledMarker.prototype.remove = function() { | |
this.setText(null); | |
GMarker.prototype.remove.apply(this, arguments); | |
} | |
/** | |
* Return a copy of this overlay, for the parent Map to duplicate itself in full. This | |
* is part of the Overlay interface and is used, for example, to copy everything in the | |
* main view into the mini-map. | |
*/ | |
LabeledMarker.prototype.copy = function() { | |
return new LabeledMarker(this.latlng_, this.opts_); | |
} | |
Binary files a/origin-src/transitfeed-1.2.5/gtfsscheduleviewer/files/mm_20_blue.png and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/gtfsscheduleviewer/files/mm_20_blue_trans.png and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/gtfsscheduleviewer/files/mm_20_red_trans.png and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/gtfsscheduleviewer/files/mm_20_shadow.png and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/gtfsscheduleviewer/files/mm_20_shadow_trans.png and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/gtfsscheduleviewer/files/mm_20_yellow.png and /dev/null differ
html { overflow: hidden; } | |
html, body { | |
margin: 0; | |
padding: 0; | |
height: 100%; | |
} | |
body { margin: 5px; } | |
#content { | |
position: relative; | |
margin-top: 5px; | |
} | |
#map-wrapper { | |
position: relative; | |
height: 100%; | |
width: auto; | |
left: 0; | |
top: 0; | |
z-index: 100; | |
} | |
#map { | |
position: relative; | |
height: 100%; | |
width: auto; | |
border: 1px solid #aaa; | |
} | |
#sidebar-wrapper { | |
position: absolute; | |
height: 100%; | |
width: 220px; | |
top: 0; | |
border: 1px solid #aaa; | |
overflow: auto; | |
z-index: 300; | |
} | |
#sidebar { | |
position: relative; | |
width: auto; | |
padding: 4px; | |
overflow: hidden; | |
} | |
#topbar { | |
position: relative; | |
padding: 2px; | |
border: 1px solid #aaa; | |
margin: 0; | |
} | |
#topbar h1 { | |
white-space: nowrap; | |
overflow: hidden; | |
font-size: 14pt; | |
font-weight: bold; | |
margin: 0; | |
} | |
body.sidebar-right #map-wrapper { margin-right: 229px; } | |
body.sidebar-right #sidebar-wrapper { right: 0; } | |
body.sidebar-left #map { margin-left: 229px; } | |
body.sidebar-left #sidebar { left: 0; } | |
body.nosidebar #map { margin: 0; } | |
body.nosidebar #sidebar { display: none; } | |
#bottombar { | |
position: relative; | |
padding: 2px; | |
border: 1px solid #aaa; | |
margin-top: 5px; | |
display: none; | |
} | |
/* holly hack for IE to get position:bottom right | |
see: http://www.positioniseverything.net/abs_relbugs.html | |
\*/ | |
* html #topbar { height: 1px; } | |
/* */ | |
body { | |
font-family: helvetica, arial, sans-serif; | |
} | |
h1 { | |
margin-top: 0.5em; | |
margin-bottom: 0.5em; | |
} | |
h2 { | |
margin-top: 0.2em; | |
margin-bottom: 0.2em; | |
} | |
h3 { | |
margin-top: 0.2em; | |
margin-bottom: 0.2em; | |
} | |
.tooltip { | |
white-space: nowrap; | |
padding: 2px; | |
color: black; | |
font-size: 12px; | |
background-color: white; | |
border: 1px solid black; | |
cursor: pointer; | |
filter:alpha(opacity=60); | |
-moz-opacity: 0.6; | |
opacity: 0.6; | |
} | |
#routeList { | |
border: 1px solid black; | |
overflow: auto; | |
} | |
.shortName { | |
font-size: larger; | |
font-weight: bold; | |
} | |
.routeChoice,.tripChoice,.routeChoiceSelected,.tripChoiceSelected { | |
white-space: nowrap; | |
cursor: pointer; | |
padding: 0px 2px; | |
color: black; | |
line-height: 1.4em; | |
font-size: smaller; | |
overflow: hidden; | |
} | |
.tripChoice { | |
color: blue; | |
} | |
.routeChoiceSelected,.tripChoiceSelected { | |
background-color: blue; | |
color: white; | |
} | |
.tripSection { | |
padding-left: 0px; | |
font-size: 10pt; | |
background-color: lightblue; | |
} | |
.patternSection { | |
margin-left: 8px; | |
padding-left: 2px; | |
border-bottom: 1px solid grey; | |
} | |
.unusualPattern { | |
background-color: #aaa; | |
color: #444; | |
} | |
/* Following styles are used by location_editor.py */ | |
#edit { | |
visibility: hidden; | |
float: right; | |
font-size: 80%; | |
} | |
#edit form { | |
display: inline; | |
} |
' Copyright 1999-2000 Adobe Systems Inc. All rights reserved. Permission to redistribute | |
' granted provided that this file is not modified in any way. This file is provided with | |
' absolutely no warranties of any kind. | |
Function isSVGControlInstalled() | |
on error resume next | |
isSVGControlInstalled = IsObject(CreateObject("Adobe.SVGCtl")) | |
end Function | |
#!/usr/bin/python2.5 | |
# | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""Output svg/xml data for a marey graph | |
Marey graphs are a visualization form typically used for timetables. Time | |
is on the x-axis and position on the y-axis. This module reads data from a | |
transitfeed.Schedule and creates a marey graph in svg/xml format. The graph | |
shows the speed between stops for each trip of a route. | |
TODO: This module was taken from an internal Google tool. It works but is not | |
well integrated into transitfeed and schedule_viewer. It also has many | |
ugly hacks to compensate for the fixed canvas size and so on, which could be cleaned up. | |
For a little more information see (I didn't make this URL ;-) | |
http://transliteracies.english.ucsb.edu/post/research-project/research-clearinghouse-individual/research-reports/the-indexical-imagination-marey%e2%80%99s-graphic-method-and-the-technological-transformation-of-writing-in-the-nineteenth-century | |
MareyGraph: Class, keeps cache of graph data and graph properties | |
and draws marey graphs in svg/xml format on request. | |
""" | |
import itertools | |
import transitfeed | |
class MareyGraph: | |
"""Produces and caches marey graph from transit feed data.""" | |
_MAX_ZOOM = 5.0 # change docstring of ChangeScaleFactor if this changes | |
_DUMMY_SEPARATOR = 10 # pixels | |
def __init__(self): | |
# Timetable-related state | |
self._cache = str() | |
self._stoplist = [] | |
self._tlist = [] | |
self._stations = [] | |
self._decorators = [] | |
# TODO: Initialize default values via constructor parameters | |
# or via a class constants | |
# Graph properties | |
self._tspan = 30 # number of hours to display | |
self._offset = 0 # starting hour | |
self._hour_grid = 60 # number of pixels for an hour | |
self._min_grid = 5 # number of pixels between subhour lines | |
# Canvas properties | |
self._zoomfactor = 0.9 # svg Scaling factor | |
self._xoffset = 0 # move graph horizontally | |
self._yoffset = 0 # move graph vertically | |
self._bgcolor = "lightgrey" | |
# height/width of graph canvas before transform | |
self._gwidth = self._tspan * self._hour_grid | |
def Draw(self, stoplist=None, triplist=None, height=520): | |
"""Main interface for drawing the marey graph. | |
If called without arguments, the data generated in the previous call | |
will be used. New decorators can be added between calls. | |
Args: | |
# Class Stop is defined in transitfeed.py | |
stoplist: [Stop, Stop, ...] | |
# Class Trip is defined in transitfeed.py | |
triplist: [Trip, Trip, ...] | |
Returns: | |
# A string that contain a svg/xml web-page with a marey graph. | |
" <svg width="1440" height="520" version="1.1" ... " | |
""" | |
output = str() | |
if not triplist: | |
triplist = [] | |
if not stoplist: | |
stoplist = [] | |
if not self._cache or triplist or stoplist: | |
self._gheight = height | |
self._tlist=triplist | |
self._slist=stoplist | |
self._decorators = [] | |
self._stations = self._BuildStations(stoplist) | |
self._cache = "%s %s %s %s" % (self._DrawBox(), | |
self._DrawHours(), | |
self._DrawStations(), | |
self._DrawTrips(triplist)) | |
output = "%s %s %s %s" % (self._DrawHeader(), | |
self._cache, | |
self._DrawDecorators(), | |
self._DrawFooter()) | |
return output | |
def _DrawHeader(self): | |
svg_header = """ | |
<svg width="%s" height="%s" version="1.1" | |
xmlns="http://www.w3.org/2000/svg"> | |
<script type="text/ecmascript"><![CDATA[ | |
function init(evt) { | |
if ( window.svgDocument == null ) | |
svgDocument = evt.target.ownerDocument; | |
} | |
var oldLine = 0; | |
var oldStroke = 0; | |
var hoffset= %s; // Data from python | |
function parseLinePoints(pointnode){ | |
var wordlist = pointnode.split(" "); | |
var xlist = new Array(); | |
var h; | |
var m; | |
// TODO: add linebreaks as appropriate | |
var xstr = " Stop Times :"; | |
for (i=0;i<wordlist.length;i=i+2){ | |
var coord = wordlist[i].split(","); | |
h = Math.floor(parseInt((coord[0])-20)/60); | |
m = parseInt((coord[0]-20))%%60; | |
xstr = xstr +" "+ (hoffset+h) +":"+m; | |
} | |
return xstr; | |
} | |
function LineClick(tripid, x) { | |
var line = document.getElementById(tripid); | |
if (oldLine) | |
oldLine.setAttribute("stroke",oldStroke); | |
oldLine = line; | |
oldStroke = line.getAttribute("stroke"); | |
line.setAttribute("stroke","#fff"); | |
var dynTxt = document.getElementById("dynamicText"); | |
var tripIdTxt = document.createTextNode(x); | |
while (dynTxt.hasChildNodes()){ | |
dynTxt.removeChild(dynTxt.firstChild); | |
} | |
dynTxt.appendChild(tripIdTxt); | |
} | |
]]> </script> | |
<style type="text/css"><![CDATA[ | |
.T { fill:none; stroke-width:1.5 } | |
.TB { fill:none; stroke:#e20; stroke-width:2 } | |
.Station { fill:none; stroke-width:1 } | |
.Dec { fill:none; stroke-width:1.5 } | |
.FullHour { fill:none; stroke:#eee; stroke-width:1 } | |
.SubHour { fill:none; stroke:#ddd; stroke-width:1 } | |
.Label { fill:#aaa; font-family:Helvetica,Arial,sans; | |
text-anchor:middle } | |
.Info { fill:#111; font-family:Helvetica,Arial,sans; | |
text-anchor:start; } | |
]]></style> | |
<text class="Info" id="dynamicText" x="0" y="%d"></text> | |
<g id="mcanvas" transform="translate(%s,%s)"> | |
<g id="zcanvas" transform="scale(%s)"> | |
""" % (self._gwidth + self._xoffset + 20, self._gheight + 15, | |
self._offset, self._gheight + 10, | |
self._xoffset, self._yoffset, self._zoomfactor) | |
return svg_header | |
def _DrawFooter(self): | |
return "</g></g></svg>" | |
def _DrawDecorators(self): | |
"""Used to draw fancy overlays on trip graphs.""" | |
return " ".join(self._decorators) | |
def _DrawBox(self): | |
tmpstr = """<rect x="%s" y="%s" width="%s" height="%s" | |
fill="lightgrey" stroke="%s" stroke-width="2" /> | |
""" % (0, 0, self._gwidth + 20, self._gheight, self._bgcolor) | |
return tmpstr | |
def _BuildStations(self, stoplist): | |
"""Dispatches the best algorithm for calculating station line position. | |
Args: | |
# Class Stop is defined in transitfeed.py | |
stoplist: [Stop, Stop, ...] | |
# Class Trip is defined in transitfeed.py | |
triplist: [Trip, Trip, ...] | |
Returns: | |
# One integer y-coordinate for each station normalized between | |
# 0 and X, where X is the height of the graph in pixels | |
[0, 33, 140, ... , X] | |
""" | |
stations = [] | |
dists = self._EuclidianDistances(stoplist) | |
stations = self._CalculateYLines(dists) | |
return stations | |
def _EuclidianDistances(self,slist): | |
"""Calculate euclidian distances between stops. | |
Uses the stoplists long/lats to approximate distances | |
between stations and build a list with y-coordinates for the | |
horizontal lines in the graph. | |
Args: | |
# Class Stop is defined in transitfeed.py | |
stoplist: [Stop, Stop, ...] | |
Returns: | |
# One integer for each pair of stations | |
# indicating the approximate distance | |
[0,33,140, ... ,X] | |
""" | |
e_dists2 = [transitfeed.ApproximateDistanceBetweenStops(stop, tail) for | |
(stop,tail) in itertools.izip(slist, slist[1:])] | |
return e_dists2 | |
def _CalculateYLines(self, dists): | |
"""Builds a list with y-coordinates for the horizontal lines in the graph. | |
Args: | |
# One integer for each pair of stations | |
# indicating the approximate distance | |
dists: [0,33,140, ... ,X] | |
Returns: | |
# One integer y-coordinate for each station normalized between | |
# 0 and X, where X is the height of the graph in pixels | |
[0, 33, 140, ... , X] | |
""" | |
tot_dist = sum(dists) | |
if tot_dist > 0: | |
pixel_dist = [float(d * (self._gheight-20))/tot_dist for d in dists] | |
pixel_grid = [0]+[int(pd + sum(pixel_dist[0:i])) for i,pd in | |
enumerate(pixel_dist)] | |
else: | |
pixel_grid = [] | |
return pixel_grid | |
def _TravelTimes(self,triplist,index=0): | |
""" Calculate distances and plot stops. | |
Uses a timetable to approximate distances | |
between stations | |
Args: | |
# Class Trip is defined in transitfeed.py | |
triplist: [Trip, Trip, ...] | |
# (Optional) Index into triplist preferred for timetable calculation | |
index: 3 | |
Returns: | |
# One integer for each pair of stations | |
# indicating the approximate distance | |
[0,33,140, ... ,X] | |
""" | |
def DistanceInTravelTime(dep_secs, arr_secs): | |
t_dist = arr_secs-dep_secs | |
if t_dist<0: | |
t_dist = self._DUMMY_SEPARATOR # min separation | |
return t_dist | |
if not triplist: | |
return [] | |
if 0 < index < len(triplist): | |
trip = triplist[index] | |
else: | |
trip = triplist[0] | |
t_dists2 = [DistanceInTravelTime(stop[3],tail[2]) for (stop,tail) | |
in itertools.izip(trip.GetTimeStops(),trip.GetTimeStops()[1:])] | |
return t_dists2 | |
def _AddWarning(self, msg): | |
print msg | |
def _DrawTrips(self,triplist,colpar=""): | |
"""Generates svg polylines for each transit trip. | |
Args: | |
# Class Trip is defined in transitfeed.py | |
[Trip, Trip, ...] | |
Returns: | |
# A string containing a polyline tag for each trip | |
' <polyline class="T" stroke="#336633" points="433,0 ...' | |
""" | |
stations = [] | |
if not self._stations and triplist: | |
self._stations = self._CalculateYLines(self._TravelTimes(triplist)) | |
if not self._stations: | |
self._AddWarning("Failed to use traveltimes for graph") | |
self._stations = self._CalculateYLines(self._Uniform(triplist)) | |
if not self._stations: | |
self._AddWarning("Failed to calculate station distances") | |
return | |
stations = self._stations | |
tmpstrs = [] | |
servlist = [] | |
for t in triplist: | |
if not colpar: | |
if t.service_id not in servlist: | |
servlist.append(t.service_id) | |
shade = int(servlist.index(t.service_id) * (200/len(servlist))+55) | |
color = "#00%s00" % hex(shade)[2:4] | |
else: | |
color=colpar | |
start_offsets = [0] | |
first_stop = t.GetTimeStops()[0] | |
for j,freq_offset in enumerate(start_offsets): | |
if j>0 and not colpar: | |
color="purple" | |
scriptcall = 'onmouseover="LineClick(\'%s\',\'Trip %s starting %s\')"' % (t.trip_id, | |
t.trip_id, transitfeed.FormatSecondsSinceMidnight(t.GetStartTime())) | |
tmpstrhead = '<polyline class="T" id="%s" stroke="%s" %s points="' % \ | |
(str(t.trip_id),color, scriptcall) | |
tmpstrs.append(tmpstrhead) | |
for i, s in enumerate(t.GetTimeStops()): | |
arr_t = s[0] | |
dep_t = s[1] | |
if arr_t is None or dep_t is None: | |
continue | |
arr_x = int(arr_t/3600.0 * self._hour_grid) - self._hour_grid * self._offset | |
dep_x = int(dep_t/3600.0 * self._hour_grid) - self._hour_grid * self._offset | |
tmpstrs.append("%s,%s " % (int(arr_x+20), int(stations[i]+20))) | |
tmpstrs.append("%s,%s " % (int(dep_x+20), int(stations[i]+20))) | |
tmpstrs.append('" />') | |
return "".join(tmpstrs) | |
def _Uniform(self, triplist): | |
"""Fallback to assuming uniform distance between stations""" | |
# This should not be necessary, but we are in fallback mode | |
longest = max([len(t.GetTimeStops()) for t in triplist]) | |
return [100] * longest | |
def _DrawStations(self, color="#aaa"): | |
"""Generates svg with a horizontal line for each station/stop. | |
Args: | |
# Class Stop is defined in transitfeed.py | |
stations: [Stop, Stop, ...] | |
Returns: | |
# A string containing a polyline tag for each stop | |
" <polyline class="Station" stroke="#336633" points="20,0 ..." | |
""" | |
stations=self._stations | |
tmpstrs = [] | |
for y in stations: | |
tmpstrs.append(' <polyline class="Station" stroke="%s" \ | |
points="%s,%s, %s,%s" />' %(color,20,20+y+.5,self._gwidth+20,20+y+.5)) | |
return "".join(tmpstrs) | |
def _DrawHours(self): | |
"""Generates svg to show a vertical hour and sub-hour grid | |
Returns: | |
# A string containing a polyline tag for each grid line | |
" <polyline class="FullHour" points="20,0 ..." | |
""" | |
tmpstrs = [] | |
for i in range(0, self._gwidth, self._min_grid): | |
if i % self._hour_grid == 0: | |
tmpstrs.append('<polyline class="FullHour" points="%d,%d, %d,%d" />' \ | |
% (i + .5 + 20, 20, i + .5 + 20, self._gheight)) | |
tmpstrs.append('<text class="Label" x="%d" y="%d">%d</text>' | |
% (i + 20, 20, | |
(i / self._hour_grid + self._offset) % 24)) | |
else: | |
tmpstrs.append('<polyline class="SubHour" points="%d,%d,%d,%d" />' \ | |
% (i + .5 + 20, 20, i + .5 + 20, self._gheight)) | |
return "".join(tmpstrs) | |
def AddStationDecoration(self, index, color="#f00"): | |
"""Flushes existing decorations and highlights the given station-line. | |
Args: | |
# Integer, index of stop to be highlighted. | |
index: 4 | |
# An optional string with a html color code | |
color: "#fff" | |
""" | |
tmpstr = str() | |
num_stations = len(self._stations) | |
ind = int(index) | |
if self._stations: | |
if 0<ind<num_stations: | |
y = self._stations[ind] | |
tmpstr = '<polyline class="Dec" stroke="%s" points="%s,%s,%s,%s" />' \ | |
% (color, 20, 20+y+.5, self._gwidth+20, 20+y+.5) | |
self._decorators.append(tmpstr) | |
def AddTripDecoration(self, triplist, color="#f00"): | |
"""Flushes existing decorations and highlights the given trips. | |
Args: | |
# Class Trip is defined in transitfeed.py | |
triplist: [Trip, Trip, ...] | |
# An optional string with a html color code | |
color: "#fff" | |
""" | |
tmpstr = self._DrawTrips(triplist,color) | |
self._decorators.append(tmpstr) | |
def ChangeScaleFactor(self, newfactor): | |
"""Changes the zoom of the graph manually. | |
1.0 is the original canvas size. | |
Args: | |
# float value between 0.0 and 5.0 | |
newfactor: 0.7 | |
""" | |
if float(newfactor) > 0 and float(newfactor) < self._MAX_ZOOM: | |
self._zoomfactor = newfactor | |
def ScaleLarger(self): | |
"""Increases the zoom of the graph one step (0.1 units).""" | |
newfactor = self._zoomfactor + 0.1 | |
if float(newfactor) > 0 and float(newfactor) < self._MAX_ZOOM: | |
self._zoomfactor = newfactor | |
def ScaleSmaller(self): | |
"""Decreases the zoom of the graph one step(0.1 units).""" | |
newfactor = self._zoomfactor - 0.1 | |
if float(newfactor) > 0 and float(newfactor) < self._MAX_ZOOM: | |
self._zoomfactor = newfactor | |
def ClearDecorators(self): | |
"""Removes all the current decorators. | |
""" | |
self._decorators = [] | |
def AddTextStripDecoration(self,txtstr): | |
tmpstr = '<text class="Info" x="%d" y="%d">%s</text>' % (0, | |
20 + self._gheight, txtstr) | |
self._decorators.append(tmpstr) | |
def SetSpan(self, first_arr, last_arr, mint=5 ,maxt=30): | |
s_hour = (first_arr / 3600) - 1 | |
e_hour = (last_arr / 3600) + 1 | |
self._offset = max(min(s_hour, 23), 0) | |
self._tspan = max(min(e_hour - s_hour, maxt), mint) | |
self._gwidth = self._tspan * self._hour_grid | |
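The station-spacing step above (`_CalculateYLines`) normalizes the distances between consecutive stops into pixel y-coordinates spanning the graph height. A self-contained sketch of that logic, with the 20 px margin and cumulative-sum behavior mirrored from the method (the function name and default height here are illustrative, not part of the module):

```python
# Standalone sketch of the logic in MareyGraph._CalculateYLines:
# inter-station distances are scaled to the drawable height (graph
# height minus a 20 px margin) and accumulated into one y-coordinate
# per station, starting at 0.
def calculate_y_lines(dists, gheight=520):
    """Map inter-station distances to cumulative pixel y-coordinates."""
    tot_dist = sum(dists)
    if tot_dist <= 0:
        return []
    pixel_dist = [float(d * (gheight - 20)) / tot_dist for d in dists]
    # Cumulative sums give one y-coordinate per station boundary.
    return [0] + [int(pd + sum(pixel_dist[0:i]))
                  for i, pd in enumerate(pixel_dist)]

print(calculate_y_lines([100, 100, 300], gheight=520))  # [0, 100, 200, 500]
```

With three segments of 100, 100, and 300 distance units and a 520 px canvas, the 500 drawable pixels are split proportionally, which is why evenly spaced timetable stops produce evenly spaced horizontal lines in the SVG.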
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
""" | |
This package provides an implementation of a converter from the KML | |
file format into the Google transit feed format. | |
The KmlParser class is the main class implementing the parser. | |
Currently only information about stops is extracted from a kml file. | |
The extractor expects the stops to be represented as placemarks with | |
a single point. | |
""" | |
import re | |
import string | |
import sys | |
import transitfeed | |
from transitfeed import util | |
import xml.dom.minidom as minidom | |
import zipfile | |
class Placemark(object): | |
def __init__(self): | |
self.name = "" | |
self.coordinates = [] | |
def IsPoint(self): | |
return len(self.coordinates) == 1 | |
def IsLine(self): | |
return len(self.coordinates) > 1 | |
class KmlParser(object): | |
def __init__(self, stopNameRe = '(.*)'): | |
""" | |
Args: | |
stopNameRe - a regular expression to extract a stop name from a | |
placemark name | |
""" | |
self.stopNameRe = re.compile(stopNameRe) | |
def Parse(self, filename, feed): | |
""" | |
Reads the kml file, parses it, and updates the Google transit feed | |
object with the extracted information. | |
Args: | |
filename - kml file name | |
feed - an instance of Schedule class to be updated | |
""" | |
dom = minidom.parse(filename) | |
self.ParseDom(dom, feed) | |
def ParseDom(self, dom, feed): | |
""" | |
Parses the given kml dom tree and updates the Google transit feed object. | |
Args: | |
dom - kml dom tree | |
feed - an instance of Schedule class to be updated | |
""" | |
shape_num = 0 | |
for node in dom.getElementsByTagName('Placemark'): | |
p = self.ParsePlacemark(node) | |
if p.IsPoint(): | |
(lon, lat) = p.coordinates[0] | |
m = self.stopNameRe.search(p.name) | |
feed.AddStop(lat, lon, m.group(1)) | |
elif p.IsLine(): | |
shape_num = shape_num + 1 | |
shape = transitfeed.Shape("kml_shape_" + str(shape_num)) | |
for (lon, lat) in p.coordinates: | |
shape.AddPoint(lat, lon) | |
feed.AddShapeObject(shape) | |
def ParsePlacemark(self, node): | |
ret = Placemark() | |
for child in node.childNodes: | |
if child.nodeName == 'name': | |
ret.name = self.ExtractText(child) | |
if child.nodeName == 'Point' or child.nodeName == 'LineString': | |
ret.coordinates = self.ExtractCoordinates(child) | |
return ret | |
def ExtractText(self, node): | |
for child in node.childNodes: | |
if child.nodeType == child.TEXT_NODE: | |
return child.wholeText # is a unicode string | |
return "" | |
def ExtractCoordinates(self, node): | |
coordinatesText = "" | |
for child in node.childNodes: | |
if child.nodeName == 'coordinates': | |
coordinatesText = self.ExtractText(child) | |
break | |
ret = [] | |
for point in coordinatesText.split(): | |
coords = point.split(',') | |
ret.append((float(coords[0]), float(coords[1]))) | |
return ret | |
def main(): | |
usage = \ | |
"""%prog <input.kml> <output GTFS.zip> | |
Reads KML file <input.kml> and creates GTFS file <output GTFS.zip> with | |
placemarks in the KML represented as stops. | |
""" | |
parser = util.OptionParserLongError( | |
usage=usage, version='%prog '+transitfeed.__version__) | |
(options, args) = parser.parse_args() | |
if len(args) != 2: | |
parser.error('You did not provide all required command line arguments.') | |
if args[0] == 'IWantMyCrash': | |
raise Exception('For testCrashHandler') | |
parser = KmlParser() | |
feed = transitfeed.Schedule() | |
feed.save_all_stops = True | |
parser.Parse(args[0], feed) | |
feed.WriteGoogleTransitFeed(args[1]) | |
print "Done." | |
if __name__ == '__main__': | |
util.RunWithCrashHandler(main) | |
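`KmlParser.ExtractCoordinates` above reads a KML `<coordinates>` text node, which is a whitespace-separated list of `lon,lat[,alt]` triples, and keeps `(lon, lat)` float pairs. A minimal standalone sketch of the same parsing (the function name and the sample fragment are illustrative):

```python
# Sketch of the parsing done by KmlParser.ExtractCoordinates: pull the
# <coordinates> text out of a KML fragment and keep (lon, lat) pairs,
# discarding any altitude component.
import xml.dom.minidom as minidom

def extract_coordinates(kml_fragment):
    dom = minidom.parseString(kml_fragment)
    text = dom.getElementsByTagName('coordinates')[0].firstChild.wholeText
    return [(float(p.split(',')[0]), float(p.split(',')[1]))
            for p in text.split()]

pairs = extract_coordinates(
    '<Point><coordinates>151.21,-33.87,0</coordinates></Point>')
print(pairs)  # [(151.21, -33.87)]
```

Note the lon-before-lat ordering: KML stores longitude first, which is why the parser above passes `(lat, lon)` swapped into `feed.AddStop` and `shape.AddPoint`.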
#!/usr/bin/python2.5 | |
# | |
# Copyright 2008 Google Inc. All Rights Reserved. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""A module for writing GTFS feeds out into Google Earth KML format. | |
For usage information run kmlwriter.py --help | |
If no output filename is specified, the output file will be given the same | |
name as the feed file (with ".kml" appended) and will be placed in the same | |
directory as the input feed. | |
The resulting KML file has a folder hierarchy which looks like this: | |
- Stops | |
* stop1 | |
* stop2 | |
- Routes | |
- route1 | |
- Shapes | |
* shape1 | |
* shape2 | |
- Patterns | |
- pattern1 | |
- pattern2 | |
- Trips | |
* trip1 | |
* trip2 | |
- Shapes | |
* shape1 | |
- Shape Points | |
* shape_point1 | |
* shape_point2 | |
* shape2 | |
- Shape Points | |
* shape_point1 | |
* shape_point2 | |
where the hyphens represent folders and the asterisks represent placemarks. | |
In a trip, a vehicle visits stops in a certain sequence. Such a sequence of | |
stops is called a pattern. A pattern is represented by a linestring connecting | |
the stops. The "Shapes" subfolder of a route folder contains placemarks for | |
each shape used by a trip in the route. The "Patterns" subfolder contains a | |
placemark for each unique pattern used by a trip in the route. The "Trips" | |
subfolder contains a placemark for each trip in the route. | |
Since there can be many trips and trips for the same route are usually similar, | |
they are not exported unless the --showtrips option is used. There is also | |
another option --splitroutes that groups the routes by vehicle type resulting | |
in a folder hierarchy which looks like this at the top level: | |
- Stops | |
- Routes - Bus | |
- Routes - Tram | |
- Routes - Rail | |
- Shapes | |
""" | |
try: | |
import xml.etree.ElementTree as ET # python 2.5 | |
except ImportError, e: | |
import elementtree.ElementTree as ET # older pythons | |
import optparse | |
import os.path | |
import sys | |
import transitfeed | |
from transitfeed import util | |
class KMLWriter(object): | |
"""This class knows how to write out a transit feed as KML. | |
Sample usage: | |
KMLWriter().Write(<transitfeed.Schedule object>, <output filename>) | |
Attributes: | |
show_trips: True if the individual trips should be included in the routes. | |
altitude_per_sec: Rate at which trip altitude grows with time; 0 keeps trips on the ground. | |
split_routes: True if the routes should be split by type. | |
shape_points: True if individual shape points should be plotted. | |
""" | |
def __init__(self): | |
"""Initialise.""" | |
self.show_trips = False | |
self.split_routes = False | |
self.shape_points = False | |
self.altitude_per_sec = 0.0 | |
self.date_filter = None | |
def _SetIndentation(self, elem, level=0): | |
"""Indented the ElementTree DOM. | |
This is the recommended way to cause an ElementTree DOM to be | |
prettyprinted on output, as per: http://effbot.org/zone/element-lib.htm | |
Run this on the root element before outputting the tree. | |
Args: | |
elem: The element to start indenting from, usually the document root. | |
level: Current indentation level for recursion. | |
""" | |
i = "\n" + level*" " | |
if len(elem): | |
if not elem.text or not elem.text.strip(): | |
elem.text = i + " " | |
for elem in elem: | |
self._SetIndentation(elem, level+1) | |
if not elem.tail or not elem.tail.strip(): | |
elem.tail = i | |
else: | |
if level and (not elem.tail or not elem.tail.strip()): | |
elem.tail = i | |
def _CreateFolder(self, parent, name, visible=True, description=None): | |
"""Create a KML Folder element. | |
Args: | |
parent: The parent ElementTree.Element instance. | |
name: The folder name as a string. | |
visible: Whether the folder is initially visible or not. | |
description: A description string or None. | |
Returns: | |
The folder ElementTree.Element instance. | |
""" | |
folder = ET.SubElement(parent, 'Folder') | |
name_tag = ET.SubElement(folder, 'name') | |
name_tag.text = name | |
if description is not None: | |
desc_tag = ET.SubElement(folder, 'description') | |
desc_tag.text = description | |
if not visible: | |
visibility = ET.SubElement(folder, 'visibility') | |
visibility.text = '0' | |
return folder | |
def _CreateStyleForRoute(self, doc, route): | |
"""Create a KML Style element for the route. | |
The style sets the line colour if the route colour is specified. The | |
line thickness is set depending on the vehicle type. | |
Args: | |
doc: The KML Document ElementTree.Element instance. | |
route: The transitfeed.Route to create the style for. | |
Returns: | |
The id of the style as a string. | |
""" | |
style_id = 'route_%s' % route.route_id | |
style = ET.SubElement(doc, 'Style', {'id': style_id}) | |
linestyle = ET.SubElement(style, 'LineStyle') | |
width = ET.SubElement(linestyle, 'width') | |
type_to_width = {0: '3', # Tram | |
1: '3', # Subway | |
2: '5', # Rail | |
3: '1'} # Bus | |
width.text = type_to_width.get(route.route_type, '1') | |
if route.route_color: | |
color = ET.SubElement(linestyle, 'color') | |
red = route.route_color[0:2].lower() | |
green = route.route_color[2:4].lower() | |
blue = route.route_color[4:6].lower() | |
color.text = 'ff%s%s%s' % (blue, green, red) | |
return style_id | |
def _CreatePlacemark(self, parent, name, style_id=None, visible=True, | |
description=None): | |
"""Create a KML Placemark element. | |
Args: | |
parent: The parent ElementTree.Element instance. | |
name: The placemark name as a string. | |
style_id: If not None, the id of a style to use for the placemark. | |
visible: Whether the placemark is initially visible or not. | |
description: A description string or None. | |
Returns: | |
The placemark ElementTree.Element instance. | |
""" | |
placemark = ET.SubElement(parent, 'Placemark') | |
placemark_name = ET.SubElement(placemark, 'name') | |
placemark_name.text = name | |
if description is not None: | |
desc_tag = ET.SubElement(placemark, 'description') | |
desc_tag.text = description | |
if style_id is not None: | |
styleurl = ET.SubElement(placemark, 'styleUrl') | |
styleurl.text = '#%s' % style_id | |
if not visible: | |
visibility = ET.SubElement(placemark, 'visibility') | |
visibility.text = '0' | |
return placemark | |
def _CreateLineString(self, parent, coordinate_list): | |
"""Create a KML LineString element. | |
The points of the string are given in coordinate_list. Every element of | |
coordinate_list should be one of a tuple (longitude, latitude) or a tuple | |
(longitude, latitude, altitude). | |
Args: | |
parent: The parent ElementTree.Element instance. | |
coordinate_list: The list of coordinates. | |
Returns: | |
The LineString ElementTree.Element instance or None if coordinate_list is | |
empty. | |
""" | |
if not coordinate_list: | |
return None | |
linestring = ET.SubElement(parent, 'LineString') | |
tessellate = ET.SubElement(linestring, 'tessellate') | |
tessellate.text = '1' | |
if len(coordinate_list[0]) == 3: | |
altitude_mode = ET.SubElement(linestring, 'altitudeMode') | |
altitude_mode.text = 'absolute' | |
coordinates = ET.SubElement(linestring, 'coordinates') | |
if len(coordinate_list[0]) == 3: | |
coordinate_str_list = ['%f,%f,%f' % t for t in coordinate_list] | |
else: | |
coordinate_str_list = ['%f,%f' % t for t in coordinate_list] | |
coordinates.text = ' '.join(coordinate_str_list) | |
return linestring | |
def _CreateLineStringForShape(self, parent, shape): | |
"""Create a KML LineString using coordinates from a shape. | |
Args: | |
parent: The parent ElementTree.Element instance. | |
shape: The transitfeed.Shape instance. | |
Returns: | |
The LineString ElementTree.Element instance or None if coordinate_list is | |
empty. | |
""" | |
coordinate_list = [(longitude, latitude) for | |
(latitude, longitude, distance) in shape.points] | |
return self._CreateLineString(parent, coordinate_list) | |
def _CreateStopsFolder(self, schedule, doc): | |
"""Create a KML Folder containing placemarks for each stop in the schedule. | |
If there are no stops in the schedule then no folder is created. | |
Args: | |
schedule: The transitfeed.Schedule instance. | |
doc: The KML Document ElementTree.Element instance. | |
Returns: | |
The Folder ElementTree.Element instance or None if there are no stops. | |
""" | |
if not schedule.GetStopList(): | |
return None | |
stop_folder = self._CreateFolder(doc, 'Stops') | |
stops = list(schedule.GetStopList()) | |
stops.sort(key=lambda x: x.stop_name) | |
for stop in stops: | |
desc_items = [] | |
if stop.stop_desc: | |
desc_items.append(stop.stop_desc) | |
if stop.stop_url: | |
desc_items.append('Stop info page: <a href="%s">%s</a>' % ( | |
stop.stop_url, stop.stop_url)) | |
description = '<br/>'.join(desc_items) or None | |
placemark = self._CreatePlacemark(stop_folder, stop.stop_name, | |
description=description) | |
point = ET.SubElement(placemark, 'Point') | |
coordinates = ET.SubElement(point, 'coordinates') | |
coordinates.text = '%.6f,%.6f' % (stop.stop_lon, stop.stop_lat) | |
return stop_folder | |
def _CreateRoutePatternsFolder(self, parent, route, | |
style_id=None, visible=True): | |
"""Create a KML Folder containing placemarks for each pattern in the route. | |
A pattern is a sequence of stops used by one of the trips in the route. | |
If there are no patterns for the route then no folder is created and None | |
is returned. | |
Args: | |
parent: The parent ElementTree.Element instance. | |
route: The transitfeed.Route instance. | |
style_id: The id of a style to use if not None. | |
visible: Whether the folder is initially visible or not. | |
Returns: | |
The Folder ElementTree.Element instance or None if there are no patterns. | |
""" | |
pattern_id_to_trips = route.GetPatternIdTripDict() | |
if not pattern_id_to_trips: | |
return None | |
# sort by number of trips using the pattern | |
pattern_trips = pattern_id_to_trips.values() | |
pattern_trips.sort(lambda a, b: cmp(len(b), len(a))) | |
folder = self._CreateFolder(parent, 'Patterns', visible) | |
for n, trips in enumerate(pattern_trips): | |
trip_ids = [trip.trip_id for trip in trips] | |
name = 'Pattern %d (trips: %d)' % (n+1, len(trips)) | |
description = 'Trips using this pattern (%d in total): %s' % ( | |
len(trips), ', '.join(trip_ids)) | |
placemark = self._CreatePlacemark(folder, name, style_id, visible, | |
description) | |
coordinates = [(stop.stop_lon, stop.stop_lat) | |
for stop in trips[0].GetPattern()] | |
self._CreateLineString(placemark, coordinates) | |
return folder | |
def _CreateRouteShapesFolder(self, schedule, parent, route, | |
style_id=None, visible=True): | |
"""Create a KML Folder for the shapes of a route. | |
The folder contains a placemark for each shape referenced by a trip in the | |
route. If there are no such shapes, no folder is created and None is | |
returned. | |
Args: | |
schedule: The transitfeed.Schedule instance. | |
parent: The parent ElementTree.Element instance. | |
route: The transitfeed.Route instance. | |
style_id: The id of a style to use if not None. | |
visible: Whether the placemark is initially visible or not. | |
Returns: | |
The Folder ElementTree.Element instance or None. | |
""" | |
shape_id_to_trips = {} | |
for trip in route.trips: | |
if trip.shape_id: | |
shape_id_to_trips.setdefault(trip.shape_id, []).append(trip) | |
if not shape_id_to_trips: | |
return None | |
# sort by the number of trips using the shape | |
shape_id_to_trips_items = shape_id_to_trips.items() | |
shape_id_to_trips_items.sort(lambda a, b: cmp(len(b[1]), len(a[1]))) | |
folder = self._CreateFolder(parent, 'Shapes', visible) | |
for shape_id, trips in shape_id_to_trips_items: | |
trip_ids = [trip.trip_id for trip in trips] | |
name = '%s (trips: %d)' % (shape_id, len(trips)) | |
description = 'Trips using this shape (%d in total): %s' % ( | |
len(trips), ', '.join(trip_ids)) | |
placemark = self._CreatePlacemark(folder, name, style_id, visible, | |
description) | |
self._CreateLineStringForShape(placemark, schedule.GetShape(shape_id)) | |
return folder | |
def _CreateRouteTripsFolder(self, parent, route, style_id=None, schedule=None): | |
"""Create a KML Folder containing all the trips in the route. | |
The folder contains a placemark for each of these trips. If there are no | |
trips in the route, no folder is created and None is returned. | |
Args: | |
parent: The parent ElementTree.Element instance. | |
route: The transitfeed.Route instance. | |
style_id: A style id string for the placemarks or None. | |
schedule: The transitfeed.Schedule instance or None. | |
Returns: | |
The Folder ElementTree.Element instance or None. | |
""" | |
if not route.trips: | |
return None | |
trips = list(route.trips) | |
trips.sort(key=lambda x: x.trip_id) | |
trips_folder = self._CreateFolder(parent, 'Trips', visible=False) | |
for trip in trips: | |
if (self.date_filter and | |
not trip.service_period.IsActiveOn(self.date_filter)): | |
continue | |
if trip.trip_headsign: | |
description = 'Headsign: %s' % trip.trip_headsign | |
else: | |
description = None | |
coordinate_list = [] | |
for secs, stoptime, tp in trip.GetTimeInterpolatedStops(): | |
if self.altitude_per_sec > 0: | |
coordinate_list.append((stoptime.stop.stop_lon, stoptime.stop.stop_lat, | |
(secs - 3600 * 4) * self.altitude_per_sec)) | |
else: | |
coordinate_list.append((stoptime.stop.stop_lon, | |
stoptime.stop.stop_lat)) | |
placemark = self._CreatePlacemark(trips_folder, | |
trip.trip_id, | |
style_id=style_id, | |
visible=False, | |
description=description) | |
self._CreateLineString(placemark, coordinate_list) | |
return trips_folder | |
def _CreateRoutesFolder(self, schedule, doc, route_type=None): | |
"""Create a KML Folder containing routes in a schedule. | |
The folder contains a subfolder for each route in the schedule of type | |
route_type. If route_type is None, then all routes are selected. Each | |
subfolder contains a flattened graph placemark, a route shapes placemark | |
and, if show_trips is True, a subfolder containing placemarks for each of | |
the trips in the route. | |
If there are no routes in the schedule then no folder is created and None | |
is returned. | |
Args: | |
schedule: The transitfeed.Schedule instance. | |
doc: The KML Document ElementTree.Element instance. | |
route_type: The route type integer or None. | |
Returns: | |
The Folder ElementTree.Element instance or None. | |
""" | |
def GetRouteName(route): | |
"""Return a placemark name for the route. | |
Args: | |
route: The transitfeed.Route instance. | |
Returns: | |
The name as a string. | |
""" | |
name_parts = [] | |
if route.route_short_name: | |
name_parts.append('<b>%s</b>' % route.route_short_name) | |
if route.route_long_name: | |
name_parts.append(route.route_long_name) | |
return ' - '.join(name_parts) or route.route_id | |
def GetRouteDescription(route): | |
"""Return a placemark description for the route. | |
Args: | |
route: The transitfeed.Route instance. | |
Returns: | |
The description as a string. | |
""" | |
desc_items = [] | |
if route.route_desc: | |
desc_items.append(route.route_desc) | |
if route.route_url: | |
desc_items.append('Route info page: <a href="%s">%s</a>' % ( | |
route.route_url, route.route_url)) | |
description = '<br/>'.join(desc_items) | |
return description or None | |
routes = [route for route in schedule.GetRouteList() | |
if route_type is None or route.route_type == route_type] | |
if not routes: | |
return None | |
routes.sort(key=lambda x: GetRouteName(x)) | |
if route_type is not None: | |
route_type_names = {0: 'Tram, Streetcar or Light rail', | |
1: 'Subway or Metro', | |
2: 'Rail', | |
3: 'Bus', | |
4: 'Ferry', | |
5: 'Cable car', | |
6: 'Gondola or suspended cable car', | |
7: 'Funicular'} | |
type_name = route_type_names.get(route_type, str(route_type)) | |
folder_name = 'Routes - %s' % type_name | |
else: | |
folder_name = 'Routes' | |
routes_folder = self._CreateFolder(doc, folder_name, visible=False) | |
for route in routes: | |
style_id = self._CreateStyleForRoute(doc, route) | |
route_folder = self._CreateFolder(routes_folder, | |
GetRouteName(route), | |
description=GetRouteDescription(route)) | |
self._CreateRouteShapesFolder(schedule, route_folder, route, | |
style_id, False) | |
self._CreateRoutePatternsFolder(route_folder, route, style_id, False) | |
if self.show_trips: | |
self._CreateRouteTripsFolder(route_folder, route, style_id, schedule) | |
return routes_folder | |
def _CreateShapesFolder(self, schedule, doc): | |
"""Create a KML Folder containing all the shapes in a schedule. | |
The folder contains a placemark for each shape. If there are no shapes in | |
the schedule then the folder is not created and None is returned. | |
Args: | |
schedule: The transitfeed.Schedule instance. | |
doc: The KML Document ElementTree.Element instance. | |
Returns: | |
The Folder ElementTree.Element instance or None. | |
""" | |
if not schedule.GetShapeList(): | |
return None | |
shapes_folder = self._CreateFolder(doc, 'Shapes') | |
shapes = list(schedule.GetShapeList()) | |
shapes.sort(key=lambda x: x.shape_id) | |
for shape in shapes: | |
placemark = self._CreatePlacemark(shapes_folder, shape.shape_id) | |
self._CreateLineStringForShape(placemark, shape) | |
if self.shape_points: | |
self._CreateShapePointFolder(shapes_folder, shape) | |
return shapes_folder | |
def _CreateShapePointFolder(self, shapes_folder, shape): | |
"""Create a KML Folder containing all the shape points in a shape. | |
The folder contains a placemark for each shape point.
Args: | |
shapes_folder: A KML Shape Folder ElementTree.Element instance | |
shape: The shape to plot. | |
Returns: | |
The Folder ElementTree.Element instance or None. | |
""" | |
folder_name = shape.shape_id + ' Shape Points' | |
folder = self._CreateFolder(shapes_folder, folder_name, visible=False) | |
for (index, (lat, lon, dist)) in enumerate(shape.points): | |
placemark = self._CreatePlacemark(folder, str(index+1)) | |
point = ET.SubElement(placemark, 'Point') | |
coordinates = ET.SubElement(point, 'coordinates') | |
coordinates.text = '%.6f,%.6f' % (lon, lat) | |
return folder | |
def Write(self, schedule, output_file): | |
"""Writes out a feed as KML. | |
Args: | |
schedule: A transitfeed.Schedule object containing the feed to write. | |
output_file: The name of the output KML file, or file object to use. | |
""" | |
# Generate the DOM to write | |
root = ET.Element('kml') | |
root.attrib['xmlns'] = 'http://earth.google.com/kml/2.1' | |
doc = ET.SubElement(root, 'Document') | |
open_tag = ET.SubElement(doc, 'open') | |
open_tag.text = '1' | |
self._CreateStopsFolder(schedule, doc) | |
if self.split_routes: | |
route_types = set() | |
for route in schedule.GetRouteList(): | |
route_types.add(route.route_type) | |
route_types = list(route_types) | |
route_types.sort() | |
for route_type in route_types: | |
self._CreateRoutesFolder(schedule, doc, route_type) | |
else: | |
self._CreateRoutesFolder(schedule, doc) | |
self._CreateShapesFolder(schedule, doc) | |
# Make sure we pretty-print | |
self._SetIndentation(root) | |
# Now write the output | |
if isinstance(output_file, file): | |
output = output_file | |
else: | |
output = open(output_file, 'w') | |
output.write("""<?xml version="1.0" encoding="UTF-8"?>\n""") | |
ET.ElementTree(root).write(output, 'utf-8') | |
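A minimal, standard-library-only sketch of the document skeleton that Write builds (folders elided; written in Python 3 form with an explicit byte buffer):

```python
import io
import xml.etree.ElementTree as ET

def make_kml_skeleton():
    # Builds <kml><Document><open>1</open></Document></kml>, as in Write above.
    root = ET.Element('kml')
    root.attrib['xmlns'] = 'http://earth.google.com/kml/2.1'
    doc = ET.SubElement(root, 'Document')
    open_tag = ET.SubElement(doc, 'open')
    open_tag.text = '1'
    buf = io.BytesIO()
    # Emit the XML declaration by hand, then the serialised tree.
    buf.write(b'<?xml version="1.0" encoding="UTF-8"?>\n')
    ET.ElementTree(root).write(buf, encoding='utf-8', xml_declaration=False)
    return buf.getvalue()
```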
def main(): | |
usage = \ | |
'''%prog [options] <input GTFS.zip> [<output.kml>] | |
Reads GTFS file or directory <input GTFS.zip> and creates a KML file | |
<output.kml> that contains the geographical features of the input. If | |
<output.kml> is omitted a default filename is picked based on | |
<input GTFS.zip>. By default the KML contains all stops and shapes. | |
''' | |
parser = util.OptionParserLongError( | |
usage=usage, version='%prog '+transitfeed.__version__) | |
parser.add_option('-t', '--showtrips', action='store_true', | |
dest='show_trips', | |
help='include the individual trips for each route') | |
parser.add_option('-a', '--altitude_per_sec', action='store', type='float', | |
dest='altitude_per_sec', | |
help='if greater than 0 trips are drawn with time axis ' | |
'set to this many meters high for each second of time') | |
parser.add_option('-s', '--splitroutes', action='store_true', | |
dest='split_routes', | |
help='split the routes by type') | |
parser.add_option('-d', '--date_filter', action='store', type='string', | |
dest='date_filter', | |
help='Restrict to trips active on date YYYYMMDD') | |
parser.add_option('-p', '--display_shape_points', action='store_true', | |
dest='shape_points', | |
help='shows the actual points along shapes') | |
parser.set_defaults(altitude_per_sec=1.0) | |
options, args = parser.parse_args() | |
if len(args) < 1: | |
parser.error('You must provide the path of an input GTFS file.') | |
if args[0] == 'IWantMyCrash': | |
raise Exception('For testCrashHandler') | |
input_path = args[0] | |
if len(args) >= 2: | |
output_path = args[1] | |
else: | |
path = os.path.normpath(input_path) | |
(feed_dir, feed) = os.path.split(path) | |
if '.' in feed: | |
feed = feed.rsplit('.', 1)[0] # strip extension | |
output_filename = '%s.kml' % feed | |
output_path = os.path.join(feed_dir, output_filename) | |
loader = transitfeed.Loader(input_path, | |
problems=transitfeed.ProblemReporter()) | |
feed = loader.Load() | |
print "Writing %s" % output_path | |
writer = KMLWriter() | |
writer.show_trips = options.show_trips | |
writer.altitude_per_sec = options.altitude_per_sec | |
writer.split_routes = options.split_routes | |
writer.date_filter = options.date_filter | |
writer.shape_points = options.shape_points | |
writer.Write(feed, output_path) | |
if __name__ == '__main__': | |
util.RunWithCrashHandler(main) | |
#!/usr/bin/python2.5 | |
# | |
# Copyright 2007 Google Inc. All Rights Reserved. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""A tool for merging two Google Transit feeds. | |
Given two Google Transit feeds intended to cover two disjoint calendar
intervals, this tool will attempt to produce a single feed by merging as much
of the two feeds together as possible.
For example, most stops remain the same throughout the year. Therefore, many | |
of the stops given in stops.txt for the first feed represent the same stops | |
given in the second feed. This tool will try to merge these stops so they | |
only appear once in the resultant feed. | |
A note on terminology: The first schedule is referred to as the "old" schedule; | |
the second as the "new" schedule. The resultant schedule is referred to as | |
the "merged" schedule. Names of things in the old schedule are variations of | |
the letter "a" while names of things from the new schedule are variations of | |
"b". The objects that represents routes, agencies and so on are called | |
"entities". | |
usage: merge.py [options] old_feed_path new_feed_path merged_feed_path | |
Run merge.py --help for a list of the possible options. | |
""" | |
__author__ = 'timothy.stranex@gmail.com (Timothy Stranex)' | |
import datetime | |
import optparse | |
import os | |
import re | |
import sys | |
import time | |
import transitfeed | |
from transitfeed import util | |
import webbrowser | |
# TODO: | |
# 1. write unit tests that use actual data | |
# 2. write a proper trip and stop_times merger | |
# 3. add a serialised access method for stop_times and shapes to transitfeed | |
# 4. add support for merging schedules which have some service period overlap | |
def ApproximateDistanceBetweenPoints(pa, pb): | |
"""Finds the distance between two points on the Earth's surface. | |
This is an approximate distance based on assuming that the Earth is a sphere. | |
The points are specified by their latitude and longitude.
Args: | |
pa: the first (lat, lon) point tuple | |
pb: the second (lat, lon) point tuple | |
Returns: | |
The distance as a float in metres. | |
""" | |
alat, alon = pa | |
blat, blon = pb | |
sa = transitfeed.Stop(lat=alat, lng=alon) | |
sb = transitfeed.Stop(lat=blat, lng=blon) | |
return transitfeed.ApproximateDistanceBetweenStops(sa, sb) | |
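transitfeed performs the actual computation; as a self-contained illustration, the same kind of spherical approximation can be written directly with the haversine formula (the 6371 km mean Earth radius is an assumption):

```python
import math

# Self-contained spherical (haversine) distance between (lat, lon) tuples in
# degrees -- the same kind of approximation the transitfeed helper performs.
def approximate_distance_m(pa, pb):
    alat, alon = (math.radians(d) for d in pa)
    blat, blon = (math.radians(d) for d in pb)
    dlat, dlon = blat - alat, blon - alon
    h = (math.sin(dlat / 2) ** 2 +
         math.cos(alat) * math.cos(blat) * math.sin(dlon / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(h))  # assumed mean radius 6371 km
```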
class Error(Exception): | |
"""The base exception class for this module.""" | |
class MergeError(Error): | |
"""An error produced when two entities could not be merged.""" | |
class MergeProblemWithContext(transitfeed.ExceptionWithContext): | |
"""The base exception class for problem reporting in the merge module. | |
Attributes: | |
dataset_merger: The DataSetMerger that generated this problem. | |
entity_type_name: The entity type of the dataset_merger. This is just | |
dataset_merger.ENTITY_TYPE_NAME. | |
ERROR_TEXT: The text used for generating the problem message. | |
""" | |
def __init__(self, dataset_merger, problem_type=transitfeed.TYPE_WARNING, | |
**kwargs): | |
"""Initialise the exception object. | |
Args: | |
dataset_merger: The DataSetMerger instance that generated this problem. | |
problem_type: The problem severity. This should be set to one of the | |
corresponding constants in transitfeed. | |
kwargs: Keyword arguments to be saved as instance attributes. | |
""" | |
kwargs['type'] = problem_type | |
kwargs['entity_type_name'] = dataset_merger.ENTITY_TYPE_NAME | |
transitfeed.ExceptionWithContext.__init__(self, None, None, **kwargs) | |
self.dataset_merger = dataset_merger | |
def FormatContext(self): | |
return "In files '%s'" % self.dataset_merger.FILE_NAME | |
class SameIdButNotMerged(MergeProblemWithContext): | |
ERROR_TEXT = ("There is a %(entity_type_name)s in the old feed with id " | |
"'%(id)s' and one from the new feed with the same id but " | |
"they could not be merged:") | |
class CalendarsNotDisjoint(MergeProblemWithContext): | |
ERROR_TEXT = ("The service periods could not be merged since they are not " | |
"disjoint.") | |
class MergeNotImplemented(MergeProblemWithContext): | |
ERROR_TEXT = ("The feed merger does not currently support merging in this " | |
"file. The entries have been duplicated instead.") | |
class FareRulesBroken(MergeProblemWithContext): | |
ERROR_TEXT = ("The feed merger is currently unable to handle fare rules " | |
"properly.") | |
class MergeProblemReporterBase(transitfeed.ProblemReporterBase): | |
"""The base problem reporter class for the merge module.""" | |
def SameIdButNotMerged(self, dataset, entity_id, reason): | |
self._Report(SameIdButNotMerged(dataset, id=entity_id, reason=reason)) | |
def CalendarsNotDisjoint(self, dataset): | |
self._Report(CalendarsNotDisjoint(dataset, | |
problem_type=transitfeed.TYPE_ERROR)) | |
def MergeNotImplemented(self, dataset): | |
self._Report(MergeNotImplemented(dataset)) | |
def FareRulesBroken(self, dataset): | |
self._Report(FareRulesBroken(dataset)) | |
class ExceptionProblemReporter(MergeProblemReporterBase): | |
"""A problem reporter that reports errors by raising exceptions.""" | |
def __init__(self, raise_warnings=False): | |
"""Initialise. | |
Args: | |
raise_warnings: If this is True then warnings are also raised as | |
exceptions. | |
""" | |
MergeProblemReporterBase.__init__(self) | |
self._raise_warnings = raise_warnings | |
def _Report(self, merge_problem): | |
if self._raise_warnings or merge_problem.IsError(): | |
raise merge_problem | |
class HTMLProblemReporter(MergeProblemReporterBase): | |
"""A problem reporter which generates HTML output.""" | |
def __init__(self): | |
"""Initialise.""" | |
MergeProblemReporterBase.__init__(self) | |
self._dataset_warnings = {} # a map from DataSetMergers to their warnings | |
self._dataset_errors = {} | |
self._warning_count = 0 | |
self._error_count = 0 | |
def _Report(self, merge_problem): | |
if merge_problem.IsWarning(): | |
dataset_problems = self._dataset_warnings | |
self._warning_count += 1 | |
else: | |
dataset_problems = self._dataset_errors | |
self._error_count += 1 | |
problem_html = '<li>%s</li>' % ( | |
merge_problem.FormatProblem().replace('\n', '<br>')) | |
dataset_problems.setdefault(merge_problem.dataset_merger, []).append( | |
problem_html) | |
def _GenerateStatsTable(self, feed_merger): | |
"""Generate an HTML table of merge statistics. | |
Args: | |
feed_merger: The FeedMerger instance. | |
Returns: | |
The generated HTML as a string. | |
""" | |
rows = [] | |
rows.append('<tr><th class="header"/><th class="header">Merged</th>' | |
'<th class="header">Copied from old feed</th>' | |
'<th class="header">Copied from new feed</th></tr>') | |
for merger in feed_merger.GetMergerList(): | |
stats = merger.GetMergeStats() | |
if stats is None: | |
continue | |
merged, not_merged_a, not_merged_b = stats | |
rows.append('<tr><th class="header">%s</th>' | |
'<td class="header">%d</td>' | |
'<td class="header">%d</td>' | |
'<td class="header">%d</td></tr>' % | |
(merger.DATASET_NAME, merged, not_merged_a, not_merged_b)) | |
return '<table>%s</table>' % '\n'.join(rows) | |
def _GenerateSection(self, problem_type): | |
"""Generate a listing of the given type of problems. | |
Args: | |
problem_type: The type of problem. This is one of the problem type | |
constants from transitfeed. | |
Returns: | |
The generated HTML as a string. | |
""" | |
if problem_type == transitfeed.TYPE_WARNING: | |
dataset_problems = self._dataset_warnings | |
heading = 'Warnings' | |
else: | |
dataset_problems = self._dataset_errors | |
heading = 'Errors' | |
if not dataset_problems: | |
return '' | |
prefix = '<h2 class="issueHeader">%s:</h2>' % heading | |
dataset_sections = [] | |
for dataset_merger, problems in dataset_problems.items(): | |
dataset_sections.append('<h3>%s</h3><ol>%s</ol>' % ( | |
dataset_merger.FILE_NAME, '\n'.join(problems))) | |
body = '\n'.join(dataset_sections) | |
return prefix + body | |
def _GenerateSummary(self): | |
"""Generate a summary of the warnings and errors. | |
Returns: | |
The generated HTML as a string. | |
""" | |
items = [] | |
if self._dataset_errors: | |
items.append('errors: %d' % self._error_count) | |
if self._dataset_warnings: | |
items.append('warnings: %d' % self._warning_count) | |
if items: | |
return '<p><span class="fail">%s</span></p>' % '<br>'.join(items) | |
else: | |
return '<p><span class="pass">feeds merged successfully</span></p>' | |
def WriteOutput(self, output_file, feed_merger, | |
old_feed_path, new_feed_path, merged_feed_path): | |
"""Write the HTML output to a file. | |
Args: | |
output_file: The file object that the HTML output will be written to. | |
feed_merger: The FeedMerger instance. | |
old_feed_path: The path to the old feed file as a string. | |
new_feed_path: The path to the new feed file as a string.
merged_feed_path: The path to the merged feed file as a string. This | |
may be None if no merged feed was written. | |
""" | |
if merged_feed_path is None: | |
html_merged_feed_path = '' | |
else: | |
html_merged_feed_path = '<p>Merged feed created: <code>%s</code></p>' % ( | |
merged_feed_path) | |
html_header = """<html> | |
<head> | |
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/> | |
<title>Feed Merger Results</title> | |
<style> | |
body {font-family: Georgia, serif; background-color: white} | |
.path {color: gray} | |
div.problem {max-width: 500px} | |
td,th {background-color: khaki; padding: 2px; font-family:monospace} | |
td.problem,th.problem {background-color: #dc143c; color: white; padding: 2px;
font-family:monospace} | |
table {border-spacing: 5px 0px; margin-top: 3px} | |
h3.issueHeader {padding-left: 1em} | |
span.pass {background-color: lightgreen} | |
span.fail {background-color: yellow} | |
.pass, .fail {font-size: 16pt; padding: 3px} | |
ol,.unused {padding-left: 40pt} | |
.header {background-color: white; font-family: Georgia, serif; padding: 0px} | |
th.header {text-align: right; font-weight: normal; color: gray} | |
.footer {font-size: 10pt} | |
</style> | |
</head> | |
<body> | |
<h1>Feed merger results</h1> | |
<p>Old feed: <code>%(old_feed_path)s</code></p> | |
<p>New feed: <code>%(new_feed_path)s</code></p> | |
%(html_merged_feed_path)s""" % locals() | |
html_stats = self._GenerateStatsTable(feed_merger) | |
html_summary = self._GenerateSummary() | |
html_errors = self._GenerateSection(transitfeed.TYPE_ERROR) | |
html_warnings = self._GenerateSection(transitfeed.TYPE_WARNING) | |
html_footer = """ | |
<div class="footer"> | |
Generated using transitfeed version %s on %s. | |
</div> | |
</body> | |
</html>""" % (transitfeed.__version__, | |
time.strftime('%B %d, %Y at %I:%M %p %Z')) | |
output_file.write(transitfeed.EncodeUnicode(html_header)) | |
output_file.write(transitfeed.EncodeUnicode(html_stats)) | |
output_file.write(transitfeed.EncodeUnicode(html_summary)) | |
output_file.write(transitfeed.EncodeUnicode(html_errors)) | |
output_file.write(transitfeed.EncodeUnicode(html_warnings)) | |
output_file.write(transitfeed.EncodeUnicode(html_footer)) | |
class ConsoleWarningRaiseErrorProblemReporter(transitfeed.ProblemReporterBase): | |
"""Problem reporter to use when loading feeds for merge.""" | |
def _Report(self, e): | |
if e.IsError(): | |
raise e | |
else: | |
print transitfeed.EncodeUnicode(e.FormatProblem()) | |
context = e.FormatContext() | |
if context: | |
print context | |
def LoadWithoutErrors(path, memory_db): | |
""""Return a Schedule object loaded from path; sys.exit for any error.""" | |
loading_problem_handler = ConsoleWarningRaiseErrorProblemReporter() | |
try: | |
schedule = transitfeed.Loader(path, | |
memory_db=memory_db, | |
problems=loading_problem_handler).Load() | |
except transitfeed.ExceptionWithContext, e: | |
print >>sys.stderr, ( | |
"\n\nFeeds to merge must load without any errors.\n" | |
"While loading %s the following error was found:\n%s\n%s\n" % | |
(path, e.FormatContext(), transitfeed.EncodeUnicode(e.FormatProblem()))) | |
sys.exit(1) | |
return schedule | |
class DataSetMerger(object): | |
"""A DataSetMerger is in charge of merging a set of entities. | |
This is an abstract class and should be subclassed for each different entity | |
type. | |
Attributes: | |
ENTITY_TYPE_NAME: The name of the entity type like 'agency' or 'stop'. | |
FILE_NAME: The name of the file containing this data set like 'agency.txt'. | |
DATASET_NAME: A name for the dataset like 'Agencies' or 'Stops'. | |
""" | |
def __init__(self, feed_merger): | |
"""Initialise. | |
Args: | |
feed_merger: The FeedMerger. | |
""" | |
self.feed_merger = feed_merger | |
self._num_merged = 0 | |
self._num_not_merged_a = 0 | |
self._num_not_merged_b = 0 | |
def _MergeIdentical(self, a, b): | |
"""Tries to merge two values. The values are required to be identical. | |
Args: | |
a: The first value. | |
b: The second value. | |
Returns: | |
The trivially merged value. | |
Raises: | |
MergeError: The values were not identical. | |
""" | |
if a != b: | |
raise MergeError("values must be identical ('%s' vs '%s')" % | |
(transitfeed.EncodeUnicode(a), | |
transitfeed.EncodeUnicode(b))) | |
return b | |
def _MergeIdenticalCaseInsensitive(self, a, b): | |
"""Tries to merge two strings. | |
The strings are required to be the same, ignoring case. The second string is
always used as the merged value. | |
Args: | |
a: The first string. | |
b: The second string. | |
Returns: | |
The merged string. This is equal to the second string. | |
Raises: | |
MergeError: The strings were not the same ignoring case. | |
""" | |
if a.lower() != b.lower(): | |
raise MergeError("values must be the same (case insensitive) " | |
"('%s' vs '%s')" % (transitfeed.EncodeUnicode(a), | |
transitfeed.EncodeUnicode(b))) | |
return b | |
def _MergeOptional(self, a, b): | |
"""Tries to merge two values which may be None. | |
If both values are not None, they are required to be the same and the | |
merge is trivial. If one of the values is None and the other is not None, | |
the merge results in the one which is not None. If both are None, the merge | |
results in None. | |
Args: | |
a: The first value. | |
b: The second value. | |
Returns: | |
The merged value. | |
Raises: | |
MergeError: If both values are not None and are not the same. | |
""" | |
if a and b: | |
if a != b: | |
raise MergeError("values must be identical if both specified " | |
"('%s' vs '%s')" % (transitfeed.EncodeUnicode(a), | |
transitfeed.EncodeUnicode(b))) | |
return a or b | |
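The two simplest policies, _MergeIdentical and _MergeOptional, can be sketched as plain standalone functions (raising ValueError here instead of the module's MergeError):

```python
# Standalone sketches of the attribute-merge policies used by merge schemes.
def merge_identical(a, b):
    # Both values must agree exactly; the second one is returned.
    if a != b:
        raise ValueError('values must be identical (%r vs %r)' % (a, b))
    return b

def merge_optional(a, b):
    # If both values are set they must agree; otherwise prefer the set one.
    if a and b and a != b:
        raise ValueError('values must be identical if both specified')
    return a or b
```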
def _MergeSameAgency(self, a_agency_id, b_agency_id): | |
"""Merge agency ids to the corresponding agency id in the merged schedule. | |
Args: | |
a_agency_id: an agency id from the old schedule | |
b_agency_id: an agency id from the new schedule | |
Returns: | |
The agency id of the corresponding merged agency. | |
Raises: | |
MergeError: If a_agency_id and b_agency_id do not correspond to the same | |
merged agency. | |
KeyError: Either a_agency_id or b_agency_id is not a valid agency id.
""" | |
a_agency_id = (a_agency_id or | |
self.feed_merger.a_schedule.GetDefaultAgency().agency_id) | |
b_agency_id = (b_agency_id or | |
self.feed_merger.b_schedule.GetDefaultAgency().agency_id) | |
a_agency = self.feed_merger.a_merge_map[ | |
self.feed_merger.a_schedule.GetAgency(a_agency_id)] | |
b_agency = self.feed_merger.b_merge_map[ | |
self.feed_merger.b_schedule.GetAgency(b_agency_id)] | |
if a_agency != b_agency: | |
raise MergeError('agency must be the same') | |
return a_agency.agency_id | |
def _SchemedMerge(self, scheme, a, b): | |
"""Tries to merge two entities according to a merge scheme. | |
A scheme is specified by a map where the keys are entity attributes and the | |
values are merge functions like DataSetMerger._MergeIdentical or
DataSetMerger._MergeOptional. The entity is first migrated to the merged
schedule. Then the attributes are individually merged as specified by the
scheme.
Then the attributes are individually merged as specified by the scheme. | |
Args: | |
scheme: The merge scheme, a map from entity attributes to merge | |
functions. | |
a: The entity from the old schedule. | |
b: The entity from the new schedule. | |
Returns: | |
The migrated and merged entity. | |
Raises: | |
MergeError: One of the attributes was not able to be merged. | |
""" | |
migrated = self._Migrate(b, self.feed_merger.b_schedule, False) | |
for attr, merger in scheme.items(): | |
a_attr = getattr(a, attr, None) | |
b_attr = getattr(b, attr, None) | |
try: | |
merged_attr = merger(a_attr, b_attr) | |
except MergeError, merge_error: | |
raise MergeError("Attribute '%s' could not be merged: %s." % ( | |
attr, merge_error)) | |
if migrated is not None: | |
setattr(migrated, attr, merged_attr) | |
return migrated | |
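The scheme idea, a map from attribute name to merge function, can be shown with plain objects; Entity and merge_optional here are hypothetical stand-ins for the transitfeed entity classes and the class's merge helpers:

```python
# A scheme maps attribute names to merge functions; each attribute of the two
# entities is merged independently and written onto a migrated copy.
def schemed_merge(scheme, a, b, migrated):
    for attr, merger in scheme.items():
        merged = merger(getattr(a, attr, None), getattr(b, attr, None))
        setattr(migrated, attr, merged)
    return migrated

class Entity(object):  # hypothetical stand-in for a transitfeed entity
    pass

def merge_optional(a, b):  # both-set values must agree; else take the set one
    if a and b and a != b:
        raise ValueError('conflict')
    return a or b

a, b = Entity(), Entity()
a.name, a.url = 'Main St', None
b.name, b.url = 'Main St', 'http://example.com'
m = schemed_merge({'name': merge_optional, 'url': merge_optional},
                  a, b, Entity())
```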
def _MergeSameId(self): | |
"""Tries to merge entities based on their ids. | |
This tries to merge only the entities from the old and new schedules which | |
have the same id. These are added into the merged schedule. Entities which | |
do not merge or do not have the same id as another entity in the other | |
schedule are simply migrated into the merged schedule. | |
This method is less flexible than _MergeDifferentId since it only tries | |
to merge entities which have the same id while _MergeDifferentId tries to | |
merge everything. However, it is faster and so should be used whenever | |
possible. | |
This method makes use of various methods, such as _MergeEntities and
_Migrate, which are not implemented in the abstract DataSetMerger class.
These methods should be overridden in a subclass to allow _MergeSameId to
work with different entity types.
Returns: | |
The number of merged entities. | |
""" | |
a_not_merged = [] | |
b_not_merged = [] | |
for a in self._GetIter(self.feed_merger.a_schedule): | |
try: | |
b = self._GetById(self.feed_merger.b_schedule, self._GetId(a)) | |
except KeyError: | |
# there was no entity in B with the same id as a | |
a_not_merged.append(a) | |
continue | |
try: | |
self._Add(a, b, self._MergeEntities(a, b)) | |
self._num_merged += 1 | |
except MergeError, merge_error: | |
a_not_merged.append(a) | |
b_not_merged.append(b) | |
self._ReportSameIdButNotMerged(self._GetId(a), merge_error) | |
for b in self._GetIter(self.feed_merger.b_schedule): | |
try: | |
a = self._GetById(self.feed_merger.a_schedule, self._GetId(b)) | |
except KeyError: | |
# there was no entity in A with the same id as b | |
b_not_merged.append(b) | |
# migrate the remaining entities; generate a new id for an entity whose
# original id is already used by the other schedule
for a in a_not_merged: | |
newid = self._HasId(self.feed_merger.b_schedule, self._GetId(a)) | |
self._Add(a, None, self._Migrate(a, self.feed_merger.a_schedule, newid)) | |
for b in b_not_merged: | |
newid = self._HasId(self.feed_merger.a_schedule, self._GetId(b)) | |
self._Add(None, b, self._Migrate(b, self.feed_merger.b_schedule, newid)) | |
self._num_not_merged_a = len(a_not_merged) | |
self._num_not_merged_b = len(b_not_merged) | |
return self._num_merged | |
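The first pass of _MergeSameId can be sketched over plain id-to-value dicts; this simplification merges entities sharing an id and carries everything else over unchanged, omitting the conflict-renaming step above:

```python
# Same-id merge pass over two id->value maps; merge_fn combines the two
# values when an id occurs in both schedules.
def merge_same_id(a_entities, b_entities, merge_fn):
    result = {}
    for eid, a_val in a_entities.items():
        if eid in b_entities:
            result[eid] = merge_fn(a_val, b_entities[eid])
        else:
            result[eid] = a_val  # only in the old schedule: migrate as-is
    for eid, b_val in b_entities.items():
        result.setdefault(eid, b_val)  # only in the new schedule
    return result
```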
def _MergeDifferentId(self): | |
"""Tries to merge all possible combinations of entities. | |
This tries to merge every entity in the old schedule with every entity in | |
the new schedule. Unlike _MergeSameId, the ids do not need to match. | |
However, _MergeDifferentId is much slower than _MergeSameId. | |
This method makes use of various methods, such as _MergeEntities and
_Migrate, which are not implemented in the abstract DataSetMerger class.
These methods should be overridden in a subclass to allow _MergeDifferentId
to work with different entity types.
Returns: | |
The number of merged entities. | |
""" | |
# TODO: The same entity from A could merge with multiple from B. | |
# This should either generate an error or should be prevented from | |
# happening. | |
for a in self._GetIter(self.feed_merger.a_schedule): | |
for b in self._GetIter(self.feed_merger.b_schedule): | |
try: | |
self._Add(a, b, self._MergeEntities(a, b)) | |
self._num_merged += 1 | |
except MergeError: | |
continue | |
for a in self._GetIter(self.feed_merger.a_schedule): | |
if a not in self.feed_merger.a_merge_map: | |
self._num_not_merged_a += 1 | |
newid = self._HasId(self.feed_merger.b_schedule, self._GetId(a)) | |
self._Add(a, None, | |
self._Migrate(a, self.feed_merger.a_schedule, newid)) | |
for b in self._GetIter(self.feed_merger.b_schedule): | |
if b not in self.feed_merger.b_merge_map: | |
self._num_not_merged_b += 1 | |
newid = self._HasId(self.feed_merger.a_schedule, self._GetId(b)) | |
self._Add(None, b, | |
self._Migrate(b, self.feed_merger.b_schedule, newid)) | |
return self._num_merged | |
def _ReportSameIdButNotMerged(self, entity_id, reason): | |
"""Report that two entities have the same id but could not be merged. | |
Args: | |
entity_id: The id of the entities. | |
reason: A string giving a reason why they could not be merged. | |
""" | |
self.feed_merger.problem_reporter.SameIdButNotMerged(self, | |
entity_id, | |
reason) | |
def _GetIter(self, schedule): | |
"""Returns an iterator of entities for this data set in the given schedule. | |
This method usually corresponds to one of the methods from | |
transitfeed.Schedule like GetAgencyList() or GetRouteList(). | |
Note: This method must be overwritten in a subclass if _MergeSameId or | |
_MergeDifferentId are to be used. | |
Args: | |
schedule: Either the old or new schedule from the FeedMerger. | |
Returns: | |
An iterator of entities. | |
""" | |
raise NotImplementedError() | |
def _GetById(self, schedule, entity_id): | |
"""Returns an entity given its id. | |
This method usually corresponds to one of the methods from | |
transitfeed.Schedule like GetAgency() or GetRoute(). | |
Note: This method must be overwritten in a subclass if _MergeSameId or | |
_MergeDifferentId are to be used. | |
Args: | |
schedule: Either the old or new schedule from the FeedMerger. | |
entity_id: The id string of the entity. | |
Returns: | |
The entity with the given id. | |
Raises: | |
KeyError: There is no entity with the given id.
""" | |
raise NotImplementedError() | |
def _HasId(self, schedule, entity_id): | |
"""Check if the schedule has an entity with the given id. | |
Args: | |
schedule: The transitfeed.Schedule instance to look in. | |
entity_id: The id of the entity. | |
Returns: | |
True if the schedule has an entity with the id or False if not. | |
""" | |
try: | |
self._GetById(schedule, entity_id) | |
has = True | |
except KeyError: | |
has = False | |
return has | |
  def _MergeEntities(self, a, b):
    """Tries to merge the two entities.

    Note: This method must be overwritten in a subclass if _MergeSameId or
    _MergeDifferentId are to be used.

    Args:
      a: The entity from the old schedule.
      b: The entity from the new schedule.

    Returns:
      The merged migrated entity.

    Raises:
      MergeError: The entities could not be merged.
    """
    raise NotImplementedError()

  def _Migrate(self, entity, schedule, newid):
    """Migrates the entity to the merged schedule.

    This involves copying the entity and updating any ids to point to the
    corresponding entities in the merged schedule. If newid is True then
    a unique id is generated for the migrated entity using the original id
    as a prefix.

    Note: This method must be overwritten in a subclass if _MergeSameId or
    _MergeDifferentId are to be used.

    Args:
      entity: The entity to migrate.
      schedule: The schedule from the FeedMerger that contains entity.
      newid: Whether to generate a new id (True) or keep the original (False).

    Returns:
      The migrated entity.
    """
    raise NotImplementedError()

  def _Add(self, a, b, migrated):
    """Adds the migrated entity to the merged schedule.

    If a and b are both not None, it means that a and b were merged to create
    migrated. If one of a or b is None, it means that the other was not merged
    but has been migrated. This mapping is registered with the FeedMerger.

    Note: This method must be overwritten in a subclass if _MergeSameId or
    _MergeDifferentId are to be used.

    Args:
      a: The original entity from the old schedule.
      b: The original entity from the new schedule.
      migrated: The migrated entity for the merged schedule.
    """
    raise NotImplementedError()

  def _GetId(self, entity):
    """Returns the id of the given entity.

    Note: This method must be overwritten in a subclass if _MergeSameId or
    _MergeDifferentId are to be used.

    Args:
      entity: The entity.

    Returns:
      The id of the entity as a string or None.
    """
    raise NotImplementedError()
  def MergeDataSets(self):
    """Merge the data sets.

    This method is called in FeedMerger.MergeSchedules().

    Note: This method must be overwritten in a subclass.

    Returns:
      A boolean which is False if the data set could not be merged and, as a
      result, the entire merge should be aborted. In this case, the problem
      will have been reported using the FeedMerger's problem reporter.
    """
    raise NotImplementedError()

  def GetMergeStats(self):
    """Returns some merge statistics.

    These are given as a tuple (merged, not_merged_a, not_merged_b) where
    "merged" is the number of merged entities, "not_merged_a" is the number of
    entities from the old schedule that were not merged and "not_merged_b" is
    the number of entities from the new schedule that were not merged.

    The return value can also be None, which means that there are no
    statistics for this entity type.

    The statistics are only available after MergeDataSets() has been called.

    Returns:
      Either the statistics tuple or None.
    """
    return (self._num_merged, self._num_not_merged_a, self._num_not_merged_b)

class AgencyMerger(DataSetMerger):
  """A DataSetMerger for agencies."""

  ENTITY_TYPE_NAME = 'agency'
  FILE_NAME = 'agency.txt'
  DATASET_NAME = 'Agencies'

  def _GetIter(self, schedule):
    return schedule.GetAgencyList()

  def _GetById(self, schedule, agency_id):
    return schedule.GetAgency(agency_id)

  def _MergeEntities(self, a, b):
    """Merges two agencies.

    To be merged, they are required to have the same id, name, url and
    timezone. The remaining language attribute is taken from the new agency.

    Args:
      a: The first agency.
      b: The second agency.

    Returns:
      The merged agency.

    Raises:
      MergeError: The agencies could not be merged.
    """

    def _MergeAgencyId(a_agency_id, b_agency_id):
      """Merge two agency ids.

      The only difference between this and _MergeIdentical() is that the
      values None and '' are regarded as being the same.

      Args:
        a_agency_id: The first agency id.
        b_agency_id: The second agency id.

      Returns:
        The merged agency id.

      Raises:
        MergeError: The agency ids could not be merged.
      """
      a_agency_id = a_agency_id or None
      b_agency_id = b_agency_id or None
      return self._MergeIdentical(a_agency_id, b_agency_id)

    scheme = {'agency_id': _MergeAgencyId,
              'agency_name': self._MergeIdentical,
              'agency_url': self._MergeIdentical,
              'agency_timezone': self._MergeIdentical}
    return self._SchemedMerge(scheme, a, b)

  def _Migrate(self, entity, schedule, newid):
    a = transitfeed.Agency(field_dict=entity)
    if newid:
      a.agency_id = self.feed_merger.GenerateId(entity.agency_id)
    return a

  def _Add(self, a, b, migrated):
    self.feed_merger.Register(a, b, migrated)
    self.feed_merger.merged_schedule.AddAgencyObject(migrated)

  def _GetId(self, entity):
    return entity.agency_id

  def MergeDataSets(self):
    self._MergeSameId()
    return True

class StopMerger(DataSetMerger):
  """A DataSetMerger for stops.

  Attributes:
    largest_stop_distance: The largest distance allowed between stops that
      will be merged, in metres.
  """

  ENTITY_TYPE_NAME = 'stop'
  FILE_NAME = 'stops.txt'
  DATASET_NAME = 'Stops'

  largest_stop_distance = 10.0

  def __init__(self, feed_merger):
    DataSetMerger.__init__(self, feed_merger)
    self._merged = []
    self._a_not_merged = []
    self._b_not_merged = []

  def SetLargestStopDistance(self, distance):
    """Sets largest_stop_distance."""
    self.largest_stop_distance = distance

  def _GetIter(self, schedule):
    return schedule.GetStopList()

  def _GetById(self, schedule, stop_id):
    return schedule.GetStop(stop_id)

  def _MergeEntities(self, a, b):
    """Merges two stops.

    For the stops to be merged, they must have:
      - the same stop_id
      - the same stop_name (case insensitive)
      - the same zone_id
      - locations less than largest_stop_distance apart

    The other attributes can have arbitrary changes. The merged attributes
    are taken from the new stop.

    Args:
      a: The first stop.
      b: The second stop.

    Returns:
      The merged stop.

    Raises:
      MergeError: The stops could not be merged.
    """
    distance = transitfeed.ApproximateDistanceBetweenStops(a, b)
    if distance > self.largest_stop_distance:
      raise MergeError("Stops are too far apart: %.1fm "
                       "(largest_stop_distance is %.1fm)." %
                       (distance, self.largest_stop_distance))
    scheme = {'stop_id': self._MergeIdentical,
              'stop_name': self._MergeIdenticalCaseInsensitive,
              'zone_id': self._MergeIdentical,
              'location_type': self._MergeIdentical}
    return self._SchemedMerge(scheme, a, b)

  def _Migrate(self, entity, schedule, newid):
    migrated_stop = transitfeed.Stop(field_dict=entity)
    if newid:
      migrated_stop.stop_id = self.feed_merger.GenerateId(entity.stop_id)
    return migrated_stop

  def _Add(self, a, b, migrated_stop):
    self.feed_merger.Register(a, b, migrated_stop)
    # The migrated_stop will be added to feed_merger.merged_schedule later
    # since adding must be done after the zone_ids have been finalized.
    if a and b:
      self._merged.append((a, b, migrated_stop))
    elif a:
      self._a_not_merged.append((a, migrated_stop))
    elif b:
      self._b_not_merged.append((b, migrated_stop))

  def _GetId(self, entity):
    return entity.stop_id

  def MergeDataSets(self):
    num_merged = self._MergeSameId()
    fm = self.feed_merger

    # Now we do all the zone_id and parent_station mapping.
    # The zone_ids for merged stops can be preserved.
    for (a, b, merged_stop) in self._merged:
      assert a.zone_id == b.zone_id
      fm.a_zone_map[a.zone_id] = a.zone_id
      fm.b_zone_map[b.zone_id] = b.zone_id
      merged_stop.zone_id = a.zone_id
      if merged_stop.parent_station:
        # The merged stop has a parent. Update it to be the parent it had
        # in b.
        parent_in_b = fm.b_schedule.GetStop(b.parent_station)
        merged_stop.parent_station = fm.b_merge_map[parent_in_b].stop_id
      fm.merged_schedule.AddStopObject(merged_stop)

    self._UpdateAndMigrateUnmerged(self._a_not_merged, fm.a_zone_map,
                                   fm.a_merge_map, fm.a_schedule)
    self._UpdateAndMigrateUnmerged(self._b_not_merged, fm.b_zone_map,
                                   fm.b_merge_map, fm.b_schedule)

    print 'Stops merged: %d of %d, %d' % (
        num_merged,
        len(fm.a_schedule.GetStopList()),
        len(fm.b_schedule.GetStopList()))
    return True
  def _UpdateAndMigrateUnmerged(self, not_merged_stops, zone_map, merge_map,
                                schedule):
    """Correct references in migrated unmerged stops and add to merged_schedule.

    For stops migrated from one of the input feeds to the output feed, update
    the parent_station and zone_id references to point to objects in the
    output feed. Then add the migrated stop to the new schedule.

    Args:
      not_merged_stops: A list of stops from one input feed that have not
        been merged.
      zone_map: A map from zone_id in the input feed to zone_id in the
        output feed.
      merge_map: A map from Stop objects in the input feed to Stop objects
        in the output feed.
      schedule: The input Schedule object.
    """
    # For the unmerged stops, we use an already mapped zone_id if possible.
    # If not, we generate a new one and add it to the map.
    for stop, migrated_stop in not_merged_stops:
      if stop.zone_id in zone_map:
        migrated_stop.zone_id = zone_map[stop.zone_id]
      else:
        migrated_stop.zone_id = self.feed_merger.GenerateId(stop.zone_id)
        zone_map[stop.zone_id] = migrated_stop.zone_id
      if stop.parent_station:
        parent_original = schedule.GetStop(stop.parent_station)
        migrated_stop.parent_station = merge_map[parent_original].stop_id
      self.feed_merger.merged_schedule.AddStopObject(migrated_stop)

class RouteMerger(DataSetMerger):
  """A DataSetMerger for routes."""

  ENTITY_TYPE_NAME = 'route'
  FILE_NAME = 'routes.txt'
  DATASET_NAME = 'Routes'

  def _GetIter(self, schedule):
    return schedule.GetRouteList()

  def _GetById(self, schedule, route_id):
    return schedule.GetRoute(route_id)

  def _MergeEntities(self, a, b):
    scheme = {'route_short_name': self._MergeIdentical,
              'route_long_name': self._MergeIdentical,
              'agency_id': self._MergeSameAgency,
              'route_type': self._MergeIdentical,
              'route_id': self._MergeIdentical,
              'route_url': self._MergeOptional,
              'route_color': self._MergeOptional,
              'route_text_color': self._MergeOptional}
    return self._SchemedMerge(scheme, a, b)

  def _Migrate(self, entity, schedule, newid):
    migrated_route = transitfeed.Route(field_dict=entity)
    if newid:
      migrated_route.route_id = self.feed_merger.GenerateId(entity.route_id)
    if entity.agency_id:
      original_agency = schedule.GetAgency(entity.agency_id)
    else:
      original_agency = schedule.GetDefaultAgency()
    migrated_agency = self.feed_merger.GetMergedObject(original_agency)
    migrated_route.agency_id = migrated_agency.agency_id
    return migrated_route

  def _Add(self, a, b, migrated_route):
    self.feed_merger.Register(a, b, migrated_route)
    self.feed_merger.merged_schedule.AddRouteObject(migrated_route)

  def _GetId(self, entity):
    return entity.route_id

  def MergeDataSets(self):
    self._MergeSameId()
    return True

class ServicePeriodMerger(DataSetMerger):
  """A DataSetMerger for service periods.

  Attributes:
    require_disjoint_calendars: A boolean specifying whether to require
      disjoint calendars when merging (True) or not (False).
  """

  ENTITY_TYPE_NAME = 'service period'
  FILE_NAME = 'calendar.txt/calendar_dates.txt'
  DATASET_NAME = 'Service Periods'

  def __init__(self, feed_merger):
    DataSetMerger.__init__(self, feed_merger)
    self.require_disjoint_calendars = True

  def _ReportSameIdButNotMerged(self, entity_id, reason):
    pass

  def _GetIter(self, schedule):
    return schedule.GetServicePeriodList()

  def _GetById(self, schedule, service_id):
    return schedule.GetServicePeriod(service_id)

  def _MergeEntities(self, a, b):
    """Tries to merge two service periods.

    Note: Currently this just raises a MergeError since service periods
    cannot be merged.

    Args:
      a: The first service period.
      b: The second service period.

    Returns:
      The merged service period.

    Raises:
      MergeError: When the service periods could not be merged.
    """
    raise MergeError('Cannot merge service periods')

  def _Migrate(self, original_service_period, schedule, newid):
    migrated_service_period = transitfeed.ServicePeriod()
    migrated_service_period.day_of_week = list(
        original_service_period.day_of_week)
    migrated_service_period.start_date = original_service_period.start_date
    migrated_service_period.end_date = original_service_period.end_date
    migrated_service_period.date_exceptions = dict(
        original_service_period.date_exceptions)
    if newid:
      migrated_service_period.service_id = self.feed_merger.GenerateId(
          original_service_period.service_id)
    else:
      migrated_service_period.service_id = original_service_period.service_id
    return migrated_service_period

  def _Add(self, a, b, migrated_service_period):
    self.feed_merger.Register(a, b, migrated_service_period)
    self.feed_merger.merged_schedule.AddServicePeriodObject(
        migrated_service_period)

  def _GetId(self, entity):
    return entity.service_id

  def MergeDataSets(self):
    if self.require_disjoint_calendars and not self.CheckDisjointCalendars():
      self.feed_merger.problem_reporter.CalendarsNotDisjoint(self)
      return False
    self._MergeSameId()
    self.feed_merger.problem_reporter.MergeNotImplemented(self)
    return True
  def DisjoinCalendars(self, cutoff):
    """Forces the old and new calendars to be disjoint about a cutoff date.

    This truncates the service periods of the old schedule so that service
    stops one day before the given cutoff date and truncates the new schedule
    so that service only begins on the cutoff date.

    Args:
      cutoff: The cutoff date as a string in YYYYMMDD format. The timezone
        is the same as used in the calendar.txt file.
    """

    def TruncatePeriod(service_period, start, end):
      """Truncate the service period to the range [start, end].

      Args:
        service_period: The service period to truncate.
        start: The start date as a string in YYYYMMDD format.
        end: The end date as a string in YYYYMMDD format.
      """
      service_period.start_date = max(service_period.start_date, start)
      service_period.end_date = min(service_period.end_date, end)
      dates_to_delete = []
      for k in service_period.date_exceptions:
        if (k < start) or (k > end):
          dates_to_delete.append(k)
      for k in dates_to_delete:
        del service_period.date_exceptions[k]

    # Find the date one day before the cutoff.
    year = int(cutoff[:4])
    month = int(cutoff[4:6])
    day = int(cutoff[6:8])
    cutoff_date = datetime.date(year, month, day)
    one_day_delta = datetime.timedelta(days=1)
    before = (cutoff_date - one_day_delta).strftime('%Y%m%d')

    # In Python 2 an int always compares less than a string, so 0 acts as an
    # unbounded lower limit here; '9'*8 ('99999999') is the upper limit.
    for a in self.feed_merger.a_schedule.GetServicePeriodList():
      TruncatePeriod(a, 0, before)
    for b in self.feed_merger.b_schedule.GetServicePeriodList():
      TruncatePeriod(b, cutoff, '9'*8)
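
  # Example (illustrative sketch, dates are hypothetical): with
  # cutoff='20070201', the old schedule's service periods are truncated to
  # end no later than '20070131' and the new schedule's to start no earlier
  # than '20070201', so no service date can appear in both feeds.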
  def CheckDisjointCalendars(self):
    """Check whether any old service periods intersect with any new ones.

    This is a rather coarse check based on
    transitfeed.ServicePeriod.GetDateRange.

    Returns:
      True if the calendars are disjoint or False if not.
    """
    # TODO: Do an exact check here.
    a_service_periods = self.feed_merger.a_schedule.GetServicePeriodList()
    b_service_periods = self.feed_merger.b_schedule.GetServicePeriodList()

    for a_service_period in a_service_periods:
      a_start, a_end = a_service_period.GetDateRange()
      for b_service_period in b_service_periods:
        b_start, b_end = b_service_period.GetDateRange()
        overlap_start = max(a_start, b_start)
        overlap_end = min(a_end, b_end)
        if overlap_end >= overlap_start:
          return False
    return True
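
  # Example (illustrative sketch): date ranges ('20070101', '20070131') and
  # ('20070201', '20070228') give overlap_start='20070201' and
  # overlap_end='20070131'; since overlap_end < overlap_start, the pair does
  # not intersect and the calendars are considered disjoint.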

  def GetMergeStats(self):
    return None

class FareMerger(DataSetMerger):
  """A DataSetMerger for fares."""

  ENTITY_TYPE_NAME = 'fare'
  FILE_NAME = 'fare_attributes.txt'
  DATASET_NAME = 'Fares'

  def _GetIter(self, schedule):
    return schedule.GetFareList()

  def _GetById(self, schedule, fare_id):
    return schedule.GetFare(fare_id)

  def _MergeEntities(self, a, b):
    """Merges the fares if all the attributes are the same."""
    scheme = {'price': self._MergeIdentical,
              'currency_type': self._MergeIdentical,
              'payment_method': self._MergeIdentical,
              'transfers': self._MergeIdentical,
              'transfer_duration': self._MergeIdentical}
    return self._SchemedMerge(scheme, a, b)

  def _Migrate(self, original_fare, schedule, newid):
    migrated_fare = transitfeed.Fare(
        field_list=original_fare.GetFieldValuesTuple())
    if newid:
      migrated_fare.fare_id = self.feed_merger.GenerateId(
          original_fare.fare_id)
    return migrated_fare

  def _Add(self, a, b, migrated_fare):
    self.feed_merger.Register(a, b, migrated_fare)
    self.feed_merger.merged_schedule.AddFareObject(migrated_fare)

  def _GetId(self, fare):
    return fare.fare_id

  def MergeDataSets(self):
    num_merged = self._MergeSameId()
    print 'Fares merged: %d of %d, %d' % (
        num_merged,
        len(self.feed_merger.a_schedule.GetFareList()),
        len(self.feed_merger.b_schedule.GetFareList()))
    return True

class ShapeMerger(DataSetMerger):
  """A DataSetMerger for shapes.

  In this implementation, merging shapes means just taking the new shape.
  The only conditions for a merge are that the shape_ids are the same and
  the endpoints of the old and new shapes are no further than
  largest_shape_distance apart.

  Attributes:
    largest_shape_distance: The largest distance allowed between the
      endpoints of two shapes for them to be merged, in metres.
  """

  ENTITY_TYPE_NAME = 'shape'
  FILE_NAME = 'shapes.txt'
  DATASET_NAME = 'Shapes'

  largest_shape_distance = 10.0

  def SetLargestShapeDistance(self, distance):
    """Sets largest_shape_distance."""
    self.largest_shape_distance = distance

  def _GetIter(self, schedule):
    return schedule.GetShapeList()

  def _GetById(self, schedule, shape_id):
    return schedule.GetShape(shape_id)

  def _MergeEntities(self, a, b):
    """Merges the shapes by taking the new shape.

    Args:
      a: The first transitfeed.Shape instance.
      b: The second transitfeed.Shape instance.

    Returns:
      The merged shape.

    Raises:
      MergeError: If the ids are different or if the endpoints are further
        than largest_shape_distance apart.
    """
    if a.shape_id != b.shape_id:
      raise MergeError('shape_id must be the same')

    distance = max(ApproximateDistanceBetweenPoints(a.points[0][:2],
                                                    b.points[0][:2]),
                   ApproximateDistanceBetweenPoints(a.points[-1][:2],
                                                    b.points[-1][:2]))
    if distance > self.largest_shape_distance:
      raise MergeError('The shape endpoints are too far away: %.1fm '
                       '(largest_shape_distance is %.1fm)' %
                       (distance, self.largest_shape_distance))

    return self._Migrate(b, self.feed_merger.b_schedule, False)

  def _Migrate(self, original_shape, schedule, newid):
    migrated_shape = transitfeed.Shape(original_shape.shape_id)
    if newid:
      migrated_shape.shape_id = self.feed_merger.GenerateId(
          original_shape.shape_id)
    for (lat, lon, dist) in original_shape.points:
      migrated_shape.AddPoint(lat=lat, lon=lon, distance=dist)
    return migrated_shape

  def _Add(self, a, b, migrated_shape):
    self.feed_merger.Register(a, b, migrated_shape)
    self.feed_merger.merged_schedule.AddShapeObject(migrated_shape)

  def _GetId(self, shape):
    return shape.shape_id

  def MergeDataSets(self):
    self._MergeSameId()
    return True

class TripMerger(DataSetMerger):
  """A DataSetMerger for trips.

  This implementation makes no attempt to merge trips; it simply migrates
  them all to the merged feed.
  """

  ENTITY_TYPE_NAME = 'trip'
  FILE_NAME = 'trips.txt'
  DATASET_NAME = 'Trips'

  def _ReportSameIdButNotMerged(self, trip_id, reason):
    pass

  def _GetIter(self, schedule):
    return schedule.GetTripList()

  def _GetById(self, schedule, trip_id):
    return schedule.GetTrip(trip_id)

  def _MergeEntities(self, a, b):
    """Raises a MergeError because currently trips cannot be merged."""
    raise MergeError('Cannot merge trips')

  def _Migrate(self, original_trip, schedule, newid):
    migrated_trip = transitfeed.Trip(field_dict=original_trip)
    # Make the new trip_id first. AddTripObject reports a problem if it
    # conflicts with an existing id.
    if newid:
      migrated_trip.trip_id = self.feed_merger.GenerateId(
          original_trip.trip_id)
    # The trip must be added to the schedule before copying stoptimes.
    self.feed_merger.merged_schedule.AddTripObject(migrated_trip,
                                                   validate=False)

    if schedule == self.feed_merger.a_schedule:
      merge_map = self.feed_merger.a_merge_map
    else:
      merge_map = self.feed_merger.b_merge_map

    original_route = schedule.GetRoute(original_trip.route_id)
    migrated_trip.route_id = merge_map[original_route].route_id

    original_service_period = schedule.GetServicePeriod(
        original_trip.service_id)
    migrated_trip.service_id = merge_map[original_service_period].service_id

    if original_trip.block_id:
      migrated_trip.block_id = '%s_%s' % (
          self.feed_merger.GetScheduleName(schedule),
          original_trip.block_id)

    if original_trip.shape_id:
      original_shape = schedule.GetShape(original_trip.shape_id)
      migrated_trip.shape_id = merge_map[original_shape].shape_id

    for original_stop_time in original_trip.GetStopTimes():
      migrated_stop_time = transitfeed.StopTime(
          None,
          merge_map[original_stop_time.stop],
          original_stop_time.arrival_time,
          original_stop_time.departure_time,
          original_stop_time.stop_headsign,
          original_stop_time.pickup_type,
          original_stop_time.drop_off_type,
          original_stop_time.shape_dist_traveled,
          original_stop_time.arrival_secs,
          original_stop_time.departure_secs)
      migrated_trip.AddStopTimeObject(migrated_stop_time)

    for headway_period in original_trip.GetHeadwayPeriodTuples():
      migrated_trip.AddHeadwayPeriod(*headway_period)

    return migrated_trip

  def _Add(self, a, b, migrated_trip):
    # Validate now, since it wasn't done in _Migrate.
    migrated_trip.Validate(self.feed_merger.merged_schedule.problem_reporter)
    self.feed_merger.Register(a, b, migrated_trip)

  def _GetId(self, trip):
    return trip.trip_id

  def MergeDataSets(self):
    self._MergeSameId()
    self.feed_merger.problem_reporter.MergeNotImplemented(self)
    return True

  def GetMergeStats(self):
    return None

class FareRuleMerger(DataSetMerger):
  """A DataSetMerger for fare rules."""

  ENTITY_TYPE_NAME = 'fare rule'
  FILE_NAME = 'fare_rules.txt'
  DATASET_NAME = 'Fare Rules'

  def MergeDataSets(self):
    """Merge the fare rule data sets.

    The fare rules are first migrated. Merging is done by removing any
    duplicate rules.

    Returns:
      True since fare rules can always be merged.
    """
    rules = set()
    for (schedule, merge_map, zone_map) in ([self.feed_merger.a_schedule,
                                             self.feed_merger.a_merge_map,
                                             self.feed_merger.a_zone_map],
                                            [self.feed_merger.b_schedule,
                                             self.feed_merger.b_merge_map,
                                             self.feed_merger.b_zone_map]):
      for fare in schedule.GetFareList():
        for fare_rule in fare.GetFareRuleList():
          fare_id = merge_map[schedule.GetFare(fare_rule.fare_id)].fare_id
          route_id = (fare_rule.route_id and
                      merge_map[schedule.GetRoute(fare_rule.route_id)].route_id)
          origin_id = (fare_rule.origin_id and
                       zone_map[fare_rule.origin_id])
          destination_id = (fare_rule.destination_id and
                            zone_map[fare_rule.destination_id])
          contains_id = (fare_rule.contains_id and
                         zone_map[fare_rule.contains_id])
          rules.add((fare_id, route_id, origin_id, destination_id,
                     contains_id))

    for fare_rule_tuple in rules:
      migrated_fare_rule = transitfeed.FareRule(*fare_rule_tuple)
      self.feed_merger.merged_schedule.AddFareRuleObject(migrated_fare_rule)

    if rules:
      self.feed_merger.problem_reporter.FareRulesBroken(self)
    print 'Fare Rules: union has %d fare rules' % len(rules)
    return True

  def GetMergeStats(self):
    return None

class FeedMerger(object):
  """A class for merging two whole feeds.

  This class takes two instances of transitfeed.Schedule and uses
  DataSetMerger instances to merge the feeds and produce the resultant
  merged feed.

  Attributes:
    a_schedule: The old transitfeed.Schedule instance.
    b_schedule: The new transitfeed.Schedule instance.
    problem_reporter: The merge problem reporter.
    merged_schedule: The merged transitfeed.Schedule instance.
    a_merge_map: A map from old entities to merged entities.
    b_merge_map: A map from new entities to merged entities.
    a_zone_map: A map from old zone ids to merged zone ids.
    b_zone_map: A map from new zone ids to merged zone ids.
  """

  def __init__(self, a_schedule, b_schedule, merged_schedule,
               problem_reporter=None):
    """Initialise the merger.

    Once this initialiser has been called, a_schedule and b_schedule should
    not be modified.

    Args:
      a_schedule: The old schedule, an instance of transitfeed.Schedule.
      b_schedule: The new schedule, an instance of transitfeed.Schedule.
      merged_schedule: The schedule to populate with the merged feed, an
        instance of transitfeed.Schedule.
      problem_reporter: The problem reporter, an instance of
        transitfeed.ProblemReporterBase. This can be None, in which case the
        ExceptionProblemReporter is used.
    """
    self.a_schedule = a_schedule
    self.b_schedule = b_schedule
    self.merged_schedule = merged_schedule
    self.a_merge_map = {}
    self.b_merge_map = {}
    self.a_zone_map = {}
    self.b_zone_map = {}
    self._mergers = []
    self._idnum = max(self._FindLargestIdPostfixNumber(self.a_schedule),
                      self._FindLargestIdPostfixNumber(self.b_schedule))
    if problem_reporter is not None:
      self.problem_reporter = problem_reporter
    else:
      self.problem_reporter = ExceptionProblemReporter()
  def _FindLargestIdPostfixNumber(self, schedule):
    """Finds the largest integer used as the ending of an id in the schedule.

    Args:
      schedule: The schedule to check.

    Returns:
      The maximum integer used as an ending for an id.
    """
    postfix_number_re = re.compile(r'(\d+)$')

    def ExtractPostfixNumber(entity_id):
      """Try to extract an integer from the end of entity_id.

      If entity_id is None or if there is no integer ending the id, zero is
      returned.

      Args:
        entity_id: An id string or None.

      Returns:
        An integer ending the entity_id or zero.
      """
      if entity_id is None:
        return 0
      match = postfix_number_re.search(entity_id)
      if match is not None:
        return int(match.group(1))
      else:
        return 0

    id_data_sets = {'agency_id': schedule.GetAgencyList(),
                    'stop_id': schedule.GetStopList(),
                    'route_id': schedule.GetRouteList(),
                    'trip_id': schedule.GetTripList(),
                    'service_id': schedule.GetServicePeriodList(),
                    'fare_id': schedule.GetFareList(),
                    'shape_id': schedule.GetShapeList()}

    max_postfix_number = 0
    for id_name, entity_list in id_data_sets.items():
      for entity in entity_list:
        entity_id = getattr(entity, id_name)
        postfix_number = ExtractPostfixNumber(entity_id)
        max_postfix_number = max(max_postfix_number, postfix_number)
    return max_postfix_number
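
  # Example (illustrative sketch, ids are hypothetical): for a schedule whose
  # ids include 'route_12', 'stop42' and 'agency', ExtractPostfixNumber
  # returns 12, 42 and 0 respectively, so _FindLargestIdPostfixNumber
  # returns 42.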
  def GetScheduleName(self, schedule):
    """Returns a single letter identifier for the schedule.

    This only works for the old and new schedules, which return 'a' and 'b'
    respectively. The purpose of such identifiers is for generating ids.

    Args:
      schedule: The transitfeed.Schedule instance.

    Returns:
      The schedule identifier.

    Raises:
      KeyError: schedule is not the old or new schedule.
    """
    return {self.a_schedule: 'a', self.b_schedule: 'b'}[schedule]

  def GenerateId(self, entity_id=None):
    """Generate a unique id based on the given id.

    This is done by appending a counter which is then incremented. The
    counter is initialised at the maximum number used as an ending for
    any id in the old and new schedules.

    Args:
      entity_id: The base id string. This is allowed to be None.

    Returns:
      The generated id.
    """
    self._idnum += 1
    if entity_id:
      return '%s_merged_%d' % (entity_id, self._idnum)
    else:
      return 'merged_%d' % self._idnum
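
  # Example (illustrative sketch): if self._idnum is 41 before the call,
  # GenerateId('stop_1') returns 'stop_1_merged_42' and a subsequent
  # GenerateId(None) returns 'merged_43'.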
  def Register(self, a, b, migrated_entity):
    """Registers a merge mapping.

    If a and b are both not None, it means that entities a and b were merged
    to produce migrated_entity. If exactly one of a and b is not None, it
    means that entity was not merged but simply migrated.

    The effect of a call to Register() is to update a_merge_map and
    b_merge_map according to the merge.

    Args:
      a: The entity from the old feed or None.
      b: The entity from the new feed or None.
      migrated_entity: The migrated entity.
    """
    if a is not None:
      self.a_merge_map[a] = migrated_entity
    if b is not None:
      self.b_merge_map[b] = migrated_entity

  def AddMerger(self, merger):
    """Adds a DataSetMerger to be run by MergeSchedules().

    Args:
      merger: The DataSetMerger instance.
    """
    self._mergers.append(merger)
  def AddDefaultMergers(self):
    """Adds the default DataSetMergers defined in this module."""
    self.AddMerger(AgencyMerger(self))
    self.AddMerger(StopMerger(self))
    self.AddMerger(RouteMerger(self))
    self.AddMerger(ServicePeriodMerger(self))
    self.AddMerger(FareMerger(self))
    self.AddMerger(ShapeMerger(self))
    self.AddMerger(TripMerger(self))
    self.AddMerger(FareRuleMerger(self))

  def GetMerger(self, cls):
    """Looks for an added DataSetMerger derived from the given class.

    Args:
      cls: A class derived from DataSetMerger.

    Returns:
      The matching DataSetMerger instance.

    Raises:
      LookupError: No matching DataSetMerger has been added.
    """
    for merger in self._mergers:
      if isinstance(merger, cls):
        return merger
    raise LookupError('No matching DataSetMerger found')

  def GetMergerList(self):
    """Returns the list of DataSetMerger instances that have been added."""
    return self._mergers

  def MergeSchedules(self):
    """Merge the schedules.

    This is done by running the DataSetMergers that have been added with
    AddMerger() in the order that they were added.

    Returns:
      True if the merge was successful.
    """
    for merger in self._mergers:
      if not merger.MergeDataSets():
        return False
    return True

  def GetMergedSchedule(self):
    """Returns the merged schedule.

    This will be empty before MergeSchedules() is called.

    Returns:
      The merged schedule.
    """
    return self.merged_schedule

  def GetMergedObject(self, original):
    """Returns an object that represents original in the merged schedule."""
    # TODO: I think this would be better implemented by adding a private
    # attribute to the objects in the original feeds.
    merged = (self.a_merge_map.get(original) or
              self.b_merge_map.get(original))
    if merged:
      return merged
    else:
      raise KeyError()

def main():
  """Run the merge driver program."""
  usage = \
"""%prog [options] <input GTFS a.zip> <input GTFS b.zip> <output GTFS.zip>

Merges <input GTFS a.zip> and <input GTFS b.zip> into a new GTFS file
<output GTFS.zip>.
"""
  parser = util.OptionParserLongError(
      usage=usage, version='%prog ' + transitfeed.__version__)
  parser.add_option('--cutoff_date',
                    dest='cutoff_date',
                    default=None,
                    help='a transition date from the old feed to the new '
                         'feed in the format YYYYMMDD')
  parser.add_option('--largest_stop_distance',
                    dest='largest_stop_distance',
                    default=StopMerger.largest_stop_distance,
                    help='the furthest distance two stops can be apart and '
                         'still be merged, in metres')
  parser.add_option('--largest_shape_distance',
                    dest='largest_shape_distance',
                    default=ShapeMerger.largest_shape_distance,
                    help='the furthest distance the endpoints of two shapes '
                         'can be apart and the shape still be merged, in '
                         'metres')
  parser.add_option('--html_output_path',
                    dest='html_output_path',
                    default='merge-results.html',
                    help='write the html output to this file')
  parser.add_option('--no_browser',
                    dest='no_browser',
                    action='store_true',
                    help='prevents the merge results from being opened in a '
                         'browser')
  parser.add_option('-m', '--memory_db', dest='memory_db', action='store_true',
                    help='Use in-memory sqlite db instead of a temporary file. '
                         'It is faster but uses more RAM.')
  parser.set_defaults(memory_db=False)
  (options, args) = parser.parse_args()

  if len(args) != 3:
    parser.error('You did not provide all required command line arguments.')

  old_feed_path = os.path.abspath(args[0])
  new_feed_path = os.path.abspath(args[1])
  merged_feed_path = os.path.abspath(args[2])

  if old_feed_path.find("IWantMyCrash") != -1:
    # See test/testmerge.py.
    raise Exception('For testing the merge crash handler.')

  a_schedule = LoadWithoutErrors(old_feed_path, options.memory_db)
  b_schedule = LoadWithoutErrors(new_feed_path, options.memory_db)
  merged_schedule = transitfeed.Schedule(memory_db=options.memory_db)
  problem_reporter = HTMLProblemReporter()
  feed_merger = FeedMerger(a_schedule, b_schedule, merged_schedule,
                           problem_reporter)
  feed_merger.AddDefaultMergers()
  feed_merger.GetMerger(StopMerger).SetLargestStopDistance(float(
      options.largest_stop_distance))
  feed_merger.GetMerger(ShapeMerger).SetLargestShapeDistance(float(
options.largest_shape_distance)) | |
if options.cutoff_date is not None: | |
service_period_merger = feed_merger.GetMerger(ServicePeriodMerger) | |
service_period_merger.DisjoinCalendars(options.cutoff_date) | |
if feed_merger.MergeSchedules(): | |
feed_merger.GetMergedSchedule().WriteGoogleTransitFeed(merged_feed_path) | |
else: | |
merged_feed_path = None | |
  output_file = open(options.html_output_path, 'w')
problem_reporter.WriteOutput(output_file, feed_merger, | |
old_feed_path, new_feed_path, merged_feed_path) | |
output_file.close() | |
if not options.no_browser: | |
webbrowser.open('file://%s' % os.path.abspath(options.html_output_path)) | |
if __name__ == '__main__': | |
util.RunWithCrashHandler(main) | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
""" | |
An example application that uses the transitfeed module. | |
You must provide a Google Maps API key. | |
""" | |
import BaseHTTPServer, sys, urlparse | |
import bisect | |
from gtfsscheduleviewer.marey_graph import MareyGraph | |
import gtfsscheduleviewer | |
import mimetypes | |
import os.path | |
import re | |
import signal | |
import simplejson | |
import socket | |
import time | |
import transitfeed | |
from transitfeed import util | |
import urllib | |
# By default Windows kills Python with Ctrl+Break. Instead make Ctrl+Break | |
# raise a KeyboardInterrupt. | |
if hasattr(signal, 'SIGBREAK'): | |
signal.signal(signal.SIGBREAK, signal.default_int_handler) | |
mimetypes.add_type('text/plain', '.vbs') | |
class ResultEncoder(simplejson.JSONEncoder): | |
def default(self, obj): | |
try: | |
iterable = iter(obj) | |
except TypeError: | |
pass | |
else: | |
return list(iterable) | |
return simplejson.JSONEncoder.default(self, obj) | |
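ResultEncoder overrides `default()` so that any iterable the base encoder cannot handle (sets, generators, custom sequence objects) is serialized as a JSON list. A self-contained sketch of the same idea using the stdlib `json` module as a stand-in for `simplejson`:

```python
import json

class ResultEncoder(json.JSONEncoder):
  """Serialize any otherwise-unhandled iterable as a JSON list."""
  def default(self, obj):
    try:
      iterable = iter(obj)
    except TypeError:
      pass
    else:
      return list(iterable)
    return json.JSONEncoder.default(self, obj)

# Generators are not JSON-serializable by default; default() converts
# them to lists before encoding.
print(ResultEncoder().encode((i * i for i in range(3))))  # [0, 1, 4]
```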
# Code taken from | |
# http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/425210/index_txt | |
# An alternate approach is shown at | |
# http://mail.python.org/pipermail/python-list/2003-July/212751.html | |
# but it requires multiple threads. A sqlite object can only be used from one | |
# thread. | |
class StoppableHTTPServer(BaseHTTPServer.HTTPServer): | |
def server_bind(self): | |
BaseHTTPServer.HTTPServer.server_bind(self) | |
self.socket.settimeout(1) | |
self._run = True | |
def get_request(self): | |
while self._run: | |
try: | |
sock, addr = self.socket.accept() | |
sock.settimeout(None) | |
return (sock, addr) | |
except socket.timeout: | |
pass | |
def stop(self): | |
self._run = False | |
def serve(self): | |
while self._run: | |
self.handle_request() | |
def StopToTuple(stop): | |
"""Return tuple as expected by javascript function addStopMarkerFromList""" | |
return (stop.stop_id, stop.stop_name, float(stop.stop_lat), | |
float(stop.stop_lon), stop.location_type) | |
class ScheduleRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler): | |
def do_GET(self): | |
scheme, host, path, x, params, fragment = urlparse.urlparse(self.path) | |
parsed_params = {} | |
for k in params.split('&'): | |
k = urllib.unquote(k) | |
if '=' in k: | |
k, v = k.split('=', 1) | |
parsed_params[k] = unicode(v, 'utf8') | |
else: | |
parsed_params[k] = '' | |
if path == '/': | |
return self.handle_GET_home() | |
m = re.match(r'/json/([a-z]{1,64})', path) | |
if m: | |
handler_name = 'handle_json_GET_%s' % m.group(1) | |
handler = getattr(self, handler_name, None) | |
if callable(handler): | |
return self.handle_json_wrapper_GET(handler, parsed_params) | |
# Restrict allowable file names to prevent relative path attacks etc | |
m = re.match(r'/file/([a-z0-9_-]{1,64}\.?[a-z0-9_-]{1,64})$', path) | |
if m and m.group(1): | |
try: | |
f, mime_type = self.OpenFile(m.group(1)) | |
return self.handle_static_file_GET(f, mime_type) | |
except IOError, e: | |
print "Error: unable to open %s" % m.group(1) | |
# Ignore and treat as 404 | |
m = re.match(r'/([a-z]{1,64})', path) | |
if m: | |
handler_name = 'handle_GET_%s' % m.group(1) | |
handler = getattr(self, handler_name, None) | |
if callable(handler): | |
return handler(parsed_params) | |
return self.handle_GET_default(parsed_params, path) | |
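do_GET hand-rolls its query-string parsing: split on '&', unquote each piece, then split off an optional value, with bare keys mapping to ''. The same logic isolated as a sketch (using Python 3's `urllib.parse.unquote` in place of the Python 2 `urllib.unquote`, and omitting the explicit UTF-8 decode the original needs):

```python
from urllib.parse import unquote  # urllib.unquote in the Python 2 original

def parse_query(query):
  parsed = {}
  for k in query.split('&'):
    k = unquote(k)
    if '=' in k:
      # 'route=12' -> parsed['route'] = '12'
      k, v = k.split('=', 1)
      parsed[k] = v
    else:
      # A bare key like 'all' maps to the empty string.
      parsed[k] = ''
  return parsed

print(parse_query('route=12&time=3600&all'))
```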
def OpenFile(self, filename): | |
"""Try to open filename in the static files directory of this server. | |
Return a tuple (file object, string mime_type) or raise an exception.""" | |
(mime_type, encoding) = mimetypes.guess_type(filename) | |
assert mime_type | |
# A crude guess of when we should use binary mode. Without it non-unix | |
# platforms may corrupt binary files. | |
if mime_type.startswith('text/'): | |
mode = 'r' | |
else: | |
mode = 'rb' | |
return open(os.path.join(self.server.file_dir, filename), mode), mime_type | |
def handle_GET_default(self, parsed_params, path): | |
self.send_error(404) | |
def handle_static_file_GET(self, fh, mime_type): | |
content = fh.read() | |
self.send_response(200) | |
self.send_header('Content-Type', mime_type) | |
self.send_header('Content-Length', str(len(content))) | |
self.end_headers() | |
self.wfile.write(content) | |
def AllowEditMode(self): | |
return False | |
def handle_GET_home(self): | |
schedule = self.server.schedule | |
(min_lat, min_lon, max_lat, max_lon) = schedule.GetStopBoundingBox() | |
forbid_editing = ('true', 'false')[self.AllowEditMode()] | |
agency = ', '.join(a.agency_name for a in schedule.GetAgencyList()).encode('utf-8') | |
key = self.server.key | |
host = self.server.host | |
# A very simple template system. For a fixed set of values replace [xxx] | |
# with the value of local variable xxx | |
f, _ = self.OpenFile('index.html') | |
content = f.read() | |
for v in ('agency', 'min_lat', 'min_lon', 'max_lat', 'max_lon', 'key', | |
'host', 'forbid_editing'): | |
content = content.replace('[%s]' % v, str(locals()[v])) | |
self.send_response(200) | |
self.send_header('Content-Type', 'text/html') | |
self.send_header('Content-Length', str(len(content))) | |
self.end_headers() | |
self.wfile.write(content) | |
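The "very simple template system" in handle_GET_home is a series of string replacements: each `[name]` placeholder is swapped for `str(value)`. A tiny standalone sketch (the HTML and values here are illustrative, not taken from index.html):

```python
def fill_template(content, values):
  # Replace each [name] placeholder with str(value), as handle_GET_home
  # does with its local variables.
  for name, value in values.items():
    content = content.replace('[%s]' % name, str(value))
  return content

html = '<title>[agency]</title><body data-lat="[min_lat]">'
print(fill_template(html, {'agency': 'Demo Transit', 'min_lat': -33.9}))
```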
def handle_json_GET_routepatterns(self, params): | |
"""Given a route_id generate a list of patterns of the route. For each | |
pattern include some basic information and a few sample trips.""" | |
schedule = self.server.schedule | |
route = schedule.GetRoute(params.get('route', None)) | |
if not route: | |
self.send_error(404) | |
return | |
time = int(params.get('time', 0)) | |
sample_size = 3 # For each pattern return the start time for this many trips | |
pattern_id_trip_dict = route.GetPatternIdTripDict() | |
patterns = [] | |
for pattern_id, trips in pattern_id_trip_dict.items(): | |
time_stops = trips[0].GetTimeStops() | |
if not time_stops: | |
continue | |
      has_non_zero_trip_type = False
for trip in trips: | |
if trip['trip_type'] and trip['trip_type'] != '0': | |
has_non_zero_trip_type = True | |
      name = u'%s to %s, %d stops' % (time_stops[0][2].stop_name,
                                      time_stops[-1][2].stop_name,
                                      len(time_stops))
transitfeed.SortListOfTripByTime(trips) | |
num_trips = len(trips) | |
if num_trips <= sample_size: | |
start_sample_index = 0 | |
num_after_sample = 0 | |
else: | |
# Will return sample_size trips that start after the 'time' param. | |
# Linear search because I couldn't find a built-in way to do a binary | |
# search with a custom key. | |
start_sample_index = len(trips) | |
for i, trip in enumerate(trips): | |
if trip.GetStartTime() >= time: | |
start_sample_index = i | |
break | |
num_after_sample = num_trips - (start_sample_index + sample_size) | |
if num_after_sample < 0: | |
# Less than sample_size trips start after 'time' so return all the | |
# last sample_size trips. | |
num_after_sample = 0 | |
start_sample_index = num_trips - sample_size | |
sample = [] | |
for t in trips[start_sample_index:start_sample_index + sample_size]: | |
sample.append( (t.GetStartTime(), t.trip_id) ) | |
patterns.append((name, pattern_id, start_sample_index, sample, | |
num_after_sample, (0,1)[has_non_zero_trip_type])) | |
patterns.sort() | |
return patterns | |
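The sampling logic in handle_json_GET_routepatterns picks the first sample_size trips that start at or after the 'time' parameter, clamping to the last sample_size trips when fewer than that remain. Extracted as a sketch over bare start times (a hypothetical helper, not part of the module):

```python
def sample_window(start_times, time, sample_size=3):
  """Return (start_sample_index, num_after_sample) for a sorted list of
  trip start times, mirroring the clamping logic above."""
  num_trips = len(start_times)
  if num_trips <= sample_size:
    return 0, 0
  # Linear search for the first trip starting at or after 'time'.
  start = num_trips
  for i, t in enumerate(start_times):
    if t >= time:
      start = i
      break
  num_after = num_trips - (start + sample_size)
  if num_after < 0:
    # Fewer than sample_size trips start after 'time': take the last ones.
    num_after = 0
    start = num_trips - sample_size
  return start, num_after

print(sample_window([100, 200, 300, 400, 500], 250))  # (2, 0)
```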
def handle_json_wrapper_GET(self, handler, parsed_params): | |
"""Call handler and output the return value in JSON.""" | |
schedule = self.server.schedule | |
result = handler(parsed_params) | |
content = ResultEncoder().encode(result) | |
self.send_response(200) | |
self.send_header('Content-Type', 'text/plain') | |
self.send_header('Content-Length', str(len(content))) | |
self.end_headers() | |
self.wfile.write(content) | |
def handle_json_GET_routes(self, params): | |
"""Return a list of all routes.""" | |
schedule = self.server.schedule | |
result = [] | |
for r in schedule.GetRouteList(): | |
result.append( (r.route_id, r.route_short_name, r.route_long_name) ) | |
result.sort(key = lambda x: x[1:3]) | |
return result | |
def handle_json_GET_routerow(self, params): | |
schedule = self.server.schedule | |
route = schedule.GetRoute(params.get('route', None)) | |
return [transitfeed.Route._FIELD_NAMES, route.GetFieldValuesTuple()] | |
def handle_json_GET_triprows(self, params): | |
"""Return a list of rows from the feed file that are related to this | |
trip.""" | |
schedule = self.server.schedule | |
try: | |
trip = schedule.GetTrip(params.get('trip', None)) | |
except KeyError: | |
      # if a non-existent trip is searched for, then return nothing
return | |
route = schedule.GetRoute(trip.route_id) | |
trip_row = dict(trip.iteritems()) | |
route_row = dict(route.iteritems()) | |
return [['trips.txt', trip_row], ['routes.txt', route_row]] | |
def handle_json_GET_tripstoptimes(self, params): | |
schedule = self.server.schedule | |
try: | |
trip = schedule.GetTrip(params.get('trip')) | |
except KeyError: | |
      # if a non-existent trip is searched for, then return nothing
return | |
time_stops = trip.GetTimeStops() | |
stops = [] | |
times = [] | |
for arr,dep,stop in time_stops: | |
stops.append(StopToTuple(stop)) | |
times.append(arr) | |
return [stops, times] | |
def handle_json_GET_tripshape(self, params): | |
schedule = self.server.schedule | |
try: | |
trip = schedule.GetTrip(params.get('trip')) | |
except KeyError: | |
      # if a non-existent trip is searched for, then return nothing
return | |
points = [] | |
if trip.shape_id: | |
shape = schedule.GetShape(trip.shape_id) | |
for (lat, lon, dist) in shape.points: | |
points.append((lat, lon)) | |
else: | |
time_stops = trip.GetTimeStops() | |
for arr,dep,stop in time_stops: | |
points.append((stop.stop_lat, stop.stop_lon)) | |
return points | |
def handle_json_GET_neareststops(self, params): | |
"""Return a list of the nearest 'limit' stops to 'lat', 'lon'""" | |
schedule = self.server.schedule | |
lat = float(params.get('lat')) | |
lon = float(params.get('lon')) | |
limit = int(params.get('limit')) | |
stops = schedule.GetNearestStops(lat=lat, lon=lon, n=limit) | |
return [StopToTuple(s) for s in stops] | |
def handle_json_GET_boundboxstops(self, params): | |
"""Return a list of up to 'limit' stops within bounding box with 'n','e' | |
and 's','w' in the NE and SW corners. Does not handle boxes crossing | |
longitude line 180.""" | |
schedule = self.server.schedule | |
n = float(params.get('n')) | |
e = float(params.get('e')) | |
s = float(params.get('s')) | |
w = float(params.get('w')) | |
limit = int(params.get('limit')) | |
stops = schedule.GetStopsInBoundingBox(north=n, east=e, south=s, west=w, n=limit) | |
return [StopToTuple(s) for s in stops] | |
def handle_json_GET_stopsearch(self, params): | |
schedule = self.server.schedule | |
    query = params.get('q', '').lower()
matches = [] | |
for s in schedule.GetStopList(): | |
if s.stop_id.lower().find(query) != -1 or s.stop_name.lower().find(query) != -1: | |
matches.append(StopToTuple(s)) | |
return matches | |
def handle_json_GET_stoptrips(self, params): | |
"""Given a stop_id and time in seconds since midnight return the next | |
trips to visit the stop.""" | |
schedule = self.server.schedule | |
stop = schedule.GetStop(params.get('stop', None)) | |
time = int(params.get('time', 0)) | |
time_trips = stop.GetStopTimeTrips(schedule) | |
time_trips.sort() # OPT: use bisect.insort to make this O(N*ln(N)) -> O(N) | |
# Keep the first 5 after param 'time'. | |
    # Need to make a tuple to find the correct bisect point
time_trips = time_trips[bisect.bisect_left(time_trips, (time, 0)):] | |
time_trips = time_trips[:5] | |
# TODO: combine times for a route to show next 2 departure times | |
result = [] | |
for time, (trip, index), tp in time_trips: | |
headsign = None | |
# Find the most recent headsign from the StopTime objects | |
for stoptime in trip.GetStopTimes()[index::-1]: | |
if stoptime.stop_headsign: | |
headsign = stoptime.stop_headsign | |
break | |
# If stop_headsign isn't found, look for a trip_headsign | |
if not headsign: | |
headsign = trip.trip_headsign | |
route = schedule.GetRoute(trip.route_id) | |
trip_name = '' | |
if route.route_short_name: | |
trip_name += route.route_short_name | |
if route.route_long_name: | |
if len(trip_name): | |
trip_name += " - " | |
trip_name += route.route_long_name | |
if headsign: | |
trip_name += " (Direction: %s)" % headsign | |
result.append((time, (trip.trip_id, trip_name, trip.service_id), tp)) | |
return result | |
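The bisect in handle_json_GET_stoptrips pads the search key with a 0 so the key sorts before any real (time, trip, ...) entry with the same time; the Python 2 original relies on mixed-type comparison for that tie-break. A small sketch with distinct sample times, so the tie case never arises (the data here is illustrative):

```python
import bisect

# Departures sorted by (time, ...); find the first at or after 3650.
time_trips = [(3600, ('t1', 0)), (3660, ('t2', 0)), (3900, ('t3', 0))]
i = bisect.bisect_left(time_trips, (3650, 0))
print(time_trips[i:i + 2])  # [(3660, ('t2', 0)), (3900, ('t3', 0))]
```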
def handle_GET_ttablegraph(self,params): | |
"""Draw a Marey graph in SVG for a pattern (collection of trips in a route | |
that visit the same sequence of stops).""" | |
schedule = self.server.schedule | |
marey = MareyGraph() | |
trip = schedule.GetTrip(params.get('trip', None)) | |
route = schedule.GetRoute(trip.route_id) | |
height = int(params.get('height', 300)) | |
if not route: | |
print 'no such route' | |
self.send_error(404) | |
return | |
pattern_id_trip_dict = route.GetPatternIdTripDict() | |
pattern_id = trip.pattern_id | |
if pattern_id not in pattern_id_trip_dict: | |
print 'no pattern %s found in %s' % (pattern_id, pattern_id_trip_dict.keys()) | |
self.send_error(404) | |
return | |
triplist = pattern_id_trip_dict[pattern_id] | |
pattern_start_time = min((t.GetStartTime() for t in triplist)) | |
pattern_end_time = max((t.GetEndTime() for t in triplist)) | |
    marey.SetSpan(pattern_start_time, pattern_end_time)
marey.Draw(triplist[0].GetPattern(), triplist, height) | |
content = marey.Draw() | |
self.send_response(200) | |
self.send_header('Content-Type', 'image/svg+xml') | |
self.send_header('Content-Length', str(len(content))) | |
self.end_headers() | |
self.wfile.write(content) | |
def FindPy2ExeBase(): | |
"""If this is running in py2exe return the install directory else return | |
None""" | |
# py2exe puts gtfsscheduleviewer in library.zip. For py2exe setup.py is | |
# configured to put the data next to library.zip. | |
windows_ending = gtfsscheduleviewer.__file__.find('\\library.zip\\') | |
if windows_ending != -1: | |
    # Slice the same string the index was found in.
    return gtfsscheduleviewer.__file__[:windows_ending]
else: | |
return None | |
def FindDefaultFileDir(): | |
"""Return the path of the directory containing the static files. By default | |
the directory is called 'files'. The location depends on where setup.py put | |
it.""" | |
base = FindPy2ExeBase() | |
if base: | |
return os.path.join(base, 'schedule_viewer_files') | |
else: | |
# For all other distributions 'files' is in the gtfsscheduleviewer | |
# directory. | |
base = os.path.dirname(gtfsscheduleviewer.__file__) # Strip __init__.py | |
return os.path.join(base, 'files') | |
def GetDefaultKeyFilePath(): | |
"""In py2exe return absolute path of file in the base directory and in all | |
other distributions return relative path 'key.txt'""" | |
windows_base = FindPy2ExeBase() | |
if windows_base: | |
return os.path.join(windows_base, 'key.txt') | |
else: | |
return 'key.txt' | |
def main(RequestHandlerClass = ScheduleRequestHandler): | |
usage = \ | |
'''%prog [options] [<input GTFS.zip>] | |
Runs a webserver that lets you explore a <input GTFS.zip> in your browser. | |
If <input GTFS.zip> is omitted, the filename is read from the console. Dragging
a file into the console window may enter the filename.
''' | |
parser = util.OptionParserLongError( | |
usage=usage, version='%prog '+transitfeed.__version__) | |
parser.add_option('--feed_filename', '--feed', dest='feed_filename', | |
help='file name of feed to load') | |
parser.add_option('--key', dest='key', | |
help='Google Maps API key or the name ' | |
'of a text file that contains an API key') | |
parser.add_option('--host', dest='host', help='Host name of Google Maps') | |
parser.add_option('--port', dest='port', type='int', | |
help='port on which to listen') | |
parser.add_option('--file_dir', dest='file_dir', | |
help='directory containing static files') | |
parser.add_option('-n', '--noprompt', action='store_false', | |
dest='manual_entry', | |
help='disable interactive prompts') | |
parser.set_defaults(port=8765, | |
host='maps.google.com', | |
file_dir=FindDefaultFileDir(), | |
manual_entry=True) | |
(options, args) = parser.parse_args() | |
if not os.path.isfile(os.path.join(options.file_dir, 'index.html')): | |
print "Can't find index.html with --file_dir=%s" % options.file_dir | |
exit(1) | |
if not options.feed_filename and len(args) == 1: | |
options.feed_filename = args[0] | |
if not options.feed_filename and options.manual_entry: | |
options.feed_filename = raw_input('Enter Feed Location: ').strip('"') | |
default_key_file = GetDefaultKeyFilePath() | |
if not options.key and os.path.isfile(default_key_file): | |
options.key = open(default_key_file).read().strip() | |
if options.key and os.path.isfile(options.key): | |
options.key = open(options.key).read().strip() | |
schedule = transitfeed.Schedule(problem_reporter=transitfeed.ProblemReporter()) | |
print 'Loading data from feed "%s"...' % options.feed_filename | |
print '(this may take a few minutes for larger cities)' | |
schedule.Load(options.feed_filename) | |
server = StoppableHTTPServer(server_address=('', options.port), | |
RequestHandlerClass=RequestHandlerClass) | |
server.key = options.key | |
server.schedule = schedule | |
server.file_dir = options.file_dir | |
server.host = options.host | |
server.feed_path = options.feed_filename | |
print ("To view, point your browser at http://localhost:%d/" % | |
(server.server_port)) | |
server.serve_forever() | |
if __name__ == '__main__': | |
main() | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
""" | |
This script can be used to create a source distribution, binary distribution | |
or Windows executable files. The output is put in dist/ | |
See | |
http://code.google.com/p/googletransitdatafeed/wiki/BuildingPythonWindowsExecutables | |
for help on creating Windows executables. | |
""" | |
from distutils.core import setup | |
import glob | |
import os.path | |
from transitfeed import __version__ as VERSION | |
try: | |
import py2exe | |
has_py2exe = True | |
except ImportError, e: | |
# Won't be able to generate win32 exe | |
has_py2exe = False | |
# py2exe doesn't automatically include pytz dependency because it is optional | |
options = {'py2exe': {'packages': ['pytz']}} | |
scripts_for_py2exe = ['feedvalidator.py', 'schedule_viewer.py', 'kmlparser.py', | |
'kmlwriter.py', 'merge.py', 'unusual_trip_filter.py'] | |
# On Nov 23, 2009 Tom Brown said: I'm not confident that we can include a | |
# working copy of this script in the py2exe distribution because it depends on | |
# ogr. I do want it included in the source tar.gz. | |
scripts_for_source_only = ['shape_importer.py'] | |
kwargs = {} | |
if has_py2exe: | |
kwargs['console'] = scripts_for_py2exe | |
# py2exe seems to ignore package_data and not add marey_graph. This makes it | |
# work. | |
kwargs['data_files'] = \ | |
[('schedule_viewer_files', | |
glob.glob(os.path.join('gtfsscheduleviewer', 'files', '*')))] | |
options['py2exe'] = {'dist_dir': 'transitfeed-windows-binary-%s' % VERSION} | |
setup( | |
version=VERSION, | |
name='transitfeed', | |
url='http://code.google.com/p/googletransitdatafeed/', | |
download_url='http://googletransitdatafeed.googlecode.com/' | |
'files/transitfeed-%s.tar.gz' % VERSION, | |
maintainer='Tom Brown', | |
maintainer_email='tom.brown.code@gmail.com', | |
description='Google Transit Feed Specification library and tools', | |
long_description='This module provides a library for reading, writing and ' | |
'validating Google Transit Feed Specification files. It includes some ' | |
'scripts that validate a feed, display it using the Google Maps API and ' | |
'the start of a KML importer and exporter.', | |
platforms='OS Independent', | |
license='Apache License, Version 2.0', | |
packages=['gtfsscheduleviewer', 'transitfeed'], | |
# Also need to list package_data contents in MANIFEST.in for it to be | |
# included in sdist. See "[Distutils] package_data not used by sdist | |
# command" Feb 2, 2007 | |
package_data={'gtfsscheduleviewer': ['files/*']}, | |
scripts=scripts_for_py2exe + scripts_for_source_only, | |
zip_safe=False, | |
classifiers=[ | |
'Development Status :: 4 - Beta', | |
'Intended Audience :: Developers', | |
'Intended Audience :: Information Technology', | |
'Intended Audience :: Other Audience', | |
'License :: OSI Approved :: Apache Software License', | |
'Operating System :: OS Independent', | |
'Programming Language :: Python', | |
'Topic :: Scientific/Engineering :: GIS', | |
'Topic :: Software Development :: Libraries :: Python Modules' | |
], | |
options=options, | |
**kwargs | |
) | |
if has_py2exe: | |
# Sometime between pytz-2008a and pytz-2008i common_timezones started to | |
# include only names of zones with a corresponding data file in zoneinfo. | |
# pytz installs the zoneinfo directory tree in the same directory | |
# as the pytz/__init__.py file. These data files are loaded using | |
# pkg_resources.resource_stream. py2exe does not copy this to library.zip so | |
# resource_stream can't find the files and common_timezones is empty when | |
# read in the py2exe executable. | |
# This manually copies zoneinfo into the zip. See also | |
# http://code.google.com/p/googletransitdatafeed/issues/detail?id=121 | |
import pytz | |
import zipfile | |
# Make sure the layout of pytz hasn't changed | |
assert (pytz.__file__.endswith('__init__.pyc') or | |
pytz.__file__.endswith('__init__.py')), pytz.__file__ | |
zoneinfo_dir = os.path.join(os.path.dirname(pytz.__file__), 'zoneinfo') | |
# '..\\Lib\\pytz\\__init__.py' -> '..\\Lib' | |
disk_basedir = os.path.dirname(os.path.dirname(pytz.__file__)) | |
zipfile_path = os.path.join(options['py2exe']['dist_dir'], 'library.zip') | |
z = zipfile.ZipFile(zipfile_path, 'a') | |
for absdir, directories, filenames in os.walk(zoneinfo_dir): | |
assert absdir.startswith(disk_basedir), (absdir, disk_basedir) | |
zip_dir = absdir[len(disk_basedir):] | |
for f in filenames: | |
z.write(os.path.join(absdir, f), os.path.join(zip_dir, f)) | |
z.close() | |
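The zoneinfo workaround above walks a directory tree and stores each file in the zip under its path relative to a base directory, preserving the package layout. The same walk as a self-contained sketch with a throwaway tree standing in for pytz/zoneinfo (the helper name is hypothetical):

```python
import os
import tempfile
import zipfile

def add_tree_to_zip(zip_path, tree_dir, base_dir):
  # Store each file under its path relative to base_dir, as the py2exe
  # zoneinfo fix does with library.zip.
  z = zipfile.ZipFile(zip_path, 'a')
  for absdir, directories, filenames in os.walk(tree_dir):
    assert absdir.startswith(base_dir), (absdir, base_dir)
    zip_dir = absdir[len(base_dir):].lstrip(os.sep)
    for f in filenames:
      z.write(os.path.join(absdir, f), os.path.join(zip_dir, f))
  z.close()

base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, 'pkg', 'data'))
with open(os.path.join(base, 'pkg', 'data', 'zones.txt'), 'w') as f:
  f.write('UTC\n')
zip_path = os.path.join(base, 'library.zip')
add_tree_to_zip(zip_path, os.path.join(base, 'pkg'), base)
print(zipfile.ZipFile(zip_path).namelist())
```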
#!/usr/bin/python2.4 | |
# | |
# Copyright 2007 Google Inc. All Rights Reserved. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""A utility program to help add shapes to an existing GTFS feed. | |
Requires the ogr python package. | |
""" | |
__author__ = 'chris.harrelson.code@gmail.com (Chris Harrelson)' | |
import csv | |
import glob | |
import ogr | |
import os | |
import shutil | |
import sys | |
import tempfile | |
import transitfeed | |
from transitfeed import shapelib | |
from transitfeed import util | |
import zipfile | |
class ShapeImporterError(Exception): | |
pass | |
def PrintColumns(shapefile): | |
""" | |
Print the columns of layer 0 of the shapefile to the screen. | |
""" | |
ds = ogr.Open(shapefile) | |
layer = ds.GetLayer(0) | |
if len(layer) == 0: | |
raise ShapeImporterError("Layer 0 has no elements!") | |
feature = layer.GetFeature(0) | |
  print "%d fields" % feature.GetFieldCount()
for j in range(0, feature.GetFieldCount()): | |
print '--' + feature.GetFieldDefnRef(j).GetName() + \ | |
': ' + feature.GetFieldAsString(j) | |
def AddShapefile(shapefile, graph, key_cols): | |
""" | |
Adds shapes found in the given shape filename to the given polyline | |
graph object. | |
""" | |
ds = ogr.Open(shapefile) | |
layer = ds.GetLayer(0) | |
for i in range(0, len(layer)): | |
feature = layer.GetFeature(i) | |
geometry = feature.GetGeometryRef() | |
if key_cols: | |
key_list = [] | |
for col in key_cols: | |
key_list.append(str(feature.GetField(col))) | |
shape_id = '-'.join(key_list) | |
else: | |
shape_id = '%s-%d' % (shapefile, i) | |
poly = shapelib.Poly(name=shape_id) | |
for j in range(0, geometry.GetPointCount()): | |
(lat, lng) = (round(geometry.GetY(j), 15), round(geometry.GetX(j), 15)) | |
poly.AddPoint(shapelib.Point.FromLatLng(lat, lng)) | |
graph.AddPoly(poly) | |
return graph | |
def GetMatchingShape(pattern_poly, trip, matches, max_distance, verbosity=0): | |
""" | |
Tries to find a matching shape for the given pattern Poly object, | |
trip, and set of possibly matching Polys from which to choose a match. | |
""" | |
if len(matches) == 0: | |
print ('No matching shape found within max-distance %d for trip %s ' | |
% (max_distance, trip.trip_id)) | |
return None | |
if verbosity >= 1: | |
for match in matches: | |
print "match: size %d" % match.GetNumPoints() | |
scores = [(pattern_poly.GreedyPolyMatchDist(match), match) | |
for match in matches] | |
scores.sort() | |
if scores[0][0] > max_distance: | |
print ('No matching shape found within max-distance %d for trip %s ' | |
'(min score was %f)' | |
% (max_distance, trip.trip_id, scores[0][0])) | |
return None | |
return scores[0][1] | |
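GetMatchingShape scores every candidate with GreedyPolyMatchDist, sorts the (distance, shape) pairs, and accepts the closest one only if it is within max_distance. The selection step isolated as a sketch over plain (distance, name) pairs (a hypothetical helper, not part of the module):

```python
def best_match(scores, max_distance):
  """scores: (distance, shape_name) pairs like those built in
  GetMatchingShape. Return the closest name within max_distance, else None."""
  if not scores:
    return None
  scores = sorted(scores)
  if scores[0][0] > max_distance:
    return None
  return scores[0][1]

print(best_match([(120.0, 'shape_a'), (40.0, 'shape_b')], 150))  # shape_b
```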
def AddExtraShapes(extra_shapes_txt, graph): | |
""" | |
Add extra shapes into our input set by parsing them out of a GTFS-formatted | |
shapes.txt file. Useful for manually adding lines to a shape file, since it's | |
a pain to edit .shp files. | |
""" | |
print "Adding extra shapes from %s" % extra_shapes_txt | |
  tmpdir = tempfile.mkdtemp()
  try:
    shutil.copy(extra_shapes_txt, os.path.join(tmpdir, 'shapes.txt'))
loader = transitfeed.ShapeLoader(tmpdir) | |
schedule = loader.Load() | |
for shape in schedule.GetShapeList(): | |
print "Adding extra shape: %s" % shape.shape_id | |
graph.AddPoly(ShapeToPoly(shape)) | |
finally: | |
if tmpdir: | |
shutil.rmtree(tmpdir) | |
# Note: this method lives here to avoid cross-dependencies between | |
# shapelib and transitfeed. | |
def ShapeToPoly(shape): | |
poly = shapelib.Poly(name=shape.shape_id) | |
for lat, lng, distance in shape.points: | |
point = shapelib.Point.FromLatLng(round(lat, 15), round(lng, 15)) | |
poly.AddPoint(point) | |
return poly | |
def ValidateArgs(options_parser, options, args): | |
if not (args and options.source_gtfs and options.dest_gtfs): | |
options_parser.error("You must specify a source and dest GTFS file, " | |
"and at least one source shapefile") | |
def DefineOptions(): | |
usage = \ | |
"""%prog [options] --source_gtfs=<input GTFS.zip> --dest_gtfs=<output GTFS.zip>\ | |
<input.shp> [<input.shp>...] | |
Try to match shapes in one or more SHP files to trips in a GTFS file.""" | |
options_parser = util.OptionParserLongError( | |
usage=usage, version='%prog '+transitfeed.__version__) | |
options_parser.add_option("--print_columns", | |
action="store_true", | |
default=False, | |
dest="print_columns", | |
help="Print column names in shapefile DBF and exit") | |
options_parser.add_option("--keycols", | |
default="", | |
dest="keycols", | |
                            help="Comma-separated list of the column names "
                                 "used to index shape ids")
options_parser.add_option("--max_distance", | |
type="int", | |
default=150, | |
dest="max_distance", | |
help="Max distance from a shape to which to match") | |
options_parser.add_option("--source_gtfs", | |
default="", | |
dest="source_gtfs", | |
metavar="FILE", | |
help="Read input GTFS from FILE") | |
options_parser.add_option("--dest_gtfs", | |
default="", | |
dest="dest_gtfs", | |
metavar="FILE", | |
help="Write output GTFS with shapes to FILE") | |
options_parser.add_option("--extra_shapes", | |
default="", | |
dest="extra_shapes", | |
metavar="FILE", | |
help="Extra shapes.txt (CSV) formatted file") | |
options_parser.add_option("--verbosity", | |
type="int", | |
default=0, | |
dest="verbosity", | |
help="Verbosity level. Higher is more verbose") | |
return options_parser | |
def main(key_cols): | |
print 'Parsing shapefile(s)...' | |
graph = shapelib.PolyGraph() | |
for arg in args: | |
print ' ' + arg | |
AddShapefile(arg, graph, key_cols) | |
if options.extra_shapes: | |
AddExtraShapes(options.extra_shapes, graph) | |
print 'Loading GTFS from %s...' % options.source_gtfs | |
schedule = transitfeed.Loader(options.source_gtfs).Load() | |
shape_count = 0 | |
pattern_count = 0 | |
verbosity = options.verbosity | |
print 'Matching shapes to trips...' | |
for route in schedule.GetRouteList(): | |
print 'Processing route', route.route_short_name | |
patterns = route.GetPatternIdTripDict() | |
for pattern_id, trips in patterns.iteritems(): | |
pattern_count += 1 | |
pattern = trips[0].GetPattern() | |
poly_points = [shapelib.Point.FromLatLng(p.stop_lat, p.stop_lon) | |
for p in pattern] | |
if verbosity >= 2: | |
print "\npattern %d, %d points:" % (pattern_id, len(poly_points)) | |
for i, (stop, point) in enumerate(zip(pattern, poly_points)): | |
print "Stop %d '%s': %s" % (i + 1, stop.stop_name, point.ToLatLng()) | |
# First, try to find polys that run all the way from | |
# the start of the trip to the end. | |
matches = graph.FindMatchingPolys(poly_points[0], poly_points[-1], | |
options.max_distance) | |
if not matches: | |
# Try to find a path through the graph, joining | |
# multiple edges to find a path that covers all the | |
# points in the trip. Some shape files are structured | |
# this way, with a polyline for each segment between | |
# stations instead of a polyline covering an entire line. | |
shortest_path = graph.FindShortestMultiPointPath(poly_points, | |
options.max_distance, | |
verbosity=verbosity) | |
if shortest_path: | |
matches = [shortest_path] | |
else: | |
matches = [] | |
pattern_poly = shapelib.Poly(poly_points) | |
shape_match = GetMatchingShape(pattern_poly, trips[0], | |
matches, options.max_distance, | |
verbosity=verbosity) | |
if shape_match: | |
shape_count += 1 | |
# Rename shape for readability. | |
shape_match = shapelib.Poly(points=shape_match.GetPoints(), | |
name="shape_%d" % shape_count) | |
for trip in trips: | |
try: | |
shape = schedule.GetShape(shape_match.GetName()) | |
except KeyError: | |
shape = transitfeed.Shape(shape_match.GetName()) | |
for point in shape_match.GetPoints(): | |
(lat, lng) = point.ToLatLng() | |
shape.AddPoint(lat, lng) | |
schedule.AddShapeObject(shape) | |
trip.shape_id = shape.shape_id | |
print "Matched %d shapes out of %d patterns" % (shape_count, pattern_count) | |
schedule.WriteGoogleTransitFeed(options.dest_gtfs) | |
if __name__ == '__main__': | |
# Import psyco if available for better performance. | |
try: | |
import psyco | |
psyco.full() | |
except ImportError: | |
pass | |
options_parser = DefineOptions() | |
(options, args) = options_parser.parse_args() | |
ValidateArgs(options_parser, options, args) | |
if options.print_columns: | |
for arg in args: | |
PrintColumns(arg) | |
sys.exit(0) | |
key_cols = options.keycols.split(',') | |
main(key_cols) | |
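For context, the two-stage matching strategy used in main() above — first look for a single polyline running from the pattern's first stop to its last, then fall back to stitching per-segment polylines into one path — can be sketched without the shapelib dependency. This is an illustrative sketch only: the function names below are not part of transitfeed, and plain planar distance stands in for the great-circle distance the real tool uses.

```python
import math

def dist(a, b):
    # Planar approximation; shapelib uses great-circle distance on the sphere.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def find_matching_polys(polys, start, end, max_dist):
    """Stage 1: polylines whose endpoints lie near the pattern's endpoints."""
    return [p for p in polys
            if dist(p[0], start) <= max_dist and dist(p[-1], end) <= max_dist]

def stitch_segments(polys, start, end, max_dist):
    """Stage 2 fallback: greedily chain segment polylines end to end,
    for shapefiles that store one polyline per inter-station segment."""
    path, used = None, set()
    for i, p in enumerate(polys):
        if dist(p[0], start) <= max_dist:
            path, used = list(p), {i}
            break
    while path is not None and dist(path[-1], end) > max_dist:
        for i, p in enumerate(polys):
            if i not in used and dist(p[0], path[-1]) <= max_dist:
                path.extend(p[1:])
                used.add(i)
                break
        else:
            return None  # no unused segment continues the path
    return path
```

For example, three abutting segments stitch into one path, while stage 1 only accepts a polyline that already spans both endpoints. The real FindShortestMultiPointPath additionally checks every intermediate stop against the candidate path, not just the endpoints.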
agency_id,agency_name,agency_url,agency_timezone,agency_phone | |
DTA,Autorité de passage de démonstration,http://google.com,America/Los_Angeles,123 12314 | |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,2007.01.01,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 | |
service_id,date,exception_type | |
FULLW,2007-06-04,2 | |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport ⇒ Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog ⇒ Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach ⇒ Airport Shuttle,,3,,, | |
CITY,DTA,Ō,Bar Circle,Route with ĸool unicode shortname,3,,, | |
AAMV,DTA,,Airport ⇒ Amargosa Valley,,3,,, | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,0,to airport,1,0,0.212 | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,0,0,1.043 | |
CITY1,6:00:00,6:00:00,STAGECOACH,0,,,, | |
CITY1,6:05:00,6:07:00,NANAA,5,going to nadav,2,3, | |
CITY1,6:12:00,6:14:00,NADAV,10,,,, | |
CITY1,6:19:00,6:21:00,DADAN,15,,,, | |
CITY1,6:26:00,6:28:00,EMSI,20,,,, | |
CITY2,6:28:00,6:30:00,EMSI,100,,,, | |
CITY2,6:35:00,6:37:00,DADAN,200,,,, | |
CITY2,6:42:00,6:44:00,NADAV,300,,,, | |
CITY2,6:49:00,6:51:00,NANAA,400,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,500,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url,stop_code,location_type,parent_station | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,,,1234,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,,,1235,0,BEATTY_AIRPORT_STATION | |
BEATTY_AIRPORT_STATION,Nye County Airport (Demo),,36.868446,-116.784582,,,1235,1, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,,,,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,,,1236,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,,,1237,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,,,1238,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,,,,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,,,,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,,,,, | |
from_stop_id,to_stop_id,transfer_type,min_transfer_time | |
NADAV,NANAA,3, | |
EMSI,NANAA,2,1200 | |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
Binary files a/origin-src/transitfeed-1.2.5/test/data/bad_eol.zip and /dev/null differ
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3,,, | |
AAMV,DTA,,Airport - Amargosa Valley,,3,,, | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,0,0,1.043 | |
CITY1,6:00:00,6:00:00,STAGECOACH,0,,,, | |
CITY1,6:05:00,6:07:00,NANAA,5,going to nadav,2,3, | |
CITY1,6:12:00,6:14:00,NADAV,10,,,, | |
CITY1,6:19:00,6:21:00,DADAN,15,,,, | |
CITY1,6:26:00,6:28:00,EMSI,20,,,, | |
CITY2,6:28:00,6:30:00,EMSI,100,,,, | |
CITY2,6:35:00,6:37:00,DADAN,200,,,, | |
CITY2,6:42:00,6:44:00,NADAV,300,,,, | |
CITY2,6:49:00,6:51:00,NANAA,400,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,500,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, | |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, | |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Autorité de passage de démonstration,http://google.com,America/Los_Angeles | |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport ⇒ Bullfrog,,3 | |
BFC,DTA,,Bullfrog ⇒ Furnace Creek Resort,,3 | |
STBA,DTA,,Stagecoach ⇒ Airport Shuttle,,3 | |
CITY,DTA,Ō,Bar Circle,Route with ĸool unicode short name,3 | |
AAMV,DTA,,Airport ⇒ Amargosa Valley,,3 | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,to airport,1,0,0.212 | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,0,0,1.043 | |
CITY1,6:00:00,6:00:00,STAGECOACH,1 | |
CITY1,6:05:00,6:07:00,NANAA,2,going to nadav,2,3, | |
CITY1,6:12:00,6:14:00,NADAV,3 | |
CITY1,6:19:00,6:21:00,DADAN,4 | |
CITY1,6:26:00,6:28:00,EMSI,5 | |
CITY2,6:28:00,6:30:00,EMSI,1 | |
CITY2,6:35:00,6:37:00,DADAN,2 | |
CITY2,6:42:00,6:44:00,NADAV,3 | |
CITY2,6:49:00,6:51:00,NANAA,4 | |
CITY2,6:56:00,6:58:00,STAGECOACH,5 | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1 | |
AB1,8:10:00,8:15:00,BULLFROG,2 | |
AB2,12:05:00,12:05:00,BULLFROG,1 | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2 | |
BFC1,8:20:00,8:20:00,BULLFROG,1 | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2 | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1 | |
BFC2,12:00:00,12:00:00,BULLFROG,2 | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1 | |
AAMV1,9:00:00,9:00:00,AMV,2 | |
AAMV2,10:00:00,10:00:00,AMV,1 | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2 | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1 | |
AAMV3,14:00:00,14:00:00,AMV,2 | |
AAMV4,15:00:00,15:00:00,AMV,1 | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2 | |
Binary files a/origin-src/transitfeed-1.2.5/test/data/contains_null/stops.txt and /dev/null differ
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1 | |
AB,FULLW,AB2,to Airport,1,2 | |
STBA,FULLW,STBA,Shuttle | |
CITY,FULLW,CITY1,Ō,0 | |
CITY,FULLW,CITY2,Ō,1 | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1 | |
BFC,FULLW,BFC2,to Bullfrog,1,2 | |
AAMV,WE,AAMV1,to Amargosa Valley,0 | |
AAMV,WE,AAMV2,to Airport,1 | |
AAMV,WE,AAMV3,to Amargosa Valley,0 | |
AAMV,WE,AAMV4,to Airport,1 | |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Autorité de passage de démonstration,http://google.com,America/Los_Angeles | |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 | |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport ⇒ Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog ⇒ Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach ⇒ Airport Shuttle,,3,,, | |
CITY,DTA,Ō,Bar Circle,Route with ĸool unicode short name,3,,, | |
AAMV,DTA,,Airport ⇒ Amargosa Valley,,3,,, | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,to airport,1,0,0.212 | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,0,0,1.043 | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,going to nadav,2,3, | |
CITY1,6:12:00,6:14:00,NADAV,3,,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,,, | |
CITY2,6:42:00,6:44:00,NADAV,3,,,, | |
CITY2,6:49:00,6:51:00,NANAA,4,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Démonstration),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, | |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,Ō,0,, | |
CITY,FULLW,CITY2,Ō,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, | |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3 | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3 | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3 | |
CITY,DTA,,City,,3 | |
AAMV,DTA,,Airport - Amargosa Valley,,3 |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1 | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2 | |
CITY1,6:00:00,6:00:00,STAGECOACH,1 | |
CITY1,6:05:00,6:07:00,NANAA,2 | |
CITY1,6:12:00,6:14:00,NADAV,3 | |
CITY1,6:19:00,6:21:00,DADAN,4 | |
CITY1,6:26:00,6:28:00,EMSI,5 | |
CITY2,6:28:00,6:30:00,EMSI,1 | |
CITY2,6:35:00,6:37:00,DADAN,2 | |
CITY2,6:42:00,6:44:00,NADAV,3 | |
CITY2,6:49:00,6:51:00,NANAA,4 | |
CITY2,6:56:00,6:58:00,STAGECOACH,5 | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1 | |
AB1,8:10:00,8:15:00,BULLFROG,2 | |
AB2,12:05:00,12:05:00,BULLFROG,1 | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2 | |
BFC1,8:20:00,8:20:00,FROG,1 | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2 | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1 | |
BFC2,12:00:00,12:00:00,BULLFROG,2 | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1 | |
AAMV1,9:00:00,9:00:00,AMV,2 | |
AAMV2,10:00:00,10:00:00,AMV,1 | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2 | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1 | |
AAMV3,14:00:00,14:00:00,AMV,2 | |
AAMV4,15:00:00,15:00:00,AMV,1 | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2 |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162 | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582 | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797 | |
FROG,Bull Frog,,36.881083,-116.817968 | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677 | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821 | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472 | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242 | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218 | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094 |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1 | |
AB,FULLW,AB2,to Airport,1,2 | |
STBA,FULLW,STBA,Shuttle | |
CITY,FULLW,CITY1,,0 | |
CITY,FULLW,CITY2,,1 | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1 | |
BFC,FULLW,BFC2,to Bullfrog,1,2 | |
AAMV,WE,AAMV1,to Amargosa Valley,0 | |
AAMV,WE,AAMV2,to Airport,1 | |
AAMV,WE,AAMV3,to Amargosa Valley,0 | |
AAMV,WE,AAMV4,to Airport,1 |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Autorité de passage de démonstration,http://google.com,America/Los_Angeles | |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
CITY,DTA,Ō,Bar Circle,Route with ĸool unicode shortname,3,,, | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
CITY1,6:00:00,6:00:00,STAGECOACH,0,,,, | |
CITY1,6:05:00,6:07:00,NANAA,10,going to nadav,2,3, | |
CITY1,6:12:00,6:14:00,NADAV,10,,,, | |
CITY1,6:19:00,6:21:00,DADAN,15,,,, | |
CITY1,6:26:00,6:28:00,EMSI,20,,,, | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url,stop_code | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,,,1236 | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,,,1237 | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,,,1238 | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,,, | |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
CITY,FULLW,CITY1,,0,, | |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3 | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3 | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3 | |
CITY,DTA,,City,,3 | |
AAMV,DTA,,Airport - Amargosa Valley,,3 |
trip_id,arrival_time,departure_time,stop_id,stop_sequence | |
STBA,6:00:00,6:00:00,STAGECOACH,1 | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2 | |
CITY1,6:00:00,6:00:00,STAGECOACH,1 | |
CITY1,6:05:00,6:07:00,NANAA,2 | |
CITY1,6:12:00,6:14:00,NADAV,3 | |
CITY1,6:19:00,6:21:00,DADAN,4 | |
CITY1,6:26:00,6:28:00,EMSI,5 | |
CITY2,6:28:00,6:30:00,EMSI,1 | |
CITY2,6:35:00,6:37:00,DADAN,2 | |
CITY2,6:42:00,6:44:00,NADAV,3 | |
CITY2,6:49:00,6:51:00,NANAA,4 | |
CITY2,6:56:00,6:58:00,STAGECOACH,5 | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1 | |
AB1,8:10:00,8:15:00,BULLFROG,2 | |
AB2,12:05:00,12:05:00,BULLFROG,1 | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2 | |
BFC1,8:20:00,8:20:00,BULLFROG,1 | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2 | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1 | |
BFC2,12:00:00,12:00:00,BULLFROG,2 | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1 | |
AAMV1,9:00:00,9:00:00,AMV,2 | |
AAMV2,10:00:00,10:00:00,AMV,1 | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2 | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1 | |
AAMV3,14:00:00,14:00:00,AMV,2 | |
AAMV4,15:00:00,15:00:00,AMV,1 | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2 | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162 | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582 | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797 | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677 | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821 | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472 | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242 | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218 | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094 |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1 | |
AB,FULLW,AB2,to Airport,1,2 | |
STBA,FULLW,STBA,Shuttle | |
CITY,FULLW,CITY1,,0 | |
CITY,FULLW,CITY2,,1 | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1 | |
BFC,FULLW,BFC2,to Bullfrog,1,2 | |
AAMV,WE,AAMV1,to Amargosa Valley,0 | |
AAMV,WE,AAMV2,to Airport,1 | |
AAMV,WE,AAMV3,to Amargosa Valley,0 | |
AAMV,WE,AAMV4,to Airport,1 |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type | |
AB,DTA,,Airport - Bullfrog,,3 | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3 | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3, | |
CITY,DTA,,City,,3 | |
AAMV,DTA,,Airport - Amargosa Valley,,3 | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence | |
STBA,6:00:00,6:00:00,STAGECOACH,1 | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2 | |
CITY1,6:00:00,6:00:00,STAGECOACH,1 | |
CITY1,6:05:00,6:07:00,NANAA,2 | |
CITY1,6:12:00,6:14:00,NADAV,3 | |
CITY1,6:19:00,6:21:00,DADAN,4 | |
CITY1,6:26:00,6:28:00,EMSI,5 | |
CITY2,6:28:00,6:30:00,EMSI,1 | |
CITY2,6:35:00,6:37:00,DADAN,2 | |
CITY2,6:42:00,6:44:00,NADAV,3 | |
CITY2,6:49:00,6:51:00,NANAA,4 | |
CITY2,6:56:00,6:58:00,STAGECOACH,5 | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1 | |
AB1,8:10:00,8:15:00,BULLFROG,2 | |
AB2,12:05:00,12:05:00,BULLFROG,1 | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2 | |
BFC1,8:20:00,8:20:00,BULLFROG,1 | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2 | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1 | |
BFC2,12:00:00,12:00:00,BULLFROG,2 | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1 | |
AAMV1,9:00:00,9:00:00,AMV,2 | |
AAMV2,10:00:00,10:00:00,AMV,1 | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2 | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1 | |
AAMV3,14:00:00,14:00:00,AMV,2 | |
AAMV4,15:00:00,15:00:00,AMV,1 | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2 | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162 | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582 | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797 | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677 | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821 | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472 | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242 | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218 | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094 | |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id | |
AB,FULLW,AB1,to Bullfrog,0,1 | |
AB,FULLW,AB2,to Airport,1,2 | |
STBA,FULLW,STBA,Shuttle,1, | |
CITY,FULLW,CITY1,,0, | |
CITY,FULLW,CITY2,,1, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1 | |
BFC,FULLW,BFC2,to Bullfrog,1,2 | |
AAMV,WE,AAMV1,to Amargosa Valley,0, | |
AAMV,WE,AAMV2,to Airport,1, | |
AAMV,WE,AAMV3,to Amargosa Valley,0, | |
AAMV,WE,AAMV4,to Airport,1, | |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,10,Airport - Bullfrog,,3,,, | |
BFC,DTA,20,Bullfrog - Furnace Creek Resort,,3,,, | |
STBA,DTA,30,Stagecoach - Airport Shuttle,,3,,, | |
CITY,DTA,40,City,,3,,, | |
AAMV,DTA,50,Airport - Amargosa Valley,,3,,, |
shape_id,shape_pt_lat,shape_pt_lon,shape_pt_sequence,shape_dist_traveled |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,
CITY1,6:05:00,6:07:00,NANAA,2,,,
CITY1,6:12:00,6:14:00,NADAV,3,,,
CITY1,6:19:00,6:21:00,DADAN,4,,,
CITY1,6:26:00,6:28:00,EMSI,5,,,
CITY2,6:28:00,6:30:00,EMSI,1,,,
CITY2,6:35:00,6:37:00,DADAN,2,,,
CITY2,6:42:00,6:44:00,NADAV,3,,,
CITY2,6:49:00,6:51:00,NANAA,4,,,
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,
CITY3,6:00:00,6:00:00,STAGECOACH,1,,,
CITY3,6:05:00,6:07:00,NANAA,2,,,
CITY3,6:12:00,6:14:00,NADAV,3,,,
CITY3,6:19:00,6:21:00,DADAN,4,,,
CITY3,6:26:00,6:28:00,EMSI,5,,,
CITY4,6:28:00,6:30:00,EMSI,1,,,
CITY4,6:35:00,6:37:00,DADAN,2,,,
CITY4,6:42:00,6:44:00,NADAV,3,,,
CITY4,6:49:00,6:51:00,NANAA,4,,,
CITY4,6:56:00,6:58:00,STAGECOACH,5,,,
CITY5,6:00:00,6:00:00,STAGECOACH,1,,,
CITY5,6:05:00,6:07:00,NANAA,2,,,
CITY5,6:12:00,6:14:00,NADAV,3,,,
CITY5,6:19:00,6:21:00,DADAN,4,,,
CITY5,6:26:00,6:28:00,EMSI,5,,,
CITY6,6:28:00,6:30:00,EMSI,1,,,
CITY6,6:35:00,6:37:00,DADAN,2,,,
CITY6,6:42:00,6:44:00,NADAV,3,,,
CITY6,6:49:00,6:51:00,NANAA,4,,,
CITY6,6:56:00,6:58:00,STAGECOACH,5,,,
CITY7,6:00:00,6:00:00,STAGECOACH,1,,,
CITY7,6:05:00,6:07:00,NANAA,2,,,
CITY7,6:12:00,6:14:00,NADAV,3,,,
CITY7,6:19:00,6:21:00,DADAN,4,,,
CITY7,6:26:00,6:28:00,EMSI,5,,,
CITY8,6:28:00,6:30:00,EMSI,1,,,
CITY8,6:35:00,6:37:00,DADAN,2,,,
CITY8,6:42:00,6:44:00,NADAV,3,,,
CITY8,6:49:00,6:51:00,NANAA,4,,,
CITY8,6:56:00,6:58:00,STAGECOACH,5,,,
CITY9,6:00:00,6:00:00,STAGECOACH,1,,,
CITY9,6:05:00,6:07:00,NANAA,2,,,
CITY9,6:12:00,6:14:00,NADAV,3,,,
CITY9,6:19:00,6:21:00,DADAN,4,,,
CITY9,6:26:00,6:28:00,EMSI,5,,,
CITY10,6:28:00,6:30:00,EMSI,1,,,
CITY10,6:35:00,6:37:00,DADAN,2,,,
CITY10,6:42:00,6:44:00,NADAV,3,,,
CITY10,6:49:00,6:51:00,NANAA,4,,,
CITY10,6:56:00,6:58:00,STAGECOACH,5,,,
CITY11,6:00:00,6:00:00,NANAA,1,,,
CITY11,6:05:00,6:07:00,BEATTY_AIRPORT,2,,,
CITY11,6:12:00,6:14:00,BULLFROG,3,,,
CITY11,6:19:00,6:21:00,DADAN,4,,,
CITY11,6:26:00,6:28:00,EMSI,5,,,
CITY12,6:28:00,6:30:00,EMSI,1,,,
CITY12,6:35:00,6:37:00,DADAN,2,,,
CITY12,7:07:00,7:09:00,AMV,3,,,
CITY12,7:39:00,7:41:00,BEATTY_AIRPORT,4,,,
CITY12,7:46:00,7:48:00,STAGECOACH,5,,,
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,
AB1,8:10:00,8:15:00,BULLFROG,2,,,
AB2,12:05:00,12:05:00,BULLFROG,1,,,
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,
BFC1,8:20:00,8:20:00,BULLFROG,1,,,
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,
BFC2,12:00:00,12:00:00,BULLFROG,2,,,
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,
AAMV1,9:00:00,9:00:00,AMV,2,,,
AAMV2,10:00:00,10:00:00,AMV,1,,,
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,
AAMV3,14:00:00,14:00:00,AMV,2,,,
AAMV4,15:00:00,15:00:00,AMV,1,,,
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,,
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,,
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,,
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,,
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,,
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,,
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,,
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,,
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,,
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id
AB,FULLW,AB1,to Bullfrog,0,1,
AB,FULLW,AB2,to Airport,1,2,
STBA,FULLW,STBA,Shuttle,,,
CITY,FULLW,CITY1,,0,,
CITY,FULLW,CITY2,,1,,
CITY,FULLW,CITY3,,0,,
CITY,FULLW,CITY4,,1,,
CITY,FULLW,CITY5,,0,,
CITY,FULLW,CITY6,,1,,
CITY,FULLW,CITY7,,0,,
CITY,FULLW,CITY8,,1,,
CITY,FULLW,CITY9,,0,,
CITY,FULLW,CITY10,,1,,
CITY,FULLW,CITY11,,0,,
CITY,FULLW,CITY12,,1,,
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1,
BFC,FULLW,BFC2,to Bullfrog,1,2,
AAMV,WE,AAMV1,to Amargosa Valley,0,,
AAMV,WE,AAMV2,to Airport,1,,
AAMV,WE,AAMV3,to Amargosa Valley,0,,
AAMV,WE,AAMV4,to Airport,1,,
agency_id,agency_name,agency_url,agency_timezone
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date
FULLW,1,1,1,1,1,1,1,20070101,20101231
WE,0,0,0,0,0,1,1,20070101,20101231
service_id,date,exception_type
FULLW,20070604,2
WE,20070604,1
fare_id,price,currency_type,payment_method,transfers,transfer_duration
p,1.25,USD,0,0,
a,5.25,USD,0,0,
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color
route_1,DTA,1,route with a single trip,,0,http://routes.com/route_1,FF0000,
route_2,DTA,2,route with two trips and one component,test route desc 2,1,,00FF00,
route_3,DTA,3,route with two trips and two components,test route desc 3,2,http://routes.com/route_3,,
route_4,DTA,4,route with two equal trips,test route desc 4,3,http://routes.com/route_4,FFFF00,
route_5,DTA,5,route with two trip but no graph,test route desc 5,4,http://routes.com/route_5,FF00FF,
route_6,DTA,6,route with one trip and no stops,test route desc 6,5,http://routes.com/route_6,00FFFF,
route_7,DTA,7,route with no trips,test route desc 7,6,http://routes.com/route_7,,
route_8,DTA,8,route with a cyclic pattern,test route desc 8,7,http://routes.com/route_8,,
shape_id,shape_pt_sequence,shape_pt_lat,shape_pt_lon
shape_1,1,1,1
shape_1,2,2,4
shape_1,3,3,9
shape_1,4,4,16
shape_2,1,11,11
shape_2,2,12,14
shape_2,3,13,19
shape_2,4,14,26
shape_3,1,21,21
shape_3,2,22,24
shape_3,3,23,29
shape_3,4,24,36
trip_id,arrival_time,departure_time,stop_id,stop_sequence
route_1_1,6:00:00,6:00:00,stop1,1
route_1_1,7:00:00,7:00:00,stop2,2
route_1_1,8:00:00,8:00:00,stop3,3
route_2_1,6:00:00,6:00:00,stop1,1
route_2_1,7:00:00,7:00:00,stop2,2
route_2_1,8:00:00,8:00:00,stop3,3
route_2_2,6:00:00,6:00:00,stop2,1
route_2_2,7:00:00,7:00:00,stop4,2
route_2_2,8:00:00,8:00:00,stop5,3
route_3_1,6:00:00,6:00:00,stop1,1
route_3_1,7:00:00,7:00:00,stop2,2
route_3_1,8:00:00,8:00:00,stop3,3
route_3_2,6:00:00,6:00:00,stop4,1
route_3_2,7:00:00,7:00:00,stop5,2
route_3_2,8:00:00,8:00:00,stop6,3
route_4_1,6:00:00,6:00:00,stop1,1
route_4_1,7:00:00,7:00:00,stop2,2
route_4_1,8:00:00,8:00:00,stop3,3
route_4_2,6:00:00,6:00:00,stop1,1
route_4_2,7:00:00,7:00:00,stop2,2
route_4_2,8:00:00,8:00:00,stop3,3
route_5_1,6:00:00,6:00:00,stop1,1
route_5_2,6:00:00,6:00:00,stop2,1
route_8_1,6:00:00,6:00:00,stop1,1
route_8_1,7:00:00,7:00:00,stop2,2
route_8_1,8:00:00,8:00:00,stop3,3
route_8_1,9:00:00,9:00:00,stop1,4
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url
stop1,Furnace Creek Resort (Demo),,36.425288,-117.133162,,http://stops.com/stop1
stop2,Nye County Airport (Demo),the stop at Nye County Airport,36.868446,-116.784582,,
stop3,Bullfrog (Demo),the stop at Bullfrog,36.88108,-116.81797,,http://stops.com/stop3
stop4,Stagecoach Hotel & Casino (Demo),the stop at Stagecoach Hotel & Casino,36.915682,-116.751677,,http://stops.com/stop4
stop5,North Ave / D Ave N (Demo),the stop at North Ave / D Ave N,36.914893,-116.76821,,http://stops.com/stop5
stop6,North Ave / N A Ave (Demo),the stop at North Ave / N A Ave,36.914944,-116.761472,,http://stops.com/stop6
stop7,Doing Ave / D Ave N (Demo),the stop at Doing Ave / D Ave N,36.909489,-116.768242,,http://stops.com/stop7
stop8,E Main St / S Irving St (Demo),the stop at E Main St / S Irving St,36.905697,-116.76218,,http://stops.com/stop8
stop9,Amargosa Valley (Demo),the stop at Amargosa Valley,36.641496,-116.40094,,http://stops.com/stop9
route_id,service_id,trip_id,shape_id
route_1,FULLW,route_1_1,shape_1
route_2,FULLW,route_2_1,shape_2
route_2,FULLW,route_2_2,shape_3
route_3,FULLW,route_3_1,shape_1
route_3,FULLW,route_3_2,shape_1
route_4,FULLW,route_4_1,
route_4,FULLW,route_4_2,
route_5,FULLW,route_5_1,
route_5,FULLW,route_5_2,
route_8,FULLW,route_8_1,
route_8,WE,route_8_2,
agency_id,agency_name,agency_url,agency_timezone,agency_phone
DTA,Autorité de passage de démonstration,http://google.com,America/Los_Angeles,123 12314
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date
FULLW,1,1,1,1,1,1,1,20070101,20111231
WE,0,0,0,0,0,1,1,20070101,20111231
service_id,date,exception_type
FULLW,20070604,2
fare_id,price,currency_type,payment_method,transfers,transfer_duration
p,1.25,USD,0,0,
a,5.25,USD,0,0,
fare_id,route_id,origin_id,destination_id,contains_id
p,AB,,,
p,STBA,,,
p,BFC,,,
a,AAMV,,,
trip_id,start_time,end_time,headway_secs
STBA,6:00:00,22:00:00,1800
CITY1,6:00:00,7:59:59,1800
CITY2,6:00:00,7:59:59,1800
CITY1,8:00:00,9:59:59,600
CITY2,8:00:00,9:59:59,600
CITY1,10:00:00,15:59:59,1800
CITY2,10:00:00,15:59:59,1800
CITY1,16:00:00,18:59:59,600
CITY2,16:00:00,18:59:59,600
CITY1,19:00:00,22:00:00,1800
CITY2,19:00:00,22:00:00,1800
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color
AB,DTA,,Airport ⇒ Bullfrog,,3,,,
BFC,DTA,,Bullfrog ⇒ Furnace Creek Resort,,3,,,
STBA,DTA,,Stagecoach ⇒ Airport Shuttle,,3,,,
CITY,DTA,Ō,Bar Circle,Route with ĸool unicode shortname,3,,,
AAMV,DTA,,Airport ⇒ Amargosa Valley,,3,,,
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled
STBA,6:00:00,6:00:00,STAGECOACH,0,to airport,1,0,0.212
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,0,0,1.043
CITY1,6:00:00,6:00:00,STAGECOACH,0,,,,
CITY1,6:05:00,6:07:00,NANAA,5,going to nadav,2,3,
CITY1,6:12:00,6:14:00,NADAV,10,,,,
CITY1,6:19:00,6:21:00,DADAN,15,,,,
CITY1,6:26:00,6:28:00,EMSI,20,,,,
CITY2,6:28:00,6:30:00,EMSI,100,,,,
CITY2,6:35:00,6:37:00,DADAN,200,,,,
CITY2,6:42:00,6:44:00,NADAV,300,,,,
CITY2,6:49:00,6:51:00,NANAA,400,,,,
CITY2,6:56:00,6:58:00,STAGECOACH,500,,,,
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,,
AB1,8:10:00,8:15:00,BULLFROG,2,,,,
AB2,12:05:00,12:05:00,BULLFROG,1,,,,
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,,
BFC1,8:20:00,8:20:00,BULLFROG,1,,,,
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,,
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,,
BFC2,12:00:00,12:00:00,BULLFROG,2,,,,
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,,
AAMV1,9:00:00,9:00:00,AMV,2,,,,
AAMV2,10:00:00,10:00:00,AMV,1,,,,
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,,
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,,
AAMV3,14:00:00,14:00:00,AMV,2,,,,
AAMV4,15:00:00,15:00:00,AMV,1,,,,
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,,
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url,stop_code,location_type,parent_station
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,,,1234,,
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,,,1235,0,BEATTY_AIRPORT_STATION
BEATTY_AIRPORT_STATION,Nye County Airport (Demo),,36.868446,-116.784582,,,1235,1,
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,,,,,
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,,,1236,,
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,,,1237,,
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,,,1238,,
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,,,,,
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,,,,,
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,,,,,
from_stop_id,to_stop_id,transfer_type,min_transfer_time
NADAV,NANAA,3,
EMSI,NANAA,2,1200
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id
AB,FULLW,AB1,to Bullfrog,0,1,
AB,FULLW,AB2,to Airport,1,2,
STBA,FULLW,STBA,Shuttle,,,
CITY,FULLW,CITY1,,0,,
CITY,FULLW,CITY2,,1,,
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1,
BFC,FULLW,BFC2,to Bullfrog,1,2,
AAMV,WE,AAMV1,to Amargosa Valley,0,,
AAMV,WE,AAMV2,to Airport,1,,
AAMV,WE,AAMV3,to Amargosa Valley,0,,
AAMV,WE,AAMV4,to Airport,1,,
agency_id,agency_name,agency_url,agency_timezone
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date
FULLW,1,1,1,1,1,1,1,20070101,20101231
WE,0,0,0,0,0,1,1,20070101,20101231
service_id,date,exception_type
FULLW,20070604,2
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color
AB,DTA,,Airport - Bullfrog,,3,,,
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3,,,
STBA,DVT,,Stagecoach - Airport Shuttle,,3,,,
CITY,DTA,,City,,3,,,
AAMV,DTA,,Airport - Amargosa Valley,,3,,,
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled
STBA,6:00:00,6:00:00,STAGECOACH,1,,,,
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,,,
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,,
CITY1,6:05:00,6:07:00,NANAA,2,,,,
CITY1,6:12:00,6:14:00,NADAV,3,,,,
CITY1,6:19:00,6:21:00,DADAN,4,,,,
CITY1,6:26:00,6:28:00,EMSI,5,,,,
CITY2,6:28:00,6:30:00,EMSI,1,,,,
CITY2,6:35:00,6:37:00,DADAN,2,,,,
CITY2,6:42:00,6:44:00,NADAV,3,,,,
CITY2,6:49:00,6:51:00,NANAA,4,,,,
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,,
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,,
AB1,8:10:00,8:15:00,BULLFROG,2,,,,
AB2,12:05:00,12:05:00,BULLFROG,1,,,,
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,,
BFC1,8:20:00,8:20:00,BULLFROG,1,,,,
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,,
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,,
BFC2,12:00:00,12:00:00,BULLFROG,2,,,,
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,,
AAMV1,9:00:00,9:00:00,AMV,2,,,,
AAMV2,10:00:00,10:00:00,AMV,1,,,,
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,,
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,,
AAMV3,14:00:00,14:00:00,AMV,2,,,,
AAMV4,15:00:00,15:00:00,AMV,1,,,,
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,,
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,,
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,,
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,,
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,,
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,,
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,,
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,,
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,,
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,,
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id
AB,FULLW,AB1,to Bullfrog,0,1,
AB,FULLW,AB2,to Airport,1,2,
STBA,FULLW,STBA,Shuttle,,,
CITY,FULLW,CITY1,,0,,
CITY,FULLW,CITY2,,1,,
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1,
BFC,FULLW,BFC2,to Bullfrog,1,2,
AAMV,WE,AAMV1,to Amargosa Valley,0,,
AAMV,WE,AAMV2,to Airport,1,,
AAMV,WE,AAMV3,to Amargosa Valley,0,,
AAMV,WE,AAMV4,to Airport,1,,
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date
FULLW,1,1,1,1,1,1,1,20070101,20101231
WE,0,0,0,0,0,1,1,20070101,20101231
service_id,date,exception_type
FULLW,20070604,2
fare_id,price,currency_type,payment_method,transfers,transfer_duration
p,1.25,USD,0,0,
a,5.25,USD,0,0,
fare_id,route_id,origin_id,destination_id,contains_id
p,AB,,,
p,STBA,,,
p,BFC,,,
a,AAMV,,,
trip_id,start_time,end_time,headway_secs
STBA,6:00:00,22:00:00,1800
CITY1,6:00:00,7:59:59,1800
CITY2,6:00:00,7:59:59,1800
CITY1,8:00:00,9:59:59,600
CITY2,8:00:00,9:59:59,600
CITY1,10:00:00,15:59:59,1800
CITY2,10:00:00,15:59:59,1800
CITY1,16:00:00,18:59:59,600
CITY2,16:00:00,18:59:59,600
CITY1,19:00:00,22:00:00,1800
CITY2,19:00:00,22:00:00,1800
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type
AB,DTA,,Airport - Bullfrog,,3
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3
STBA,DTA,,Stagecoach - Airport Shuttle,,3
CITY,DTA,,City,,3
AAMV,DTA,,Airport - Amargosa Valley,,3
trip_id,arrival_time,departure_time,stop_id,stop_sequence
STBA,6:00:00,6:00:00,STAGECOACH,1
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2
CITY1,6:00:00,6:00:00,STAGECOACH,1
CITY1,6:05:00,6:07:00,NANAA,2
CITY1,6:12:00,6:14:00,NADAV,3
CITY1,6:19:00,6:21:00,DADAN,4
CITY1,6:26:00,6:28:00,EMSI,5
CITY2,6:28:00,6:30:00,EMSI,1
CITY2,6:35:00,6:37:00,DADAN,2
CITY2,6:42:00,6:44:00,NADAV,3
CITY2,6:49:00,6:51:00,NANAA,4
CITY2,6:56:00,6:58:00,STAGECOACH,5
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1
AB1,8:10:00,8:15:00,BULLFROG,2
AB2,12:05:00,12:05:00,BULLFROG,1
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2
BFC1,8:20:00,8:20:00,BULLFROG,1
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1
BFC2,12:00:00,12:00:00,BULLFROG,2
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1
AAMV1,9:00:00,9:00:00,AMV,2
AAMV2,10:00:00,10:00:00,AMV,1
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1
AAMV3,14:00:00,14:00:00,AMV,2
AAMV4,15:00:00,15:00:00,AMV,1
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2
stop_id,stop_name,stop_desc,stop_lat,stop_lon
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218
AMV,Amargosa Valley (Demo),,36.641496,-116.40094
route_id,service_id,trip_id,trip_headsign,direction_id
AB,FULLW,AB1,to Bullfrog,0
AB,FULLW,AB2,to Airport,1
STBA,FULLW,STBA,Shuttle
CITY,FULLW,CITY1,,0
CITY,FULLW,CITY2,,1
BFC,FULLW,BFC1,to Furnace Creek Resort,0
BFC,FULLW,BFC2,to Bullfrog,1
AAMV,WE,AAMV1,to Amargosa Valley,0
AAMV,WE,AAMV2,to Airport,1
AAMV,WE,AAMV3,to Amargosa Valley,0
AAMV,WE,AAMV4,to Airport,1
agency_id,agency_name,agency_url,agency_timezone
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles
fare_id,price,currency_type,payment_method,transfers,transfer_duration
p,1.25,USD,0,0,
a,5.25,USD,0,0,
fare_id,route_id,origin_id,destination_id,contains_id
p,AB,,,
p,STBA,,,
p,BFC,,,
a,AAMV,,,
trip_id,start_time,end_time,headway_secs
STBA,6:00:00,22:00:00,1800
CITY1,6:00:00,7:59:59,1800
CITY2,6:00:00,7:59:59,1800
CITY1,8:00:00,9:59:59,600
CITY2,8:00:00,9:59:59,600
CITY1,10:00:00,15:59:59,1800
CITY2,10:00:00,15:59:59,1800
CITY1,16:00:00,18:59:59,600
CITY2,16:00:00,18:59:59,600
CITY1,19:00:00,22:00:00,1800
CITY2,19:00:00,22:00:00,1800
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color
AB,DTA,,Airport - Bullfrog,,3,,,
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3,,,
STBA,DTA,,Stagecoach - Airport Shuttle,,3,,,
CITY,DTA,,City,,3,,,
AAMV,DTA,,Airport - Amargosa Valley,,3,,,
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled
STBA,6:00:00,6:00:00,STAGECOACH,1,,,,
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,,,
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,,
CITY1,6:05:00,6:07:00,NANAA,2,,,,
CITY1,6:12:00,6:14:00,NADAV,3,,,,
CITY1,6:19:00,6:21:00,DADAN,4,,,,
CITY1,6:26:00,6:28:00,EMSI,5,,,,
CITY2,6:28:00,6:30:00,EMSI,1,,,,
CITY2,6:35:00,6:37:00,DADAN,2,,,,
CITY2,6:42:00,6:44:00,NADAV,3,,,,
CITY2,6:49:00,6:51:00,NANAA,4,,,,
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,,
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,,
AB1,8:10:00,8:15:00,BULLFROG,2,,,,
AB2,12:05:00,12:05:00,BULLFROG,1,,,,
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,,
BFC1,8:20:00,8:20:00,BULLFROG,1,,,,
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,,
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,,
BFC2,12:00:00,12:00:00,BULLFROG,2,,,,
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,,
AAMV1,9:00:00,9:00:00,AMV,2,,,,
AAMV2,10:00:00,10:00:00,AMV,1,,,,
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,,
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,,
AAMV3,14:00:00,14:00:00,AMV,2,,,,
AAMV4,15:00:00,15:00:00,AMV,1,,,,
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,,
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,,
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,,
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,,
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,,
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,,
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,,
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,,
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,,
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,,
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id
AB,FULLW,AB1,to Bullfrog,0,1,
AB,FULLW,AB2,to Airport,1,2,
STBA,FULLW,STBA,Shuttle,,,
CITY,FULLW,CITY1,,0,,
CITY,FULLW,CITY2,,1,,
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1,
BFC,FULLW,BFC2,to Bullfrog,1,2,
AAMV,WE,AAMV1,to Amargosa Valley,0,,
AAMV,WE,AAMV2,to Airport,1,,
AAMV,WE,AAMV3,to Amargosa Valley,0,,
AAMV,WE,AAMV4,to Airport,1,,
agency_id,agency_url,agency_timezone
DTA,http://google.com,America/Los_Angeles
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date
FULLW,1,1,1,1,1,1,1,20070101,20101231
WE,0,0,0,0,0,1,1,20070101,20101231
service_id,date,exception_type
FULLW,20070604,2
fare_id,price,currency_type,payment_method,transfers,transfer_duration
p,1.25,USD,0,0,
a,5.25,USD,0,0,
fare_id,route_id,origin_id,destination_id,contains_id
p,AB,,,
p,STBA,,,
p,BFC,,,
a,AAMV,,,
trip_id,start_time,end_time,headway_secs
STBA,6:00:00,22:00:00,1800
CITY1,6:00:00,7:59:59,1800
CITY2,6:00:00,7:59:59,1800
CITY1,8:00:00,9:59:59,600
CITY2,8:00:00,9:59:59,600
CITY1,10:00:00,15:59:59,1800
CITY2,10:00:00,15:59:59,1800
CITY1,16:00:00,18:59:59,600
CITY2,16:00:00,18:59:59,600
CITY1,19:00:00,22:00:00,1800
CITY2,19:00:00,22:00:00,1800
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color
AB,DTA,,Airport - Bullfrog,,3
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3
STBA,DTA,,Stagecoach - Airport Shuttle,,3
CITY,DTA,,City,,3
AAMV,DTA,,Airport - Amargosa Valley,,3
trip_id,arrival_time,departure_time,stop_id,stop_sequence
STBA,6:00:00,6:00:00,STAGECOACH,1
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2
CITY1,6:00:00,6:00:00,STAGECOACH,1
CITY1,6:05:00,6:07:00,NANAA,2
CITY1,6:12:00,6:14:00,NADAV,3
CITY1,6:19:00,6:21:00,DADAN,4
CITY1,6:26:00,6:28:00,EMSI,5
CITY2,6:28:00,6:30:00,EMSI,1
CITY2,6:35:00,6:37:00,DADAN,2
CITY2,6:42:00,6:44:00,NADAV,3
CITY2,6:49:00,6:51:00,NANAA,4
CITY2,6:56:00,6:58:00,STAGECOACH,5
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1
AB1,8:10:00,8:15:00,BULLFROG,2
AB2,12:05:00,12:05:00,BULLFROG,1
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2
BFC1,8:20:00,8:20:00,BULLFROG,1
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1
BFC2,12:00:00,12:00:00,BULLFROG,2
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1
AAMV1,9:00:00,9:00:00,AMV,2
AAMV2,10:00:00,10:00:00,AMV,1
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1
AAMV3,14:00:00,14:00:00,AMV,2
AAMV4,15:00:00,15:00:00,AMV,1
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218
AMV,Amargosa Valley (Demo),,36.641496,-116.40094
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id
AB,FULLW,AB1,to Bullfrog,0,1
AB,FULLW,AB2,to Airport,1,2
STBA,FULLW,STBA,Shuttle
CITY,FULLW,CITY1,,0
CITY,FULLW,CITY2,,1
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1
BFC,FULLW,BFC2,to Bullfrog,1,2
AAMV,WE,AAMV1,to Amargosa Valley,0
AAMV,WE,AAMV2,to Airport,1
AAMV,WE,AAMV3,to Amargosa Valley,0
AAMV,WE,AAMV4,to Airport,1
agency_id,agency_name,agency_url,agency_timezone
DTA,Autorité de passage de démonstration,http://google.com,America/Los_Angeles
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date
FULLW,1,1,1,1,1,1,1,20070101,20101231
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color
CITY,DTA,Ō,Bar Circle,Route with ĸool unicode shortname,3,,,
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled
CITY1,6:00:00,6:00:00,STAGECOACH,0,,,,
CITY1,6:12:00,,NADAV,10,,,,
CITY1,6:19:00,6:21:00,DADAN,15,,,,
CITY1,6:26:00,6:28:00,EMSI,20,,,,
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url,stop_code
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,,,1236
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,,,1237
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,,,1238
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,,,
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,,,
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id
CITY,FULLW,CITY1,,0,,
agency_id,agency_name,agency_url,agency_timezone
DTA,Autorité de passage de démonstration,http://google.com,America/Los_Angeles
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date
FULLW,1,1,1,1,1,1,1,20070101,20101231
WE,0,0,0,0,0,1,1,20070101,20101231
service_id,date,exception_type
FULLW,20070604,2
fare_id,price,currency_type,payment_method,transfers,transfer_duration
p,1.25,USD,0,0,
a,5.25,USD,0,0,
fare_id,route_id,origin_id,destination_id,contains_id
p,AB,,,
p,STBA,,,
p,BFC,,,
a,AAMV,,,
trip_id,start_time,end_time,headway_secs
STBA,6:00:00,22:00:00,1800
CITY1,6:00:00,7:59:59,1800
CITY2,6:00:00,7:59:59,1800
CITY1,8:00:00,9:59:59,600
CITY2,8:00:00,9:59:59,600
CITY1,10:00:00,15:59:59,1800
CITY2,10:00:00,15:59:59,1800
CITY1,16:00:00,18:59:59,600
CITY2,16:00:00,18:59:59,600
CITY1,19:00:00,22:00:00,1800
CITY2,19:00:00,22:00:00,1800
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color
AB,DTA,,Airport ⇒ Bullfrog,,3,,,
BFC,DTA,,Bullfrog ⇒ Furnace Creek Resort,,3,,,
STBA,DTA,,Stagecoach ⇒ Airport Shuttle,,3,,,
CITY,DTA,Ō,Bar Circle,Route with ĸool unicode short name,3,,,
AAMV,DTA,,Airport ⇒ Amargosa Valley,,3,,,
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled
STBA,6:00:00,6:00:00,STAGECOACH,1,to airport,1,0,0.212
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,0,0,1.043
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,,
CITY1,6:05:00,6:07:00,NANAA,2,going to nadav,2,3,
CITY1,6:12:00,6:14:00,NADAV,3,,,,
CITY1,6:19:00,6:21:00,DADAN,4,,,,
CITY1,6:26:00,6:28:00,EMSI,5,,,,
CITY2,6:28:00,6:30:00,EMSI,1,,,,
CITY2,6:35:00,6:37:00,DADAN,2,,,,
CITY2,6:42:00,6:44:00,NADAV,3,,,,
CITY2,6:49:00,6:51:00,NANAA,4,,,,
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,,
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,,
AB1,8:10:00,8:15:00,BULLFROG,2,,,,
AB2,,,BULLFROG,1,,,,
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,,
BFC1,8:20:00,8:20:00,BULLFROG,1,,,,
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,,
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,,
BFC2,,,BULLFROG,2,,,,
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,,
AAMV1,9:00:00,9:00:00,AMV,2,,,,
AAMV2,10:00:00,10:00:00,AMV,1,,,,
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,,
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,,
AAMV3,14:00:00,14:00:00,AMV,2,,,,
AAMV4,15:00:00,15:00:00,AMV,1,,,,
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,,
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url
FUR_CREEK_RES,Furnace Creek Resort (Démonstration),,36.425288,-117.133162,,
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,,
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,,
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,,
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,,
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,,
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,,
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,,
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,,
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id
AB,FULLW,AB1,to Bullfrog,0,1,
AB,FULLW,AB2,to Airport,1,2,
STBA,FULLW,STBA,Shuttle,,,
CITY,FULLW,CITY1,Ō,0,,
CITY,FULLW,CITY2,Ō,1,,
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1,
BFC,FULLW,BFC2,to Bullfrog,1,2,
AAMV,WE,AAMV1,to Amargosa Valley,0,,
AAMV,WE,AAMV2,to Airport,1,,
AAMV,WE,AAMV3,to Amargosa Valley,0,,
AAMV,WE,AAMV4,to Airport,1,,
agency_id,agency_name,agency_url,agency_timezone
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date
FULLW,1,1,1,1,1,1,1,20070101,20101231
WE,0,0,0,0,0,1,1,20070101,20101231
service_id,date,exception_type
FULLW,20070604,2
fare_id,price,currency_type,payment_method,transfers,transfer_duration
p,1.25,USD,0,0,
a,5.25,USD,0,0,
fare_id,route_id,origin_id,destination_id,contains_id
p,AB,,,
p,STBA,,,
p,BFC,,,
a,AAMV,,,
trip_id,start_time,end_time,headway_secs
STBA,6:00:00,22:00:00,1800
CITY1,6:00:00,7:59:59,1800
CITY2,6:00:00,7:59:59,1800
CITY1,8:00:00,9:59:59,600
CITY2,8:00:00,9:59:59,600
CITY1,10:00:00,15:59:59,1800
CITY2,10:00:00,15:59:59,1800
CITY1,16:00:00,18:59:59,600
CITY2,16:00:00,18:59:59,600
CITY1,19:00:00,22:00:00,1800
CITY2,19:00:00,22:00:00,1800
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled
STBA,6:00:00,6:00:00,STAGECOACH,1,,,,
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,,,
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,,
CITY1,6:05:00,6:07:00,NANAA,2,,,,
CITY1,6:12:00,6:14:00,NADAV,3,,,,
CITY1,6:19:00,6:21:00,DADAN,4,,,,
CITY1,6:26:00,6:28:00,EMSI,5,,,,
CITY2,6:28:00,6:30:00,EMSI,1,,,,
CITY2,6:35:00,6:37:00,DADAN,2,,,,
CITY2,6:42:00,6:44:00,NADAV,3,,,,
CITY2,6:49:00,6:51:00,NANAA,4,,,,
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,,
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,,
AB1,8:10:00,8:15:00,BULLFROG,2,,,,
AB2,12:05:00,12:05:00,BULLFROG,1,,,,
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,,
BFC1,8:20:00,8:20:00,BULLFROG,1,,,,
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,,
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,,
BFC2,12:00:00,12:00:00,BULLFROG,2,,,,
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,,
AAMV1,9:00:00,9:00:00,AMV,2,,,,
AAMV2,10:00:00,10:00:00,AMV,1,,,,
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,,
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,,
AAMV3,14:00:00,14:00:00,AMV,2,,,,
AAMV4,15:00:00,15:00:00,AMV,1,,,,
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,,
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,,
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,,
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,,
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,,
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,,
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,,
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,,
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,,
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,,
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id
AB,FULLW,AB1,to Bullfrog,0,1,
AB,FULLW,AB2,to Airport,1,2,
STBA,FULLW,STBA,Shuttle,,,
CITY,FULLW,CITY1,,0,,
CITY,FULLW,CITY2,,1,,
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1,
BFC,FULLW,BFC2,to Bullfrog,1,2,
AAMV,WE,AAMV1,to Amargosa Valley,0,,
AAMV,WE,AAMV2,to Airport,1,,
AAMV,WE,AAMV3,to Amargosa Valley,0,,
AAMV,WE,AAMV4,to Airport,1,,
agency_id,agency_name,agency_url,agency_timezone
DTA,Autorité de passage de démonstration,http://google.com,America/Los_Angeles
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date
FULLW,1,1,1,1,1,1,1,20070101,20101231
WE,0,0,0,0,0,1,1,20070101,20101231
service_id,date,exception_type
FULLW,20070604,2
fare_id,price,currency_type,payment_method,transfers,transfer_duration
p,1.25,USD,0,0,
a,5.25,USD,0,0,
fare_id,route_id,origin_id,destination_id,contains_id
p,AB,,,
p,STBA,,,
p,BFC,,,
a,AAMV,,,
trip_id,start_time,end_time,headway_secs
STBA,6:00:00,22:00:00,1800
CITY1,6:00:00,7:59:59,1800
CITY2,6:00:00,7:59:59,1800
CITY1,8:00:00,9:59:59,600
CITY2,8:00:00,9:59:59,600
CITY1,10:00:00,15:59:59,1800
CITY2,10:00:00,15:59:59,1800
CITY1,16:00:00,18:59:59,600
CITY2,16:00:00,18:59:59,600
CITY1,19:00:00,22:00:00,1800
CITY2,19:00:00,22:00:00,1800
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url
AB,DTA,,Airport ⇒ Bullfrog,,3,
BFC,DTA,,Bullfrog ⇒ Furnace Creek Resort,,3,http://google.com
STBA,DTA,,Stagecoach ⇒ Airport Shuttle,,3
CITY,DTA,Ō,Bar Circle,Route with ĸool unicode shortname,3,
AAMV,DTA,,Airport ⇒ Amargosa Valley,,3,
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled
STBA,6:00:00,6:00:00,STAGECOACH,0,to airport,1,0,0.212
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,0,0,1.043
CITY1,6:00:00,6:00:00,STAGECOACH,0,,,,
CITY1,6:05:00,6:07:00,NANAA,5,going to nadav,2,3,
CITY1,6:12:00,6:14:00,NADAV,10,,,,
CITY1,6:19:00,6:21:00,DADAN,15,,,,
CITY1,6:26:00,6:28:00,EMSI,20,,,,
CITY2,6:28:00,6:30:00,EMSI,100,,,,
CITY2,6:35:00,6:37:00,DADAN,200,,,,
CITY2,6:42:00,6:44:00,NADAV,300,,,,
CITY2,6:49:00,6:51:00,NANAA,400,,,,
CITY2,6:56:00,6:58:00,STAGECOACH,500,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon | |
FUR_CREEK_RES,Furnace Creek Resort (Démonstration),,36.425288,-117.13316 | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582 | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797 | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677 | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821 | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472 | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242 | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218 | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094 | |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,Ō,,, | |
CITY,FULLW,CITY2,Ō,,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, | |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3,,, | |
CITY,DTA,,City,,3,,, | |
AAMV,DTA,,Airport - Amargosa Valley,,3,,, |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3 | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3 | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3 | |
CITY,DTA,,City,,3 | |
AAMV,DTA,,Airport - Amargosa Valley,,3 |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,,,, | |
CITY1,6:12:00,6:14:00,NADAV,3,,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,,, | |
CITY2,6:42:00,6:44:00,NADAV,3,,,, | |
CITY2,6:49:00,6:51:00,NANAA,4,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3,,, | |
CITY,DTA,,City,,3,,, | |
AAMV,DTA,,Airport - Amargosa Valley,,3,,, |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,,,, | |
CITY1,6:12:00,6:14:00,NADAV,3,,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,,, | |
CITY2,6:42:00,6:44:00,NADAV,3,,,, | |
CITY2,6:49:00,6:51:00,NANAA,4,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
"service_id","monday","tuesday","wednesday","friday","saturday","sunday","start_date","end_date" | |
"FULLW",1,1,1,1,1,1,20070101,20101231 | |
"WE",0,0,0,0,1,1,20070101,20101231 | |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3 | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3 | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3 | |
CITY,DTA,,City,,3 | |
AAMV,DTA,,Airport - Amargosa Valley,,3 |
trip_id,arrival_time,departure_time,stop_id,stop_sequence | |
STBA,6:00:00,6:00:00,STAGECOACH,1 | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2 | |
CITY1,6:00:00,6:00:00,STAGECOACH,1 | |
CITY1,6:05:00,6:07:00,NANAA,2 | |
CITY1,6:12:00,6:14:00,NADAV,3 | |
CITY1,6:19:00,6:21:00,DADAN,4 | |
CITY1,6:26:00,6:28:00,EMSI,5 | |
CITY2,6:28:00,6:30:00,EMSI,1 | |
CITY2,6:35:00,6:37:00,DADAN,2 | |
CITY2,6:42:00,6:44:00,NADAV,3 | |
CITY2,6:49:00,6:51:00,NANAA,4 | |
CITY2,6:56:00,6:58:00,STAGECOACH,5 | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1 | |
AB1,8:10:00,8:15:00,BULLFROG,2 | |
AB2,12:05:00,12:05:00,BULLFROG,1 | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2 | |
BFC1,8:20:00,8:20:00,BULLFROG,1 | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2 | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1 | |
BFC2,12:00:00,12:00:00,BULLFROG,2 | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1 | |
AAMV1,9:00:00,9:00:00,AMV,2 | |
AAMV2,10:00:00,10:00:00,AMV,1 | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2 | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1 | |
AAMV3,14:00:00,14:00:00,AMV,2 | |
AAMV4,15:00:00,15:00:00,AMV,1 | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2 | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162 | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582 | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797 | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677 | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821 | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472 | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242 | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218 | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094 |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1 | |
AB,FULLW,AB2,to Airport,1,2 | |
STBA,FULLW,STBA,Shuttle | |
CITY,FULLW,CITY1,,0 | |
CITY,FULLW,CITY2,,1 | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1 | |
BFC,FULLW,BFC2,to Bullfrog,1,2 | |
AAMV,WE,AAMV1,to Amargosa Valley,0 | |
AAMV,WE,AAMV2,to Airport,1 | |
AAMV,WE,AAMV3,to Amargosa Valley,0 | |
AAMV,WE,AAMV4,to Airport,1 |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3,,, | |
CITY,DTA,,City,,3,,, | |
AAMV,DTA,,Airport - Amargosa Valley,,3,,, |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,0,,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,1,,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,0,,,, | |
CITY1,6:05:00,6:07:00,NANAA,1,,,, | |
CITY1,6:12:00,6:14:00,NADAV,2,,,, | |
CITY1,6:19:00,6:21:00,DADAN,3,,,, | |
CITY1,6:26:00,6:28:00,EMSI,4,,,, | |
CITY2,6:28:00,6:30:00,EMSI,-2,,,, | |
CITY2,6:35:00,6:37:00,DADAN,1,,,, | |
CITY2,6:42:00,6:44:00,NADAV,2,,,, | |
CITY2,6:49:00,6:51:00,NANAA,3,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,4,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,0,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,1,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,0,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,1,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,0,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,1,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,0,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,1,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,0,,,, | |
AAMV1,9:00:00,9:00:00,AMV,1,,,, | |
AAMV2,10:00:00,10:00:00,AMV,0,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,0,,,, | |
AAMV3,14:00:00,14:00:00,AMV,1,,,, | |
AAMV4,15:00:00,15:00:00,AMV,0,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,1,,,, |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
<?xml version="1.0" encoding="UTF-8"?> | |
<kml xmlns="http://earth.google.com/kml/2.0"> | |
<Document> | |
<name>A test file with one placemark</name> | |
<description></description> | |
<Placemark> | |
<name>Test</name> | |
<description></description> | |
<LineString> | |
<coordinates> | |
-93.238861,44.854240,0.000000 | |
-93.238708,44.853081,0.000000 | |
-93.237923,44.852638,0.000000 | |
</coordinates> | |
</LineString> | |
</Placemark> | |
</Document> | |
</kml> | |
<?xml version="1.0" encoding="UTF-8"?> | |
<kml xmlns="http://earth.google.com/kml/2.0"> | |
<Document> | |
<name>A test file with one placemark</name> | |
<description></description> | |
<Placemark> | |
<name>Stop Name</name> | |
<description></description> | |
<Point> | |
<coordinates>-93.239037,44.854164,0.000000</coordinates> | |
</Point> | |
</Placemark> | |
</Document> | |
</kml> | |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,date,exception_type | |
FULLW,20070604,1 | |
WE,20070605,1 | |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3,,, | |
CITY,DTA,,City,,3,,, | |
AAMV,DTA,,Airport - Amargosa Valley,,3,,, |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,,,, | |
CITY1,6:12:00,6:14:00,NADAV,3,,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,,, | |
CITY2,6:42:00,6:44:00,NADAV,3,,,, | |
CITY2,6:49:00,6:51:00,NANAA,4,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport ⇒ Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog ⇒ Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach ⇒ Airport Shuttle,,3,,, | |
STBB,DTA,,Stagecoach ⇒ Airport Shuttle,,3,,, | |
CITY,DTA,,City,,3,,, | |
AAMV,DTA,,Airport ⇒ Amargosa Valley,,3,,, | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,,,, | |
CITY1,6:12:00,6:14:00,NADAV,3,,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,,, | |
CITY2,6:42:00,6:44:00,NADAV,3,,,, | |
CITY2,6:49:00,6:51:00,NANAA,4,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3,,, | |
CITY,DTA,City,City,,3,,, | |
AAMV,DTA,,Airport - Amargosa Valley,,3,,, | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,,,, | |
CITY1,6:12:00,6:14:00,NADAV,3,,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,,, | |
CITY2,6:42:00,6:44:00,NADAV,3,,,, | |
CITY2,6:49:00,6:51:00,NANAA,4,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3,,, | |
CITY,DTA,,City,,3,,, | |
AAMV,DTA,,Airport - Amargosa Valley,,3,,, |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,,,, | |
CITY1,6:12:00,6:14:00,NADAV,3,,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,,, | |
CITY2,6:40:00,6:41:00,NADAR,3,,,, | |
CITY2,6:42:00,6:44:00,NADAV,4,,,, | |
CITY2,6:49:00,6:51:00,NANAA,5,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,6,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
agency_id,agency_name,agency_url,agency_timezone,agency_phone | |
DTA,Autorité de passage de démonstration,http://google.com,America/Los_Angeles,123 12314 | |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport ⇒ Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog ⇒ Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach ⇒ Airport Shuttle,,3,,, | |
CITY,DTA,Ō,Bar Circle,Route with ĸool unicode shortname,3,,, | |
AAMV,DTA,,Airport ⇒ Amargosa Valley,,3,,, | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,0,to airport,1,0,0.212 | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,0,0,1.043 | |
CITY1,6:00:00,6:00:00,STAGECOACH,0,,,, | |
CITY1,6:05:00,6:07:00,NANAA,5,going to nadav,2,3, | |
CITY1,6:12:00,6:14:00,NADAV,10,,,, | |
CITY1,6:19:00,6:21:00,DADAN,15,,,, | |
CITY1,6:26:00,6:28:00,EMSI,20,,,, | |
CITY2,6:28:00,6:30:00,EMSI,100,,,, | |
CITY2,6:35:00,6:37:00,DADAN,200,,,, | |
CITY2,6:42:00,6:44:00,NADAV,300,,,, | |
CITY2,6:49:00,6:51:00,NANAA,400,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,500,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url,stop_code,location_type,parent_station | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,,,1234,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,,,1235,0,BEATTY_AIRPORT_STATION | |
BEATTY_AIRPORT_STATION,Nye County Airport (Demo),,36.868446,-116.784582,,,1235,1, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,,,,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,,,1236,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,,,1237,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,,,1238,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,,,,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,,,,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,,,,, | |
from_stop_id,to_stop_id,transfer_type,min_transfer_time | |
NADAV,NANAA,3, | |
EMSI,NANAA,2,1200 | |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
not a real zip file | |
agency_id,agency_name,agency_url,agency_timezone,agency_lange | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles,en |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date,leap_day | |
FULLW,1,1,1,1,1,1,1,20070101,20101231, | |
WE,0,0,0,0,0,1,1,20070101,20101231, |
service_id,date,exception_type,leap_day | |
FULLW,20070604,2, |
fare_id,price,currency_type,payment_method,transfers,transfer_time | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,source_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs,superfluous | |
STBA,6:00:00,22:00:00,1800, | |
CITY1,6:00:00,7:59:59,1800, | |
CITY2,6:00:00,7:59:59,1800, | |
CITY1,8:00:00,9:59:59,600, | |
CITY2,8:00:00,9:59:59,600, | |
CITY1,10:00:00,15:59:59,1800, | |
CITY2,10:00:00,15:59:59,1800, | |
CITY1,16:00:00,18:59:59,600, | |
CITY2,16:00:00,18:59:59,600, | |
CITY1,19:00:00,22:00:00,1800, | |
CITY2,19:00:00,22:00:00,1800, |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,Route_Text_Color | |
AB,DTA,,Airport - Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3,,, | |
CITY,DTA,,City,,3,,, | |
AAMV,DTA,,Airport - Amargosa Valley,,3,,, |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_time,shapedisttraveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,,,, | |
CITY1,6:12:00,6:14:00,NADAV,3,,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,,, | |
CITY2,6:42:00,6:44:00,NADAV,3,,,, | |
CITY2,6:49:00,6:51:00,NANAA,4,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_uri | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, |
from_stop_id,to_stop_id,transfer_type,min_transfer_time,to_stop | |
NADAV,NANAA,3,, | |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,sharpe_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3,,, | |
CITY,DTA,,City,,3,,, | |
AAMV,DTA,,Airport - Amargosa Valley,,3,,, |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,,,, | |
CITY1,6:12:00,6:14:00,NADAV,3,,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,,, | |
CITY2,6:42:00,6:44:00,NADAV,3,,,, | |
CITY2,6:49:00,6:51:00,NANAA,4,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
BOGUS,Bogus Stop (Demo),,36.914682,-116.750677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, | |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
Binary files a/origin-src/transitfeed-1.2.5/test/data/utf16/agency.txt and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/test/data/utf16/calendar.txt and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/test/data/utf16/calendar_dates.txt and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/test/data/utf16/fare_attributes.txt and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/test/data/utf16/fare_rules.txt and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/test/data/utf16/frequencies.txt and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/test/data/utf16/routes.txt and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/test/data/utf16/stop_times.txt and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/test/data/utf16/stops.txt and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/test/data/utf16/trips.txt and /dev/null differ
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3,,, | |
CITY,DTA,,City,,3,,, | |
AAMV,DTA,,Airport - Amargosa Valley,,3,,, |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,,,, | |
CITY1,6:12:00,6:14:00,NADAV,3,,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,,, | |
CITY2,6:42:00,6:44:00,NADAV,3,,,, | |
CITY2,6:49:00,6:51:00,NANAA,4,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
#!/usr/bin/python2.5 | |
# Test the examples to make sure they are not broken | |
import os | |
import re | |
import transitfeed | |
import unittest | |
import urllib | |
import util | |
class WikiExample(util.TempDirTestCaseBase): | |
# Download example from wiki and run it | |
def runTest(self): | |
wiki_source = urllib.urlopen( | |
'http://googletransitdatafeed.googlecode.com/svn/wiki/TransitFeed.wiki' | |
).read() | |
m = re.search(r'{{{(.*import transitfeed.*)}}}', wiki_source, re.DOTALL) | |
if not m: | |
raise Exception("Failed to find source code on wiki page") | |
wiki_code = m.group(1) | |
exec wiki_code | |
class shuttle_from_xmlfeed(util.TempDirTestCaseBase): | |
def runTest(self): | |
self.CheckCallWithPath( | |
[self.GetExamplePath('shuttle_from_xmlfeed.py'), | |
'--input', 'file:' + self.GetExamplePath('shuttle_from_xmlfeed.xml'), | |
'--output', 'shuttle-YYYYMMDD.zip', | |
# save the path of the dated output to the file 'outputpath' | |
'--execute', 'echo %(path)s > outputpath']) | |
dated_path = open('outputpath').read().strip() | |
self.assertTrue(re.match(r'shuttle-20\d\d[01]\d[0123]\d.zip$', dated_path)) | |
if not os.path.exists(dated_path): | |
raise Exception('did not create expected file') | |
class table(util.TempDirTestCaseBase): | |
def runTest(self): | |
self.CheckCallWithPath( | |
[self.GetExamplePath('table.py'), | |
'--input', self.GetExamplePath('table.txt'), | |
'--output', 'google_transit.zip']) | |
if not os.path.exists('google_transit.zip'): | |
raise Exception('should have created output') | |
class small_builder(util.TempDirTestCaseBase): | |
def runTest(self): | |
self.CheckCallWithPath( | |
[self.GetExamplePath('small_builder.py'), | |
'--output', 'google_transit.zip']) | |
if not os.path.exists('google_transit.zip'): | |
raise Exception('should have created output') | |
class google_random_queries(util.TempDirTestCaseBase): | |
def testNormalRun(self): | |
self.CheckCallWithPath( | |
[self.GetExamplePath('google_random_queries.py'), | |
'--output', 'queries.html', | |
'--limit', '5', | |
self.GetPath('test', 'data', 'good_feed')]) | |
if not os.path.exists('queries.html'): | |
raise Exception('should have created output') | |
def testInvalidFeedStillWorks(self): | |
self.CheckCallWithPath( | |
[self.GetExamplePath('google_random_queries.py'), | |
'--output', 'queries.html', | |
'--limit', '5', | |
self.GetPath('test', 'data', 'invalid_route_agency')]) | |
if not os.path.exists('queries.html'): | |
raise Exception('should have created output') | |
def testBadArgs(self): | |
self.CheckCallWithPath( | |
[self.GetExamplePath('google_random_queries.py'), | |
'--output', 'queries.html', | |
'--limit', '5'], | |
expected_retcode=2) | |
if os.path.exists('queries.html'): | |
raise Exception('should not have created output') | |
class filter_unused_stops(util.TempDirTestCaseBase): | |
def testNormalRun(self): | |
unused_stop_path = self.GetPath('test', 'data', 'unused_stop') | |
# Make sure loading the input feed fails | |
problem_reporter = transitfeed.ExceptionProblemReporter(raise_warnings=True) | |
try: | |
transitfeed.Loader( | |
unused_stop_path, | |
problems=problem_reporter, extra_validation=True).Load() | |
self.fail('UnusedStop exception expected') | |
except transitfeed.UnusedStop, e: | |
pass | |
(stdout, stderr) = self.CheckCallWithPath( | |
[self.GetExamplePath('filter_unused_stops.py'), | |
'--list_removed', | |
unused_stop_path, 'output.zip']) | |
# Extra stop was listed on stdout | |
self.assertNotEqual(stdout.find('Bogus Stop'), -1) | |
# Make sure unused stop was removed and another stop wasn't | |
schedule = transitfeed.Loader( | |
'output.zip', problems=problem_reporter, extra_validation=True).Load() | |
schedule.GetStop('STAGECOACH') | |
if __name__ == '__main__': | |
unittest.main() | |
Binary files a/origin-src/transitfeed-1.2.5/test/testexamples.pyc and /dev/null differ
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
# Smoke-test the feed validator. Make sure it runs and returns the right | |
# results for a valid feed and for a feed with errors. | |
import datetime | |
import feedvalidator | |
import os.path | |
import re | |
import StringIO | |
import transitfeed | |
import unittest | |
from urllib2 import HTTPError, URLError | |
import urllib2 | |
import util | |
import zipfile | |
class FullTests(util.TempDirTestCaseBase): | |
def testGoodFeed(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '-n', '--latest_version', | |
transitfeed.__version__, self.GetPath('test', 'data', 'good_feed')]) | |
self.assertTrue(re.search(r'feed validated successfully', out)) | |
self.assertFalse(re.search(r'ERROR', out)) | |
htmlout = open('validation-results.html').read() | |
self.assertTrue(re.search(r'feed validated successfully', htmlout)) | |
self.assertFalse(re.search(r'ERROR', htmlout)) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testGoodFeedConsoleOutput(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '-n', '--latest_version', | |
transitfeed.__version__, | |
'--output=CONSOLE', self.GetPath('test', 'data', 'good_feed')]) | |
self.assertTrue(re.search(r'feed validated successfully', out)) | |
self.assertFalse(re.search(r'ERROR', out)) | |
self.assertFalse(os.path.exists('validation-results.html')) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testMissingStops(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '-n', '--latest_version', | |
transitfeed.__version__, | |
self.GetPath('test', 'data', 'missing_stops')], | |
expected_retcode=1) | |
self.assertTrue(re.search(r'ERROR', out)) | |
self.assertFalse(re.search(r'feed validated successfully', out)) | |
htmlout = open('validation-results.html').read() | |
self.assertTrue(re.search(r'Invalid value BEATTY_AIRPORT', htmlout)) | |
self.assertFalse(re.search(r'feed validated successfully', htmlout)) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testMissingStopsConsoleOutput(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '-n', '-o', 'console', | |
'--latest_version', transitfeed.__version__, | |
self.GetPath('test', 'data', 'missing_stops')], | |
expected_retcode=1) | |
self.assertTrue(re.search(r'ERROR', out)) | |
self.assertFalse(re.search(r'feed validated successfully', out)) | |
self.assertTrue(re.search(r'Invalid value BEATTY_AIRPORT', out)) | |
self.assertFalse(os.path.exists('validation-results.html')) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testLimitedErrors(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '-l', '2', '-n', | |
'--latest_version', transitfeed.__version__, | |
self.GetPath('test', 'data', 'missing_stops')], | |
expected_retcode=1) | |
self.assertTrue(re.search(r'ERROR', out)) | |
self.assertFalse(re.search(r'feed validated successfully', out)) | |
htmlout = open('validation-results.html').read() | |
self.assertEquals(2, len(re.findall(r'class="problem">stop_id<', htmlout))) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testBadDateFormat(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '-n', '--latest_version', | |
transitfeed.__version__, | |
self.GetPath('test', 'data', 'bad_date_format')], | |
expected_retcode=1) | |
self.assertTrue(re.search(r'ERROR', out)) | |
self.assertFalse(re.search(r'feed validated successfully', out)) | |
htmlout = open('validation-results.html').read() | |
self.assertTrue(re.search(r'in field <code>start_date', htmlout)) | |
self.assertTrue(re.search(r'in field <code>date', htmlout)) | |
self.assertFalse(re.search(r'feed validated successfully', htmlout)) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testBadUtf8(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '-n', '--latest_version', | |
transitfeed.__version__, self.GetPath('test', 'data', 'bad_utf8')], | |
expected_retcode=1) | |
self.assertTrue(re.search(r'ERROR', out)) | |
self.assertFalse(re.search(r'feed validated successfully', out)) | |
htmlout = open('validation-results.html').read() | |
self.assertTrue(re.search(r'Unicode error', htmlout)) | |
self.assertFalse(re.search(r'feed validated successfully', htmlout)) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testFileNotFound(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '-n', '--latest_version', | |
transitfeed.__version__, 'file-not-found.zip'], | |
expected_retcode=1) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testBadOutputPath(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '-n', '--latest_version', | |
transitfeed.__version__, '-o', 'path/does/not/exist.html', | |
self.GetPath('test', 'data', 'good_feed')], | |
expected_retcode=2) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testCrashHandler(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '-n', '--latest_version', | |
transitfeed.__version__, 'IWantMyvalidation-crash.txt'], | |
expected_retcode=127) | |
self.assertTrue(re.search(r'Yikes', out)) | |
self.assertFalse(re.search(r'feed validated successfully', out)) | |
crashout = open('transitfeedcrash.txt').read() | |
self.assertTrue(re.search(r'For testing the feed validator crash handler', | |
crashout)) | |
def testCheckVersionIsRun(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '-n', '--latest_version', | |
'100.100.100', self.GetPath('test', 'data', 'good_feed')]) | |
self.assertTrue(re.search(r'feed validated successfully', out)) | |
self.assertTrue(re.search(r'A new version 100.100.100', out)) | |
htmlout = open('validation-results.html').read() | |
self.assertTrue(re.search(r'A new version 100.100.100', htmlout)) | |
self.assertFalse(re.search(r'ERROR', htmlout)) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testCheckVersionIsRunConsoleOutput(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '-n', '-o', 'console', | |
'--latest_version=100.100.100', | |
self.GetPath('test', 'data', 'good_feed')]) | |
self.assertTrue(re.search(r'feed validated successfully', out)) | |
self.assertTrue(re.search(r'A new version 100.100.100', out)) | |
self.assertFalse(os.path.exists('validation-results.html')) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testUsage(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '--invalid_opt'], expected_retcode=2) | |
self.assertMatchesRegex(r'[Uu]sage: feedvalidator.py \[options\]', err) | |
self.assertMatchesRegex(r'wiki/FeedValidator', err) | |
self.assertMatchesRegex(r'--output', err) # output includes all usage info | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
self.assertFalse(os.path.exists('validation-results.html')) | |
# Regression tests to ensure that CalendarSummary works properly | |
# even when the feed starts in the future or expires in less than | |
# 60 days | |
# See http://code.google.com/p/googletransitdatafeed/issues/detail?id=204 | |
class CalendarSummaryTestCase(unittest.TestCase): | |
# Test feeds starting in the future | |
def testFutureFeedDoesNotCrashCalendarSummary(self): | |
today = datetime.date.today() | |
start_date = today + datetime.timedelta(days=20) | |
end_date = today + datetime.timedelta(days=80) | |
schedule = transitfeed.Schedule() | |
service_period = schedule.GetDefaultServicePeriod() | |
service_period.SetStartDate(start_date.strftime("%Y%m%d")) | |
service_period.SetEndDate(end_date.strftime("%Y%m%d")) | |
service_period.SetWeekdayService(True) | |
result = feedvalidator.CalendarSummary(schedule) | |
self.assertEquals(0, result['max_trips']) | |
self.assertEquals(0, result['min_trips']) | |
self.assertTrue(re.search("40 service dates", result['max_trips_dates'])) | |
# Test feeds ending in less than 60 days | |
def testShortFeedDoesNotCrashCalendarSummary(self): | |
start_date = datetime.date.today() | |
end_date = start_date + datetime.timedelta(days=15) | |
schedule = transitfeed.Schedule() | |
service_period = schedule.GetDefaultServicePeriod() | |
service_period.SetStartDate(start_date.strftime("%Y%m%d")) | |
service_period.SetEndDate(end_date.strftime("%Y%m%d")) | |
service_period.SetWeekdayService(True) | |
result = feedvalidator.CalendarSummary(schedule) | |
self.assertEquals(0, result['max_trips']) | |
self.assertEquals(0, result['min_trips']) | |
self.assertTrue(re.search("15 service dates", result['max_trips_dates'])) | |
# Test feeds starting in the future *and* ending in less than 60 days | |
def testFutureAndShortFeedDoesNotCrashCalendarSummary(self): | |
today = datetime.date.today() | |
start_date = today + datetime.timedelta(days=2) | |
end_date = today + datetime.timedelta(days=3) | |
schedule = transitfeed.Schedule() | |
service_period = schedule.GetDefaultServicePeriod() | |
service_period.SetStartDate(start_date.strftime("%Y%m%d")) | |
service_period.SetEndDate(end_date.strftime("%Y%m%d")) | |
service_period.SetWeekdayService(True) | |
result = feedvalidator.CalendarSummary(schedule) | |
self.assertEquals(0, result['max_trips']) | |
self.assertEquals(0, result['min_trips']) | |
self.assertTrue(re.search("1 service date", result['max_trips_dates'])) | |
# Test feeds without service days | |
def testFeedWithNoDaysDoesNotCrashCalendarSummary(self): | |
schedule = transitfeed.Schedule() | |
result = feedvalidator.CalendarSummary(schedule) | |
self.assertEquals({}, result) | |
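The `CalendarSummary` counts asserted above come down to walking the active date range and counting the days on which service runs. A minimal sketch of that idea under Python 3, using a hypothetical helper (not the feedvalidator implementation):

```python
import datetime

def count_weekday_service_dates(start, end):
    """Count Mon-Fri dates in the inclusive range [start, end]."""
    days = (end - start).days + 1
    return sum(
        1 for offset in range(days)
        if (start + datetime.timedelta(days=offset)).weekday() < 5)

start = datetime.date(2010, 1, 4)            # a Monday
end = start + datetime.timedelta(days=13)    # two full weeks, Mon..Sun twice
n = count_weekday_service_dates(start, end)  # 10 weekdays
```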
class MockOptions: | |
"""Pretend to be an optparse options object suitable for testing.""" | |
def __init__(self): | |
self.limit_per_type = 5 | |
self.memory_db = True | |
self.check_duplicate_trips = True | |
self.latest_version = transitfeed.__version__ | |
self.output = 'fake-filename.zip' | |
self.manual_entry = False | |
self.service_gap_interval = None | |
class FeedValidatorTestCase(util.TempDirTestCaseBase): | |
def testBadEolContext(self): | |
"""Make sure the filename is included in the report of a bad eol.""" | |
zipfile_mem = StringIO.StringIO(open( | |
self.GetPath('test', 'data', 'good_feed.zip'), 'rb').read()) | |
zip = zipfile.ZipFile(zipfile_mem, 'a') | |
routes_txt = zip.read('routes.txt') | |
# routes_txt_modified is invalid because the first line ends with \r\n. | |
routes_txt_modified = routes_txt.replace('\n', '\r\n', 1) | |
self.assertNotEquals(routes_txt_modified, routes_txt) | |
zip.writestr('routes.txt', routes_txt_modified) | |
zip.close() | |
options = MockOptions() | |
output_file = StringIO.StringIO() | |
feedvalidator.RunValidationOutputToFile(zipfile_mem, options, output_file) | |
self.assertMatchesRegex("routes.txt", output_file.getvalue()) | |
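testBadEolContext patches a feed in place by reading the zip into a memory buffer and rewriting one member. The same pattern under Python 3, where io.BytesIO replaces StringIO (the file contents here are illustrative):

```python
import io
import zipfile

# Build a small zip in memory, then rewrite one member -- the same
# pattern testBadEolContext uses with StringIO under Python 2.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as z:
    z.writestr('routes.txt', 'route_id,route_short_name\nAB,10\n')

with zipfile.ZipFile(buf, 'a') as z:
    routes_txt = z.read('routes.txt').decode('ascii')
    # Introduce a DOS line ending on the first line only.
    modified = routes_txt.replace('\n', '\r\n', 1)
    z.writestr('routes.txt', modified)

with zipfile.ZipFile(buf) as z:
    # The archive now holds two 'routes.txt' entries; read() returns
    # the last one written.
    final = z.read('routes.txt').decode('ascii')
```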
class LimitPerTypeProblemReporterTestCase(unittest.TestCase): | |
def assertProblemsAttribute(self, problem_type, class_name, attribute_name, | |
expected): | |
"""Join each problem's attribute_name value in order and assert it equals expected.""" | |
problem_attribute_list = [] | |
for e in self.problems.ProblemList(problem_type, class_name).problems: | |
problem_attribute_list.append(getattr(e, attribute_name)) | |
self.assertEquals(expected, " ".join(problem_attribute_list)) | |
def testLimitOtherProblems(self): | |
"""The first N of each type should be kept.""" | |
self.problems = feedvalidator.LimitPerTypeProblemReporter(2) | |
self.problems.OtherProblem("e1", type=transitfeed.TYPE_ERROR) | |
self.problems.OtherProblem("w1", type=transitfeed.TYPE_WARNING) | |
self.problems.OtherProblem("e2", type=transitfeed.TYPE_ERROR) | |
self.problems.OtherProblem("e3", type=transitfeed.TYPE_ERROR) | |
self.problems.OtherProblem("w2", type=transitfeed.TYPE_WARNING) | |
self.assertEquals(2, self.problems.WarningCount()) | |
self.assertEquals(3, self.problems.ErrorCount()) | |
# These are BoundedProblemList objects | |
warning_bounded_list = self.problems.ProblemList( | |
transitfeed.TYPE_WARNING, "OtherProblem") | |
error_bounded_list = self.problems.ProblemList( | |
transitfeed.TYPE_ERROR, "OtherProblem") | |
self.assertEquals(2, warning_bounded_list.count) | |
self.assertEquals(3, error_bounded_list.count) | |
self.assertEquals(0, warning_bounded_list.dropped_count) | |
self.assertEquals(1, error_bounded_list.dropped_count) | |
self.assertProblemsAttribute(transitfeed.TYPE_ERROR, "OtherProblem", | |
"description", "e1 e2") | |
self.assertProblemsAttribute(transitfeed.TYPE_WARNING, "OtherProblem", | |
"description", "w1 w2") | |
def testKeepUnsorted(self): | |
"""An imperfect test that checks bisect.insort triggers ExceptionWithContext.__cmp__.""" | |
# If ExceptionWithContext.__cmp__ doesn't trigger TypeError in | |
# bisect.insort then the default comparison of object id will be used. The | |
# id values tend to be given out in order of creation so call | |
# problems._Report with objects in a different order. This test should | |
# break if ExceptionWithContext.__cmp__ is removed or changed to return 0 | |
# or cmp(id(self), id(y)). | |
exceptions = [] | |
for i in range(20): | |
exceptions.append(transitfeed.OtherProblem(description="e%i" % i)) | |
exceptions = exceptions[10:] + exceptions[:10] | |
self.problems = feedvalidator.LimitPerTypeProblemReporter(3) | |
for e in exceptions: | |
self.problems._Report(e) | |
self.assertEquals(0, self.problems.WarningCount()) | |
self.assertEquals(20, self.problems.ErrorCount()) | |
bounded_list = self.problems.ProblemList( | |
transitfeed.TYPE_ERROR, "OtherProblem") | |
self.assertEquals(20, bounded_list.count) | |
self.assertEquals(17, bounded_list.dropped_count) | |
self.assertProblemsAttribute(transitfeed.TYPE_ERROR, "OtherProblem", | |
"description", "e10 e11 e12") | |
def testLimitSortedTooFastTravel(self): | |
"""Sort by decreasing distance, keeping the N greatest.""" | |
self.problems = feedvalidator.LimitPerTypeProblemReporter(3) | |
self.problems.TooFastTravel("t1", "prev stop", "next stop", 11230.4, 5, | |
None) | |
self.problems.TooFastTravel("t2", "prev stop", "next stop", 1120.4, 5, None) | |
self.problems.TooFastTravel("t3", "prev stop", "next stop", 1130.4, 5, None) | |
self.problems.TooFastTravel("t4", "prev stop", "next stop", 1230.4, 5, None) | |
self.assertEquals(0, self.problems.WarningCount()) | |
self.assertEquals(4, self.problems.ErrorCount()) | |
self.assertProblemsAttribute(transitfeed.TYPE_ERROR, "TooFastTravel", | |
"trip_id", "t1 t4 t3") | |
def testLimitSortedStopTooFarFromParentStation(self): | |
"""Sort by decreasing distance, keeping the N greatest.""" | |
self.problems = feedvalidator.LimitPerTypeProblemReporter(3) | |
for i, distance in enumerate((1000, 3002.0, 1500, 2434.1, 5023.21)): | |
self.problems.StopTooFarFromParentStation( | |
"s%d" % i, "S %d" % i, "p%d" % i, "P %d" % i, distance) | |
self.assertEquals(5, self.problems.WarningCount()) | |
self.assertEquals(0, self.problems.ErrorCount()) | |
self.assertProblemsAttribute(transitfeed.TYPE_WARNING, | |
"StopTooFarFromParentStation", "stop_id", "s4 s1 s3") | |
def testLimitSortedStopsTooClose(self): | |
"""Sort by increasing distance, keeping the N closest.""" | |
self.problems = feedvalidator.LimitPerTypeProblemReporter(3) | |
for i, distance in enumerate((4.0, 3.0, 2.5, 2.2, 1.0, 0.0)): | |
self.problems.StopsTooClose( | |
"Sa %d" % i, "sa%d" % i, "Sb %d" % i, "sb%d" % i, distance) | |
self.assertEquals(6, self.problems.WarningCount()) | |
self.assertEquals(0, self.problems.ErrorCount()) | |
self.assertProblemsAttribute(transitfeed.TYPE_WARNING, | |
"StopsTooClose", "stop_id_a", "sa5 sa4 sa3") | |
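LimitPerTypeProblemReporter keeps only the top N problems of each type while still counting everything it drops. A rough sketch of one way to get that behavior with bisect.insort; this BoundedList is hypothetical, not the feedvalidator BoundedProblemList:

```python
import bisect

class BoundedList(object):
    """Keep the entries with the N smallest sort keys; count all adds."""
    def __init__(self, limit):
        self.limit = limit
        self.count = 0
        self._keyed = []  # kept sorted by key via insort

    def add(self, key, value):
        self.count += 1
        bisect.insort(self._keyed, (key, value))
        del self._keyed[self.limit:]  # drop everything past the limit

    def dropped_count(self):
        return self.count - len(self._keyed)

    def values(self):
        return [v for _, v in self._keyed]

b = BoundedList(3)
for trip_id, dist in [('t1', 11230.4), ('t2', 1120.4),
                      ('t3', 1130.4), ('t4', 1230.4)]:
    b.add(-dist, trip_id)  # negate the key to keep the N greatest distances
```

With the distances from testLimitSortedTooFastTravel, the three kept values come out largest-first: t1, t4, t3.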
class CheckVersionTestCase(util.TempDirTestCaseBase): | |
def setUp(self): | |
self.mock = MockURLOpen() | |
def tearDown(self): | |
self.mock = None | |
feedvalidator.urlopen = urllib2.urlopen | |
def testAssignedDifferentVersion(self): | |
problems = feedvalidator.CheckVersion('100.100.100') | |
self.assertTrue(re.search(r'A new version 100.100.100', problems)) | |
def testAssignedSameVersion(self): | |
problems = feedvalidator.CheckVersion(transitfeed.__version__) | |
self.assertEquals(problems, None) | |
def testGetCorrectReturns(self): | |
feedvalidator.urlopen = self.mock.mockedConnectSuccess | |
problems = feedvalidator.CheckVersion() | |
self.assertTrue(re.search(r'A new version 100.0.1', problems)) | |
def testPageNotFound(self): | |
feedvalidator.urlopen = self.mock.mockedPageNotFound | |
problems = feedvalidator.CheckVersion() | |
self.assertTrue(re.search(r'The server couldn\'t', problems)) | |
self.assertTrue(re.search(r'Error code: 404', problems)) | |
def testConnectionTimeOut(self): | |
feedvalidator.urlopen = self.mock.mockedConnectionTimeOut | |
problems = feedvalidator.CheckVersion() | |
self.assertTrue(re.search(r'We failed to reach', problems)) | |
self.assertTrue(re.search(r'Reason: Connection timed', problems)) | |
def testGetAddrInfoFailed(self): | |
feedvalidator.urlopen = self.mock.mockedGetAddrInfoFailed | |
problems = feedvalidator.CheckVersion() | |
self.assertTrue(re.search(r'We failed to reach', problems)) | |
self.assertTrue(re.search(r'Reason: Getaddrinfo failed', problems)) | |
def testEmptyIsReturned(self): | |
feedvalidator.urlopen = self.mock.mockedEmptyIsReturned | |
problems = feedvalidator.CheckVersion() | |
self.assertTrue(re.search(r'We had trouble parsing', problems)) | |
class MockURLOpen: | |
"""Pretend to be a urllib2.urlopen suitable for testing.""" | |
def mockedConnectSuccess(self, request): | |
return StringIO.StringIO('<li><a href="transitfeed-1.0.0/">transitfeed-' | |
'1.0.0/</a></li><li><a href=transitfeed-100.0.1/>' | |
'transitfeed-100.0.1/</a></li>') | |
def mockedPageNotFound(self, request): | |
raise HTTPError(request.get_full_url(), 404, 'Not Found', | |
request.header_items(), None) | |
def mockedConnectionTimeOut(self, request): | |
raise URLError('Connection timed out') | |
def mockedGetAddrInfoFailed(self, request): | |
raise URLError('Getaddrinfo failed') | |
def mockedEmptyIsReturned(self, request): | |
return StringIO.StringIO() | |
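CheckVersionTestCase works by swapping feedvalidator.urlopen for a mock that returns a canned page or raises. A small Python 3 sketch of the scrape-and-compare idea behind it; fake_urlopen and latest_version are illustrative stand-ins, not the feedvalidator code:

```python
import io
import re

def fake_urlopen(request):
    """Stand-in for urllib2.urlopen: a canned release-directory listing."""
    return io.BytesIO(b'<li><a href="transitfeed-100.0.1/">'
                      b'transitfeed-100.0.1/</a></li>')

def latest_version(urlopen):
    """Scrape version strings from the listing and pick the newest."""
    html = urlopen(None).read().decode('ascii')
    versions = re.findall(r'transitfeed-(\d+\.\d+\.\d+)/', html)
    return max(versions, key=lambda v: tuple(map(int, v.split('.'))))

v = latest_version(fake_urlopen)  # the injected opener is trivially swappable
```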
if __name__ == '__main__': | |
unittest.main() | |
Binary files a/origin-src/transitfeed-1.2.5/test/testfeedvalidator.pyc and /dev/null differ
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
# Unit tests for the kmlparser module. | |
import kmlparser | |
import os.path | |
import shutil | |
from StringIO import StringIO | |
import transitfeed | |
import unittest | |
import util | |
class TestStopsParsing(util.GetPathTestCase): | |
def testSingleStop(self): | |
feed = transitfeed.Schedule() | |
kmlFile = self.GetTestDataPath('one_stop.kml') | |
kmlparser.KmlParser().Parse(kmlFile, feed) | |
stops = feed.GetStopList() | |
self.assertEqual(1, len(stops)) | |
stop = stops[0] | |
self.assertEqual(u'Stop Name', stop.stop_name) | |
self.assertAlmostEqual(-93.239037, stop.stop_lon) | |
self.assertAlmostEqual(44.854164, stop.stop_lat) | |
write_output = StringIO() | |
feed.WriteGoogleTransitFeed(write_output) | |
def testSingleShape(self): | |
feed = transitfeed.Schedule() | |
kmlFile = self.GetTestDataPath('one_line.kml') | |
kmlparser.KmlParser().Parse(kmlFile, feed) | |
shapes = feed.GetShapeList() | |
self.assertEqual(1, len(shapes)) | |
shape = shapes[0] | |
self.assertEqual(3, len(shape.points)) | |
self.assertAlmostEqual(44.854240, shape.points[0][0]) | |
self.assertAlmostEqual(-93.238861, shape.points[0][1]) | |
self.assertAlmostEqual(44.853081, shape.points[1][0]) | |
self.assertAlmostEqual(-93.238708, shape.points[1][1]) | |
self.assertAlmostEqual(44.852638, shape.points[2][0]) | |
self.assertAlmostEqual(-93.237923, shape.points[2][1]) | |
write_output = StringIO() | |
feed.WriteGoogleTransitFeed(write_output) | |
class FullTests(util.TempDirTestCaseBase): | |
def testNormalRun(self): | |
shutil.copyfile(self.GetTestDataPath('one_stop.kml'), 'one_stop.kml') | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('kmlparser.py'), 'one_stop.kml', 'one_stop.zip']) | |
# There will be lots of problems, but ignore them | |
problems = util.RecordingProblemReporter(self) | |
schedule = transitfeed.Loader('one_stop.zip', problems=problems).Load() | |
self.assertEquals(len(schedule.GetStopList()), 1) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testCommandLineError(self): | |
(out, err) = self.CheckCallWithPath([self.GetPath('kmlparser.py')], | |
expected_retcode=2) | |
self.assertMatchesRegex(r'did not provide .+ arguments', err) | |
self.assertMatchesRegex(r'[Uu]sage:', err) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testCrashHandler(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('kmlparser.py'), 'IWantMyCrash', 'output.zip'], | |
stdin_str="\n", expected_retcode=127) | |
self.assertMatchesRegex(r'Yikes', out) | |
crashout = open('transitfeedcrash.txt').read() | |
self.assertMatchesRegex(r'For testCrashHandler', crashout) | |
if __name__ == '__main__': | |
unittest.main() | |
Binary files a/origin-src/transitfeed-1.2.5/test/testkmlparser.pyc and /dev/null differ
#!/usr/bin/python2.4 | |
# | |
# Copyright 2008 Google Inc. All Rights Reserved. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""Unit tests for the kmlwriter module.""" | |
import os | |
import StringIO | |
import tempfile | |
import unittest | |
import kmlparser | |
import kmlwriter | |
import transitfeed | |
import util | |
try: | |
import xml.etree.ElementTree as ET # python 2.5 | |
except ImportError, e: | |
import elementtree.ElementTree as ET # older pythons | |
def DataPath(path): | |
"""Return the path to a given file in the test data directory. | |
Args: | |
path: The path relative to the test data directory. | |
Returns: | |
The absolute path. | |
""" | |
here = os.path.dirname(__file__) | |
return os.path.join(here, 'data', path) | |
def _ElementToString(root): | |
"""Returns the node as an XML string. | |
Args: | |
root: The ElementTree.Element instance. | |
Returns: | |
The XML string. | |
""" | |
output = StringIO.StringIO() | |
ET.ElementTree(root).write(output, 'utf-8') | |
return output.getvalue() | |
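_ElementToString serializes a subtree by writing it through an in-memory buffer. The equivalent under Python 3, with io.BytesIO in place of StringIO:

```python
import io
import xml.etree.ElementTree as ET

# Build a tiny subtree like the ones the KMLWriter tests compare against.
parent = ET.Element('parent')
folder = ET.SubElement(parent, 'Folder')
ET.SubElement(folder, 'name').text = 'folder_name'

# Serialize just the subtree; with encoding 'utf-8' no XML declaration
# is emitted, so the output matches the bare-element strings in the tests.
output = io.BytesIO()
ET.ElementTree(folder).write(output, 'utf-8')
xml_bytes = output.getvalue()
```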
class TestKMLStopsRoundtrip(unittest.TestCase): | |
"""Checks that all stops survive a round trip to KML and back.""" | |
def setUp(self): | |
fd, self.kml_output = tempfile.mkstemp('kml') | |
os.close(fd) | |
def tearDown(self): | |
os.remove(self.kml_output) | |
def runTest(self): | |
gtfs_input = DataPath('good_feed.zip') | |
feed1 = transitfeed.Loader(gtfs_input).Load() | |
kmlwriter.KMLWriter().Write(feed1, self.kml_output) | |
feed2 = transitfeed.Schedule() | |
kmlparser.KmlParser().Parse(self.kml_output, feed2) | |
stop_name_mapper = lambda x: x.stop_name | |
stops1 = set(map(stop_name_mapper, feed1.GetStopList())) | |
stops2 = set(map(stop_name_mapper, feed2.GetStopList())) | |
self.assertEqual(stops1, stops2) | |
class TestKMLGeneratorMethods(unittest.TestCase): | |
"""Tests the various KML element creation methods of KMLWriter.""" | |
def setUp(self): | |
self.kmlwriter = kmlwriter.KMLWriter() | |
self.parent = ET.Element('parent') | |
def testCreateFolderVisible(self): | |
element = self.kmlwriter._CreateFolder(self.parent, 'folder_name') | |
self.assertEqual(_ElementToString(element), | |
'<Folder><name>folder_name</name></Folder>') | |
def testCreateFolderNotVisible(self): | |
element = self.kmlwriter._CreateFolder(self.parent, 'folder_name', | |
visible=False) | |
self.assertEqual(_ElementToString(element), | |
'<Folder><name>folder_name</name>' | |
'<visibility>0</visibility></Folder>') | |
def testCreateFolderWithDescription(self): | |
element = self.kmlwriter._CreateFolder(self.parent, 'folder_name', | |
description='folder_desc') | |
self.assertEqual(_ElementToString(element), | |
'<Folder><name>folder_name</name>' | |
'<description>folder_desc</description></Folder>') | |
def testCreatePlacemark(self): | |
element = self.kmlwriter._CreatePlacemark(self.parent, 'abcdef') | |
self.assertEqual(_ElementToString(element), | |
'<Placemark><name>abcdef</name></Placemark>') | |
def testCreatePlacemarkWithStyle(self): | |
element = self.kmlwriter._CreatePlacemark(self.parent, 'abcdef', | |
style_id='ghijkl') | |
self.assertEqual(_ElementToString(element), | |
'<Placemark><name>abcdef</name>' | |
'<styleUrl>#ghijkl</styleUrl></Placemark>') | |
def testCreatePlacemarkNotVisible(self): | |
element = self.kmlwriter._CreatePlacemark(self.parent, 'abcdef', | |
visible=False) | |
self.assertEqual(_ElementToString(element), | |
'<Placemark><name>abcdef</name>' | |
'<visibility>0</visibility></Placemark>') | |
def testCreatePlacemarkWithDescription(self): | |
element = self.kmlwriter._CreatePlacemark(self.parent, 'abcdef', | |
description='ghijkl') | |
self.assertEqual(_ElementToString(element), | |
'<Placemark><name>abcdef</name>' | |
'<description>ghijkl</description></Placemark>') | |
def testCreateLineString(self): | |
coord_list = [(2.0, 1.0), (4.0, 3.0), (6.0, 5.0)] | |
element = self.kmlwriter._CreateLineString(self.parent, coord_list) | |
self.assertEqual(_ElementToString(element), | |
'<LineString><tessellate>1</tessellate>' | |
'<coordinates>%f,%f %f,%f %f,%f</coordinates>' | |
'</LineString>' % (2.0, 1.0, 4.0, 3.0, 6.0, 5.0)) | |
def testCreateLineStringWithAltitude(self): | |
coord_list = [(2.0, 1.0, 10), (4.0, 3.0, 20), (6.0, 5.0, 30.0)] | |
element = self.kmlwriter._CreateLineString(self.parent, coord_list) | |
self.assertEqual(_ElementToString(element), | |
'<LineString><tessellate>1</tessellate>' | |
'<altitudeMode>absolute</altitudeMode>' | |
'<coordinates>%f,%f,%f %f,%f,%f %f,%f,%f</coordinates>' | |
'</LineString>' % | |
(2.0, 1.0, 10.0, 4.0, 3.0, 20.0, 6.0, 5.0, 30.0)) | |
def testCreateLineStringForShape(self): | |
shape = transitfeed.Shape('shape') | |
shape.AddPoint(1.0, 1.0) | |
shape.AddPoint(2.0, 4.0) | |
shape.AddPoint(3.0, 9.0) | |
element = self.kmlwriter._CreateLineStringForShape(self.parent, shape) | |
self.assertEqual(_ElementToString(element), | |
'<LineString><tessellate>1</tessellate>' | |
'<coordinates>%f,%f %f,%f %f,%f</coordinates>' | |
'</LineString>' % (1.0, 1.0, 4.0, 2.0, 9.0, 3.0)) | |
class TestRouteKML(unittest.TestCase): | |
"""Tests the routes folder KML generation methods of KMLWriter.""" | |
def setUp(self): | |
self.feed = transitfeed.Loader(DataPath('flatten_feed')).Load() | |
self.kmlwriter = kmlwriter.KMLWriter() | |
self.parent = ET.Element('parent') | |
def testCreateRoutePatternsFolderNoPatterns(self): | |
folder = self.kmlwriter._CreateRoutePatternsFolder( | |
self.parent, self.feed.GetRoute('route_7')) | |
self.assert_(folder is None) | |
def testCreateRoutePatternsFolderOnePattern(self): | |
folder = self.kmlwriter._CreateRoutePatternsFolder( | |
self.parent, self.feed.GetRoute('route_1')) | |
placemarks = folder.findall('Placemark') | |
self.assertEquals(len(placemarks), 1) | |
def testCreateRoutePatternsFolderTwoPatterns(self): | |
folder = self.kmlwriter._CreateRoutePatternsFolder( | |
self.parent, self.feed.GetRoute('route_3')) | |
placemarks = folder.findall('Placemark') | |
self.assertEquals(len(placemarks), 2) | |
def testCreateRoutePatternFolderTwoEqualPatterns(self): | |
folder = self.kmlwriter._CreateRoutePatternsFolder( | |
self.parent, self.feed.GetRoute('route_4')) | |
placemarks = folder.findall('Placemark') | |
self.assertEquals(len(placemarks), 1) | |
def testCreateRouteShapesFolderOneTripOneShape(self): | |
folder = self.kmlwriter._CreateRouteShapesFolder( | |
self.feed, self.parent, self.feed.GetRoute('route_1')) | |
self.assertEqual(len(folder.findall('Placemark')), 1) | |
def testCreateRouteShapesFolderTwoTripsTwoShapes(self): | |
folder = self.kmlwriter._CreateRouteShapesFolder( | |
self.feed, self.parent, self.feed.GetRoute('route_2')) | |
self.assertEqual(len(folder.findall('Placemark')), 2) | |
def testCreateRouteShapesFolderTwoTripsOneShape(self): | |
folder = self.kmlwriter._CreateRouteShapesFolder( | |
self.feed, self.parent, self.feed.GetRoute('route_3')) | |
self.assertEqual(len(folder.findall('Placemark')), 1) | |
def testCreateRouteShapesFolderTwoTripsNoShapes(self): | |
folder = self.kmlwriter._CreateRouteShapesFolder( | |
self.feed, self.parent, self.feed.GetRoute('route_4')) | |
self.assert_(folder is None) | |
def assertRouteFolderContainsTrips(self, tripids, folder): | |
"""Assert that the route folder contains exactly the trips in tripids.""" | |
actual_trip_ids = set() | |
for placemark in folder.findall('Placemark'): | |
actual_trip_ids.add(placemark.find('name').text) | |
self.assertEquals(set(tripids), actual_trip_ids) | |
def testCreateTripsFolderForRouteTwoTrips(self): | |
route = self.feed.GetRoute('route_2') | |
folder = self.kmlwriter._CreateRouteTripsFolder(self.parent, route) | |
self.assertRouteFolderContainsTrips(['route_2_1', 'route_2_2'], folder) | |
def testCreateTripsFolderForRouteDateFilterNone(self): | |
self.kmlwriter.date_filter = None | |
route = self.feed.GetRoute('route_8') | |
folder = self.kmlwriter._CreateRouteTripsFolder(self.parent, route) | |
self.assertRouteFolderContainsTrips(['route_8_1', 'route_8_2'], folder) | |
def testCreateTripsFolderForRouteDateFilterSet(self): | |
self.kmlwriter.date_filter = '20070604' | |
route = self.feed.GetRoute('route_8') | |
folder = self.kmlwriter._CreateRouteTripsFolder(self.parent, route) | |
self.assertRouteFolderContainsTrips(['route_8_2'], folder) | |
def _GetTripPlacemark(self, route_folder, trip_name): | |
for trip_placemark in route_folder.findall('Placemark'): | |
if trip_placemark.find('name').text == trip_name: | |
return trip_placemark | |
def testCreateRouteTripsFolderAltitude0(self): | |
self.kmlwriter.altitude_per_sec = 0.0 | |
folder = self.kmlwriter._CreateRouteTripsFolder( | |
self.parent, self.feed.GetRoute('route_4')) | |
trip_placemark = self._GetTripPlacemark(folder, 'route_4_1') | |
self.assertEqual(_ElementToString(trip_placemark.find('LineString')), | |
'<LineString><tessellate>1</tessellate>' | |
'<coordinates>-117.133162,36.425288 ' | |
'-116.784582,36.868446 ' | |
'-116.817970,36.881080</coordinates></LineString>') | |
def testCreateRouteTripsFolderAltitude1(self): | |
self.kmlwriter.altitude_per_sec = 0.5 | |
folder = self.kmlwriter._CreateRouteTripsFolder( | |
self.parent, self.feed.GetRoute('route_4')) | |
trip_placemark = self._GetTripPlacemark(folder, 'route_4_1') | |
self.assertEqual(_ElementToString(trip_placemark.find('LineString')), | |
'<LineString><tessellate>1</tessellate>' | |
'<altitudeMode>absolute</altitudeMode>' | |
'<coordinates>-117.133162,36.425288,3600.000000 ' | |
'-116.784582,36.868446,5400.000000 ' | |
'-116.817970,36.881080,7200.000000</coordinates>' | |
'</LineString>') | |
def testCreateRouteTripsFolderNoTrips(self): | |
folder = self.kmlwriter._CreateRouteTripsFolder( | |
self.parent, self.feed.GetRoute('route_7')) | |
self.assert_(folder is None) | |
def testCreateRoutesFolderNoRoutes(self): | |
schedule = transitfeed.Schedule() | |
folder = self.kmlwriter._CreateRoutesFolder(schedule, self.parent) | |
self.assert_(folder is None) | |
def testCreateRoutesFolderNoRoutesWithRouteType(self): | |
folder = self.kmlwriter._CreateRoutesFolder(self.feed, self.parent, 999) | |
self.assert_(folder is None) | |
def _TestCreateRoutesFolder(self, show_trips): | |
self.kmlwriter.show_trips = show_trips | |
folder = self.kmlwriter._CreateRoutesFolder(self.feed, self.parent) | |
self.assertEquals(folder.tag, 'Folder') | |
styles = self.parent.findall('Style') | |
self.assertEquals(len(styles), len(self.feed.GetRouteList())) | |
route_folders = folder.findall('Folder') | |
self.assertEquals(len(route_folders), len(self.feed.GetRouteList())) | |
def testCreateRoutesFolder(self): | |
self._TestCreateRoutesFolder(False) | |
def testCreateRoutesFolderShowTrips(self): | |
self._TestCreateRoutesFolder(True) | |
def testCreateRoutesFolderWithRouteType(self): | |
folder = self.kmlwriter._CreateRoutesFolder(self.feed, self.parent, 1) | |
route_folders = folder.findall('Folder') | |
self.assertEquals(len(route_folders), 1) | |
class TestShapesKML(unittest.TestCase): | |
"""Tests the shapes folder KML generation methods of KMLWriter.""" | |
def setUp(self): | |
self.flatten_feed = transitfeed.Loader(DataPath('flatten_feed')).Load() | |
self.good_feed = transitfeed.Loader(DataPath('good_feed.zip')).Load() | |
self.kmlwriter = kmlwriter.KMLWriter() | |
self.parent = ET.Element('parent') | |
def testCreateShapesFolderNoShapes(self): | |
folder = self.kmlwriter._CreateShapesFolder(self.good_feed, self.parent) | |
self.assertEquals(folder, None) | |
def testCreateShapesFolder(self): | |
folder = self.kmlwriter._CreateShapesFolder(self.flatten_feed, self.parent) | |
placemarks = folder.findall('Placemark') | |
self.assertEquals(len(placemarks), 3) | |
for placemark in placemarks: | |
self.assert_(placemark.find('LineString') is not None) | |
class TestStopsKML(unittest.TestCase): | |
"""Tests the stops folder KML generation methods of KMLWriter.""" | |
def setUp(self): | |
self.feed = transitfeed.Loader(DataPath('flatten_feed')).Load() | |
self.kmlwriter = kmlwriter.KMLWriter() | |
self.parent = ET.Element('parent') | |
def testCreateStopsFolderNoStops(self): | |
schedule = transitfeed.Schedule() | |
folder = self.kmlwriter._CreateStopsFolder(schedule, self.parent) | |
self.assert_(folder is None) | |
def testCreateStopsFolder(self): | |
folder = self.kmlwriter._CreateStopsFolder(self.feed, self.parent) | |
placemarks = folder.findall('Placemark') | |
self.assertEquals(len(placemarks), len(self.feed.GetStopList())) | |
class TestShapePointsKML(unittest.TestCase): | |
"""Tests the shape points folder KML generation methods of KMLWriter.""" | |
def setUp(self): | |
self.flatten_feed = transitfeed.Loader(DataPath('flatten_feed')).Load() | |
self.kmlwriter = kmlwriter.KMLWriter() | |
self.kmlwriter.shape_points = True | |
self.parent = ET.Element('parent') | |
def testCreateShapePointsFolder(self): | |
folder = self.kmlwriter._CreateShapesFolder(self.flatten_feed, self.parent) | |
shape_point_folder = folder.find('Folder') | |
self.assertEquals(shape_point_folder.find('name').text, | |
'shape_1 Shape Points') | |
placemarks = shape_point_folder.findall('Placemark') | |
self.assertEquals(len(placemarks), 4) | |
for placemark in placemarks: | |
self.assert_(placemark.find('Point') is not None) | |
class FullTests(util.TempDirTestCaseBase): | |
def testNormalRun(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('kmlwriter.py'), self.GetTestDataPath('good_feed.zip'), | |
'good_feed.kml']) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
self.assertTrue(os.path.exists('good_feed.kml')) | |
def testCommandLineError(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('kmlwriter.py'), '--bad_flag'], expected_retcode=2) | |
self.assertMatchesRegex(r'no such option.*--bad_flag', err) | |
self.assertMatchesRegex(r'--showtrips', err) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testCrashHandler(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('kmlwriter.py'), 'IWantMyCrash', 'output.zip'], | |
stdin_str="\n", expected_retcode=127) | |
self.assertMatchesRegex(r'Yikes', out) | |
crashout = open('transitfeedcrash.txt').read() | |
self.assertMatchesRegex(r'For testCrashHandler', crashout) | |
if __name__ == '__main__': | |
unittest.main() | |
Binary files a/origin-src/transitfeed-1.2.5/test/testkmlwriter.pyc and /dev/null differ
#!/usr/bin/python2.4 | |
# | |
# Copyright 2007 Google Inc. All Rights Reserved. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""Unit tests for the merge module.""" | |
__author__ = 'timothy.stranex@gmail.com (Timothy Stranex)' | |
import merge | |
import os.path | |
import re | |
import StringIO | |
import transitfeed | |
import unittest | |
import util | |
import zipfile | |
def CheckAttribs(a, b, attrs, assertEquals): | |
"""Checks that the objects a and b have the same values for the attributes | |
given in attrs. These checks are done using the given assert function. | |
Args: | |
a: The first object. | |
b: The second object. | |
attrs: The list of attribute names (strings). | |
assertEquals: The assertEquals method from unittest.TestCase. | |
""" | |
# For Stop objects (and maybe others in the future) Validate converts some | |
# attributes from string to native type | |
a.Validate() | |
b.Validate() | |
for k in attrs: | |
assertEquals(getattr(a, k), getattr(b, k)) | |
def CreateAgency(): | |
"""Create a transitfeed.Agency object for testing. | |
Returns: | |
The agency object. | |
""" | |
return transitfeed.Agency(name='agency', | |
url='http://agency', | |
timezone='Africa/Johannesburg', | |
id='agency') | |
class TestingProblemReporter(merge.MergeProblemReporterBase): | |
"""This problem reporter keeps track of all problems. | |
Attributes: | |
problems: The list of problems reported. | |
""" | |
def __init__(self): | |
merge.MergeProblemReporterBase.__init__(self) | |
self.problems = [] | |
self._expect_classes = [] | |
def _Report(self, problem): | |
problem.FormatProblem() # Shouldn't crash | |
self.problems.append(problem) | |
for problem_class in self._expect_classes: | |
if isinstance(problem, problem_class): | |
return | |
raise problem | |
def CheckReported(self, problem_class): | |
"""Checks if a problem of the given class was reported. | |
Args: | |
problem_class: The problem class, a class inheriting from | |
MergeProblemWithContext. | |
Returns: | |
True if a matching problem was reported. | |
""" | |
for problem in self.problems: | |
if isinstance(problem, problem_class): | |
return True | |
return False | |
def ExpectProblemClass(self, problem_class): | |
"""Suppresses exception raising for problems inheriting from this class. | |
Args: | |
problem_class: The problem class, a class inheriting from | |
MergeProblemWithContext. | |
""" | |
self._expect_classes.append(problem_class) | |
def assertExpectedProblemsReported(self, testcase): | |
"""Asserts that every expected problem class has been reported. | |
The assertions are done using the assert_ method of the testcase. | |
Args: | |
testcase: The unittest.TestCase instance. | |
""" | |
for problem_class in self._expect_classes: | |
testcase.assert_(self.CheckReported(problem_class)) | |
class TestApproximateDistanceBetweenPoints(unittest.TestCase): | |
def _assertWithinEpsilon(self, a, b, epsilon=1.0): | |
"""Asserts that a and b are equal to within an epsilon. | |
Args: | |
a: The first value (float). | |
b: The second value (float). | |
epsilon: The epsilon value (float). | |
""" | |
self.assert_(abs(a-b) < epsilon) | |
def testDegenerate(self): | |
p = (30.0, 30.0) | |
self._assertWithinEpsilon( | |
merge.ApproximateDistanceBetweenPoints(p, p), 0.0) | |
def testFar(self): | |
p1 = (30.0, 30.0) | |
p2 = (40.0, 40.0) | |
self.assert_(merge.ApproximateDistanceBetweenPoints(p1, p2) > 1e4) | |
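The assertions above only need merge.ApproximateDistanceBetweenPoints to be roughly right: zero for identical points, well over 10 km for points ten degrees apart. One plausible approximation is the equirectangular formula; this is an assumption about the shape of the function, not the merge.py source:

```python
import math

EARTH_RADIUS_M = 6371000.0

def approx_distance(p1, p2):
    """Equirectangular approximation between (lat, lon) points, in metres."""
    lat1, lon1 = map(math.radians, p1)
    lat2, lon2 = map(math.radians, p2)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2.0)
    y = lat2 - lat1
    return EARTH_RADIUS_M * math.hypot(x, y)

d_zero = approx_distance((30.0, 30.0), (30.0, 30.0))  # degenerate case
d_far = approx_distance((30.0, 30.0), (40.0, 40.0))   # comfortably > 1e4 m
```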


class TestSchemedMerge(unittest.TestCase):

  class TestEntity:
    """A mock entity (like Route or Stop) for testing."""

    def __init__(self, x, y, z):
      self.x = x
      self.y = y
      self.z = z

  def setUp(self):
    a_schedule = transitfeed.Schedule()
    b_schedule = transitfeed.Schedule()
    merged_schedule = transitfeed.Schedule()
    self.fm = merge.FeedMerger(a_schedule, b_schedule,
                               merged_schedule,
                               TestingProblemReporter())
    self.ds = merge.DataSetMerger(self.fm)

    def Migrate(ent, sched, newid):
      """A migration function for the mock entity."""
      return self.TestEntity(ent.x, ent.y, ent.z)
    self.ds._Migrate = Migrate

  def testMergeIdentical(self):
    class TestAttrib:
      """An object that is equal to everything."""

      def __cmp__(self, b):
        return 0

    x = 99
    a = TestAttrib()
    b = TestAttrib()
    self.assert_(self.ds._MergeIdentical(x, x) == x)
    self.assert_(self.ds._MergeIdentical(a, b) is b)
    self.assertRaises(merge.MergeError, self.ds._MergeIdentical, 1, 2)

  def testMergeIdenticalCaseInsensitive(self):
    self.assert_(self.ds._MergeIdenticalCaseInsensitive('abc', 'ABC') == 'ABC')
    self.assert_(self.ds._MergeIdenticalCaseInsensitive('abc', 'AbC') == 'AbC')
    self.assertRaises(merge.MergeError,
                      self.ds._MergeIdenticalCaseInsensitive, 'abc', 'bcd')
    self.assertRaises(merge.MergeError,
                      self.ds._MergeIdenticalCaseInsensitive, 'abc', 'ABCD')

  def testMergeOptional(self):
    x = 99
    y = 100
    self.assertEquals(self.ds._MergeOptional(None, None), None)
    self.assertEquals(self.ds._MergeOptional(None, x), x)
    self.assertEquals(self.ds._MergeOptional(x, None), x)
    self.assertEquals(self.ds._MergeOptional(x, x), x)
    self.assertRaises(merge.MergeError, self.ds._MergeOptional, x, y)

  def testMergeSameAgency(self):
    kwargs = {'name': 'xxx',
              'agency_url': 'http://www.example.com',
              'agency_timezone': 'Europe/Zurich'}
    id1 = 'agency1'
    id2 = 'agency2'
    id3 = 'agency3'
    id4 = 'agency4'
    id5 = 'agency5'
    a = self.fm.a_schedule.NewDefaultAgency(id=id1, **kwargs)
    b = self.fm.b_schedule.NewDefaultAgency(id=id2, **kwargs)
    c = transitfeed.Agency(id=id3, **kwargs)
    self.fm.merged_schedule.AddAgencyObject(c)
    self.fm.Register(a, b, c)
    d = transitfeed.Agency(id=id4, **kwargs)
    e = transitfeed.Agency(id=id5, **kwargs)
    self.fm.a_schedule.AddAgencyObject(d)
    self.fm.merged_schedule.AddAgencyObject(e)
    self.fm.Register(d, None, e)

    self.assertEquals(self.ds._MergeSameAgency(id1, id2), id3)
    self.assertEquals(self.ds._MergeSameAgency(None, None), id3)
    self.assertEquals(self.ds._MergeSameAgency(id1, None), id3)
    self.assertEquals(self.ds._MergeSameAgency(None, id2), id3)

    # id1 is not a valid agency_id in the new schedule, so it cannot be merged.
    self.assertRaises(KeyError, self.ds._MergeSameAgency, id1, id1)

    # This fails because d (id4) and b (id2) don't map to the same agency
    # in the merged schedule.
    self.assertRaises(merge.MergeError, self.ds._MergeSameAgency, id4, id2)

  def testSchemedMerge_Success(self):

    def Merger(a, b):
      return a + b

    scheme = {'x': Merger, 'y': Merger, 'z': Merger}
    a = self.TestEntity(1, 2, 3)
    b = self.TestEntity(4, 5, 6)
    c = self.ds._SchemedMerge(scheme, a, b)
    self.assertEquals(c.x, 5)
    self.assertEquals(c.y, 7)
    self.assertEquals(c.z, 9)

  def testSchemedMerge_Failure(self):

    def Merger(a, b):
      raise merge.MergeError()

    scheme = {'x': Merger, 'y': Merger, 'z': Merger}
    a = self.TestEntity(1, 2, 3)
    b = self.TestEntity(4, 5, 6)
    self.assertRaises(merge.MergeError, self.ds._SchemedMerge,
                      scheme, a, b)

  def testSchemedMerge_NoNewId(self):
    class TestDataSetMerger(merge.DataSetMerger):
      def _Migrate(self, entity, schedule, newid):
        self.newid = newid
        return entity

    dataset_merger = TestDataSetMerger(self.fm)
    a = self.TestEntity(1, 2, 3)
    b = self.TestEntity(4, 5, 6)
    dataset_merger._SchemedMerge({}, a, b)
    self.assertEquals(dataset_merger.newid, False)

  def testSchemedMerge_ErrorTextContainsAttributeNameAndReason(self):
    reason = 'my reason'
    attribute_name = 'long_attribute_name'

    def GoodMerger(a, b):
      return a + b

    def BadMerger(a, b):
      raise merge.MergeError(reason)

    a = self.TestEntity(1, 2, 3)
    setattr(a, attribute_name, 1)
    b = self.TestEntity(4, 5, 6)
    setattr(b, attribute_name, 2)
    scheme = {'x': GoodMerger, 'y': GoodMerger, 'z': GoodMerger,
              attribute_name: BadMerger}

    try:
      self.ds._SchemedMerge(scheme, a, b)
      self.fail('Expecting MergeError')
    except merge.MergeError, merge_error:
      error_text = str(merge_error)
      self.assert_(reason in error_text)
      self.assert_(attribute_name in error_text)


class TestFeedMerger(unittest.TestCase):

  class Merger:
    def __init__(self, test, n, should_fail=False):
      self.test = test
      self.n = n
      self.should_fail = should_fail

    def MergeDataSets(self):
      self.test.called.append(self.n)
      return not self.should_fail

  def setUp(self):
    a_schedule = transitfeed.Schedule()
    b_schedule = transitfeed.Schedule()
    merged_schedule = transitfeed.Schedule()
    self.fm = merge.FeedMerger(a_schedule, b_schedule,
                               merged_schedule,
                               TestingProblemReporter())
    self.called = []

  def testDefaultProblemReporter(self):
    feed_merger = merge.FeedMerger(self.fm.a_schedule,
                                   self.fm.b_schedule,
                                   self.fm.merged_schedule,
                                   None)
    self.assert_(isinstance(feed_merger.problem_reporter,
                            merge.MergeProblemReporterBase))

  def testSequence(self):
    for i in range(10):
      self.fm.AddMerger(TestFeedMerger.Merger(self, i))
    self.assert_(self.fm.MergeSchedules())
    self.assertEquals(self.called, range(10))

  def testStopsAfterError(self):
    for i in range(10):
      self.fm.AddMerger(TestFeedMerger.Merger(self, i, i == 5))
    self.assert_(not self.fm.MergeSchedules())
    self.assertEquals(self.called, range(6))

  def testRegister(self):
    self.fm.Register(1, 2, 3)
    self.assertEquals(self.fm.a_merge_map, {1: 3})
    self.assertEquals(self.fm.b_merge_map, {2: 3})

  def testRegisterNone(self):
    self.fm.Register(None, 2, 3)
    self.assertEquals(self.fm.a_merge_map, {})
    self.assertEquals(self.fm.b_merge_map, {2: 3})

  def testGenerateId_Prefix(self):
    x = 'test'
    a = self.fm.GenerateId(x)
    b = self.fm.GenerateId(x)
    self.assertNotEqual(a, b)
    self.assert_(a.startswith(x))
    self.assert_(b.startswith(x))

  def testGenerateId_None(self):
    a = self.fm.GenerateId(None)
    b = self.fm.GenerateId(None)
    self.assertNotEqual(a, b)

  def testGenerateId_InitialCounter(self):
    a_schedule = transitfeed.Schedule()
    b_schedule = transitfeed.Schedule()
    merged_schedule = transitfeed.Schedule()

    for i in range(10):
      agency = transitfeed.Agency(name='agency', url='http://agency',
                                  timezone='Africa/Johannesburg',
                                  id='agency_%d' % i)
      if i % 2:
        b_schedule.AddAgencyObject(agency)
      else:
        a_schedule.AddAgencyObject(agency)

    feed_merger = merge.FeedMerger(a_schedule, b_schedule,
                                   merged_schedule,
                                   TestingProblemReporter())

    # Check that the postfix number of any generated id is greater than
    # the postfix numbers of all ids in the old and new schedules.
    gen_id = feed_merger.GenerateId(None)
    postfix_num = int(gen_id[gen_id.rfind('_') + 1:])
    self.assert_(postfix_num >= 10)

  def testGetMerger(self):
    class MergerA(merge.DataSetMerger):
      pass

    class MergerB(merge.DataSetMerger):
      pass

    a = MergerA(self.fm)
    b = MergerB(self.fm)
    self.fm.AddMerger(a)
    self.fm.AddMerger(b)
    self.assertEquals(self.fm.GetMerger(MergerA), a)
    self.assertEquals(self.fm.GetMerger(MergerB), b)

  def testGetMerger_Error(self):
    self.assertRaises(LookupError, self.fm.GetMerger, TestFeedMerger.Merger)


class TestServicePeriodMerger(unittest.TestCase):

  def setUp(self):
    a_schedule = transitfeed.Schedule()
    b_schedule = transitfeed.Schedule()
    merged_schedule = transitfeed.Schedule()
    self.fm = merge.FeedMerger(a_schedule, b_schedule, merged_schedule,
                               TestingProblemReporter())
    self.spm = merge.ServicePeriodMerger(self.fm)
    self.fm.AddMerger(self.spm)

  def _AddTwoPeriods(self, start1, end1, start2, end2):
    sp1fields = ['test1', start1, end1] + ['1'] * 7
    self.sp1 = transitfeed.ServicePeriod(field_list=sp1fields)
    sp2fields = ['test2', start2, end2] + ['1'] * 7
    self.sp2 = transitfeed.ServicePeriod(field_list=sp2fields)
    self.fm.a_schedule.AddServicePeriodObject(self.sp1)
    self.fm.b_schedule.AddServicePeriodObject(self.sp2)

  def testCheckDisjoint_True(self):
    self._AddTwoPeriods('20071213', '20071231',
                        '20080101', '20080201')
    self.assert_(self.spm.CheckDisjointCalendars())

  def testCheckDisjoint_False1(self):
    self._AddTwoPeriods('20071213', '20080201',
                        '20080101', '20080301')
    self.assert_(not self.spm.CheckDisjointCalendars())

  def testCheckDisjoint_False2(self):
    self._AddTwoPeriods('20080101', '20090101',
                        '20070101', '20080601')
    self.assert_(not self.spm.CheckDisjointCalendars())

  def testCheckDisjoint_False3(self):
    self._AddTwoPeriods('20080301', '20080901',
                        '20080101', '20090101')
    self.assert_(not self.spm.CheckDisjointCalendars())

  def testDisjoinCalendars(self):
    self._AddTwoPeriods('20071213', '20080201',
                        '20080101', '20080301')
    self.spm.DisjoinCalendars('20080101')
    self.assertEquals(self.sp1.start_date, '20071213')
    self.assertEquals(self.sp1.end_date, '20071231')
    self.assertEquals(self.sp2.start_date, '20080101')
    self.assertEquals(self.sp2.end_date, '20080301')

  def testDisjoinCalendars_Dates(self):
    self._AddTwoPeriods('20071213', '20080201',
                        '20080101', '20080301')
    self.sp1.SetDateHasService('20071201')
    self.sp1.SetDateHasService('20081231')
    self.sp2.SetDateHasService('20071201')
    self.sp2.SetDateHasService('20081231')
    self.spm.DisjoinCalendars('20080101')

    self.assert_('20071201' in self.sp1.date_exceptions.keys())
    self.assert_('20081231' not in self.sp1.date_exceptions.keys())
    self.assert_('20071201' not in self.sp2.date_exceptions.keys())
    self.assert_('20081231' in self.sp2.date_exceptions.keys())

  def testUnion(self):
    self._AddTwoPeriods('20071213', '20071231',
                        '20080101', '20080201')
    self.fm.problem_reporter.ExpectProblemClass(merge.MergeNotImplemented)
    self.fm.MergeSchedules()
    merged_schedule = self.fm.GetMergedSchedule()
    self.assertEquals(len(merged_schedule.GetServicePeriodList()), 2)

    # Make fields a copy of the service period attributes except service_id.
    fields = list(transitfeed.ServicePeriod._DAYS_OF_WEEK)
    fields += ['start_date', 'end_date']

    # Now check that these attributes are preserved in the merge.
    CheckAttribs(self.sp1, self.fm.a_merge_map[self.sp1], fields,
                 self.assertEquals)
    CheckAttribs(self.sp2, self.fm.b_merge_map[self.sp2], fields,
                 self.assertEquals)
    self.fm.problem_reporter.assertExpectedProblemsReported(self)

  def testMerge_RequiredButNotDisjoint(self):
    self._AddTwoPeriods('20070101', '20090101',
                        '20080101', '20100101')
    self.fm.problem_reporter.ExpectProblemClass(merge.CalendarsNotDisjoint)
    self.assertEquals(self.spm.MergeDataSets(), False)
    self.fm.problem_reporter.assertExpectedProblemsReported(self)

  def testMerge_NotRequiredAndNotDisjoint(self):
    self._AddTwoPeriods('20070101', '20090101',
                        '20080101', '20100101')
    self.spm.require_disjoint_calendars = False
    self.fm.problem_reporter.ExpectProblemClass(merge.MergeNotImplemented)
    self.fm.MergeSchedules()
    self.fm.problem_reporter.assertExpectedProblemsReported(self)


class TestAgencyMerger(unittest.TestCase):

  def setUp(self):
    a_schedule = transitfeed.Schedule()
    b_schedule = transitfeed.Schedule()
    merged_schedule = transitfeed.Schedule()
    self.fm = merge.FeedMerger(a_schedule, b_schedule, merged_schedule,
                               TestingProblemReporter())
    self.am = merge.AgencyMerger(self.fm)
    self.fm.AddMerger(self.am)

    self.a1 = transitfeed.Agency(id='a1', agency_name='a1',
                                 agency_url='http://www.a1.com',
                                 agency_timezone='Africa/Johannesburg',
                                 agency_phone='123 456 78 90')
    self.a2 = transitfeed.Agency(id='a2', agency_name='a1',
                                 agency_url='http://www.a1.com',
                                 agency_timezone='Africa/Johannesburg',
                                 agency_phone='789 65 43 21')

  def testMerge(self):
    self.a2.agency_id = self.a1.agency_id
    self.fm.a_schedule.AddAgencyObject(self.a1)
    self.fm.b_schedule.AddAgencyObject(self.a2)
    self.fm.MergeSchedules()

    merged_schedule = self.fm.GetMergedSchedule()
    self.assertEquals(len(merged_schedule.GetAgencyList()), 1)
    self.assertEquals(merged_schedule.GetAgencyList()[0],
                      self.fm.a_merge_map[self.a1])
    self.assertEquals(self.fm.a_merge_map[self.a1],
                      self.fm.b_merge_map[self.a2])

    # Differing values such as agency_phone should be taken from self.a2.
    self.assertEquals(merged_schedule.GetAgencyList()[0], self.a2)
    self.assertEquals(self.am.GetMergeStats(), (1, 0, 0))

    # Check that the id is preserved.
    self.assertEquals(self.fm.a_merge_map[self.a1].agency_id,
                      self.a1.agency_id)

  def testNoMerge_DifferentId(self):
    self.fm.a_schedule.AddAgencyObject(self.a1)
    self.fm.b_schedule.AddAgencyObject(self.a2)
    self.fm.MergeSchedules()

    merged_schedule = self.fm.GetMergedSchedule()
    self.assertEquals(len(merged_schedule.GetAgencyList()), 2)
    self.assert_(self.fm.a_merge_map[self.a1] in
                 merged_schedule.GetAgencyList())
    self.assert_(self.fm.b_merge_map[self.a2] in
                 merged_schedule.GetAgencyList())
    self.assertEquals(self.a1, self.fm.a_merge_map[self.a1])
    self.assertEquals(self.a2, self.fm.b_merge_map[self.a2])
    self.assertEquals(self.am.GetMergeStats(), (0, 1, 1))

    # Check that the ids are preserved.
    self.assertEquals(self.fm.a_merge_map[self.a1].agency_id,
                      self.a1.agency_id)
    self.assertEquals(self.fm.b_merge_map[self.a2].agency_id,
                      self.a2.agency_id)

  def testNoMerge_SameId(self):
    # Force a1.agency_id to be unicode to make sure it is correctly encoded
    # to utf-8 before concatenating to the agency_name containing non-ascii
    # characters.
    self.a1.agency_id = unicode(self.a1.agency_id)
    self.a2.agency_id = str(self.a1.agency_id)
    self.a2.agency_name = 'different \xc3\xa9'
    self.fm.a_schedule.AddAgencyObject(self.a1)
    self.fm.b_schedule.AddAgencyObject(self.a2)
    self.fm.problem_reporter.ExpectProblemClass(merge.SameIdButNotMerged)
    self.fm.MergeSchedules()

    merged_schedule = self.fm.GetMergedSchedule()
    self.assertEquals(len(merged_schedule.GetAgencyList()), 2)
    self.assertEquals(self.am.GetMergeStats(), (0, 1, 1))

    # Check that the merged entities have different ids.
    self.assertNotEqual(self.fm.a_merge_map[self.a1].agency_id,
                        self.fm.b_merge_map[self.a2].agency_id)

    self.fm.problem_reporter.assertExpectedProblemsReported(self)


class TestStopMerger(unittest.TestCase):

  def setUp(self):
    a_schedule = transitfeed.Schedule()
    b_schedule = transitfeed.Schedule()
    merged_schedule = transitfeed.Schedule()
    self.fm = merge.FeedMerger(a_schedule, b_schedule, merged_schedule,
                               TestingProblemReporter())
    self.sm = merge.StopMerger(self.fm)
    self.fm.AddMerger(self.sm)

    self.s1 = transitfeed.Stop(30.0, 30.0, u'Andr\202', 's1')
    self.s1.stop_desc = 'stop 1'
    self.s1.stop_url = 'http://stop/1'
    self.s1.zone_id = 'zone1'

    self.s2 = transitfeed.Stop(30.0, 30.0, 's2', 's2')
    self.s2.stop_desc = 'stop 2'
    self.s2.stop_url = 'http://stop/2'
    self.s2.zone_id = 'zone1'

  def testMerge(self):
    self.s2.stop_id = self.s1.stop_id
    self.s2.stop_name = self.s1.stop_name
    self.s1.location_type = 1
    self.s2.location_type = 1
    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.b_schedule.AddStopObject(self.s2)
    self.fm.MergeSchedules()

    merged_schedule = self.fm.GetMergedSchedule()
    self.assertEquals(len(merged_schedule.GetStopList()), 1)
    self.assertEquals(merged_schedule.GetStopList()[0],
                      self.fm.a_merge_map[self.s1])
    self.assertEquals(self.fm.a_merge_map[self.s1],
                      self.fm.b_merge_map[self.s2])
    self.assertEquals(self.sm.GetMergeStats(), (1, 0, 0))

    # Check that the remaining attributes are taken from the new stop.
    fields = ['stop_name', 'stop_lat', 'stop_lon', 'stop_desc', 'stop_url',
              'location_type']
    CheckAttribs(self.fm.a_merge_map[self.s1], self.s2, fields,
                 self.assertEquals)

    # Check that the id is preserved.
    self.assertEquals(self.fm.a_merge_map[self.s1].stop_id, self.s1.stop_id)

    # Check that the zone_id is preserved.
    self.assertEquals(self.fm.a_merge_map[self.s1].zone_id, self.s1.zone_id)

  def testNoMerge_DifferentId(self):
    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.b_schedule.AddStopObject(self.s2)
    self.fm.MergeSchedules()

    merged_schedule = self.fm.GetMergedSchedule()
    self.assertEquals(len(merged_schedule.GetStopList()), 2)
    self.assert_(self.fm.a_merge_map[self.s1] in merged_schedule.GetStopList())
    self.assert_(self.fm.b_merge_map[self.s2] in merged_schedule.GetStopList())
    self.assertEquals(self.sm.GetMergeStats(), (0, 1, 1))

  def testNoMerge_DifferentName(self):
    self.s2.stop_id = self.s1.stop_id
    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.b_schedule.AddStopObject(self.s2)
    self.fm.problem_reporter.ExpectProblemClass(merge.SameIdButNotMerged)
    self.fm.MergeSchedules()

    merged_schedule = self.fm.GetMergedSchedule()
    self.assertEquals(len(merged_schedule.GetStopList()), 2)
    self.assert_(self.fm.a_merge_map[self.s1] in merged_schedule.GetStopList())
    self.assert_(self.fm.b_merge_map[self.s2] in merged_schedule.GetStopList())
    self.assertEquals(self.sm.GetMergeStats(), (0, 1, 1))

  def testNoMerge_FarApart(self):
    self.s2.stop_id = self.s1.stop_id
    self.s2.stop_name = self.s1.stop_name
    self.s2.stop_lat = 40.0
    self.s2.stop_lon = 40.0
    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.b_schedule.AddStopObject(self.s2)
    self.fm.problem_reporter.ExpectProblemClass(merge.SameIdButNotMerged)
    self.fm.MergeSchedules()

    merged_schedule = self.fm.GetMergedSchedule()
    self.assertEquals(len(merged_schedule.GetStopList()), 2)
    self.assert_(self.fm.a_merge_map[self.s1] in merged_schedule.GetStopList())
    self.assert_(self.fm.b_merge_map[self.s2] in merged_schedule.GetStopList())
    self.assertEquals(self.sm.GetMergeStats(), (0, 1, 1))

    # Check that the merged ids are different.
    self.assertNotEquals(self.fm.a_merge_map[self.s1].stop_id,
                         self.fm.b_merge_map[self.s2].stop_id)

    self.fm.problem_reporter.assertExpectedProblemsReported(self)

  def testMerge_CaseInsensitive(self):
    self.s2.stop_id = self.s1.stop_id
    self.s2.stop_name = self.s1.stop_name.upper()
    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.b_schedule.AddStopObject(self.s2)
    self.fm.MergeSchedules()

    merged_schedule = self.fm.GetMergedSchedule()
    self.assertEquals(len(merged_schedule.GetStopList()), 1)
    self.assertEquals(self.sm.GetMergeStats(), (1, 0, 0))

  def testNoMerge_ZoneId(self):
    self.s2.zone_id = 'zone2'
    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.b_schedule.AddStopObject(self.s2)
    self.fm.MergeSchedules()

    merged_schedule = self.fm.GetMergedSchedule()
    self.assertEquals(len(merged_schedule.GetStopList()), 2)
    self.assert_(self.s1.zone_id in self.fm.a_zone_map)
    self.assert_(self.s2.zone_id in self.fm.b_zone_map)
    self.assertEquals(self.sm.GetMergeStats(), (0, 1, 1))

    # Check that the zones are still different.
    self.assertNotEqual(self.fm.a_merge_map[self.s1].zone_id,
                        self.fm.b_merge_map[self.s2].zone_id)

  def testZoneId_SamePreservation(self):
    # Checks that if the zone_ids of some stops are the same before the
    # merge, they are still the same after.
    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.a_schedule.AddStopObject(self.s2)
    self.fm.MergeSchedules()
    self.assertEquals(self.fm.a_merge_map[self.s1].zone_id,
                      self.fm.a_merge_map[self.s2].zone_id)

  def testZoneId_DifferentSchedules(self):
    # zone_ids may be the same in different schedules, but unless the stops
    # are merged they should map to different zone_ids.
    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.b_schedule.AddStopObject(self.s2)
    self.fm.MergeSchedules()
    self.assertNotEquals(self.fm.a_merge_map[self.s1].zone_id,
                         self.fm.b_merge_map[self.s2].zone_id)

  def testZoneId_MergePreservation(self):
    # Check that if two stops are merged, the zone mapping is used for all
    # other stops too.
    self.s2.stop_id = self.s1.stop_id
    self.s2.stop_name = self.s1.stop_name
    s3 = transitfeed.Stop(field_dict=self.s1)
    s3.stop_id = 'different'

    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.a_schedule.AddStopObject(s3)
    self.fm.b_schedule.AddStopObject(self.s2)
    self.fm.MergeSchedules()

    self.assertEquals(self.fm.a_merge_map[self.s1].zone_id,
                      self.fm.a_merge_map[s3].zone_id)
    self.assertEquals(self.fm.a_merge_map[s3].zone_id,
                      self.fm.b_merge_map[self.s2].zone_id)

  def testMergeStationType(self):
    self.s2.stop_id = self.s1.stop_id
    self.s2.stop_name = self.s1.stop_name
    self.s1.location_type = 1
    self.s2.location_type = 1
    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.b_schedule.AddStopObject(self.s2)
    self.fm.MergeSchedules()
    merged_stops = self.fm.GetMergedSchedule().GetStopList()
    self.assertEquals(len(merged_stops), 1)
    self.assertEquals(merged_stops[0].location_type, 1)

  def testMergeDifferentTypes(self):
    self.s2.stop_id = self.s1.stop_id
    self.s2.stop_name = self.s1.stop_name
    self.s2.location_type = 1
    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.b_schedule.AddStopObject(self.s2)
    try:
      self.fm.MergeSchedules()
      self.fail("Expecting MergeError")
    except merge.SameIdButNotMerged, merge_error:
      self.assertTrue(("%s" % merge_error).find("location_type") != -1)

  def AssertS1ParentIsS2(self):
    """Assert that the merged s1 has parent s2."""
    new_s1 = self.fm.GetMergedObject(self.s1)
    new_s2 = self.fm.GetMergedObject(self.s2)
    self.assertEquals(new_s1.parent_station, new_s2.stop_id)
    self.assertEquals(new_s2.parent_station, None)
    self.assertEquals(new_s1.location_type, 0)
    self.assertEquals(new_s2.location_type, 1)

  def testMergeMaintainParentRelationship(self):
    self.s2.location_type = 1
    self.s1.parent_station = self.s2.stop_id
    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.a_schedule.AddStopObject(self.s2)
    self.fm.MergeSchedules()
    self.AssertS1ParentIsS2()

  def testParentRelationshipAfterMerge(self):
    s3 = transitfeed.Stop(field_dict=self.s1)
    s3.parent_station = self.s2.stop_id
    self.s2.location_type = 1
    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.b_schedule.AddStopObject(self.s2)
    self.fm.b_schedule.AddStopObject(s3)
    self.fm.MergeSchedules()
    self.AssertS1ParentIsS2()

  def testParentRelationshipWithNewParentid(self):
    self.s2.location_type = 1
    self.s1.parent_station = self.s2.stop_id

    # s3 will have a stop_id conflict with self.s2, so the parent_id of the
    # migrated self.s1 will need to be updated.
    s3 = transitfeed.Stop(field_dict=self.s2)
    s3.stop_lat = 45
    self.fm.a_schedule.AddStopObject(s3)
    self.fm.b_schedule.AddStopObject(self.s1)
    self.fm.b_schedule.AddStopObject(self.s2)
    self.fm.problem_reporter.ExpectProblemClass(merge.SameIdButNotMerged)
    self.fm.MergeSchedules()

    self.assertNotEquals(self.fm.GetMergedObject(s3).stop_id,
                         self.fm.GetMergedObject(self.s2).stop_id)

    # Check that s2 got a new id.
    self.assertNotEquals(self.s2.stop_id,
                         self.fm.GetMergedObject(self.s2).stop_id)
    self.AssertS1ParentIsS2()

  def _AddStopsApart(self):
    """Adds two stops to the schedules and returns the distance between them.

    Returns:
      The distance between the stops in metres, a value greater than zero.
    """
    self.s2.stop_id = self.s1.stop_id
    self.s2.stop_name = self.s1.stop_name
    self.s2.stop_lat += 1.0e-3
    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.b_schedule.AddStopObject(self.s2)
    return transitfeed.ApproximateDistanceBetweenStops(self.s1, self.s2)

  def testSetLargestStopDistanceSmall(self):
    largest_stop_distance = self._AddStopsApart() * 0.5
    self.sm.SetLargestStopDistance(largest_stop_distance)
    self.assertEquals(self.sm.largest_stop_distance, largest_stop_distance)
    self.fm.problem_reporter.ExpectProblemClass(merge.SameIdButNotMerged)
    self.fm.MergeSchedules()
    self.assertEquals(len(self.fm.GetMergedSchedule().GetStopList()), 2)
    self.fm.problem_reporter.assertExpectedProblemsReported(self)

  def testSetLargestStopDistanceLarge(self):
    largest_stop_distance = self._AddStopsApart() * 2.0
    self.sm.SetLargestStopDistance(largest_stop_distance)
    self.assertEquals(self.sm.largest_stop_distance, largest_stop_distance)
    self.fm.MergeSchedules()
    self.assertEquals(len(self.fm.GetMergedSchedule().GetStopList()), 1)


class TestRouteMerger(unittest.TestCase):

  fields = ['route_short_name', 'route_long_name', 'route_type',
            'route_url']

  def setUp(self):
    a_schedule = transitfeed.Schedule()
    b_schedule = transitfeed.Schedule()
    merged_schedule = transitfeed.Schedule()
    self.fm = merge.FeedMerger(a_schedule, b_schedule, merged_schedule,
                               TestingProblemReporter())
    self.fm.AddMerger(merge.AgencyMerger(self.fm))
    self.rm = merge.RouteMerger(self.fm)
    self.fm.AddMerger(self.rm)

    akwargs = {'id': 'a1',
               'agency_name': 'a1',
               'agency_url': 'http://www.a1.com',
               'agency_timezone': 'Europe/Zurich'}
    self.a1 = transitfeed.Agency(**akwargs)
    self.a2 = transitfeed.Agency(**akwargs)
    a_schedule.AddAgencyObject(self.a1)
    b_schedule.AddAgencyObject(self.a2)

    rkwargs = {'route_id': 'r1',
               'agency_id': 'a1',
               'short_name': 'r1',
               'long_name': 'r1r1',
               'route_type': '0'}
    self.r1 = transitfeed.Route(**rkwargs)
    self.r2 = transitfeed.Route(**rkwargs)
    self.r2.route_url = 'http://route/2'

  def testMerge(self):
    self.fm.a_schedule.AddRouteObject(self.r1)
    self.fm.b_schedule.AddRouteObject(self.r2)
    self.fm.MergeSchedules()

    merged_schedule = self.fm.GetMergedSchedule()
    self.assertEquals(len(merged_schedule.GetRouteList()), 1)
    r = merged_schedule.GetRouteList()[0]
    self.assert_(self.fm.a_merge_map[self.r1] is r)
    self.assert_(self.fm.b_merge_map[self.r2] is r)
    CheckAttribs(self.r2, r, self.fields, self.assertEquals)
    self.assertEquals(r.agency_id, self.fm.a_merge_map[self.a1].agency_id)
    self.assertEquals(self.rm.GetMergeStats(), (1, 0, 0))

    # Check that the id is preserved.
    self.assertEquals(self.fm.a_merge_map[self.r1].route_id, self.r1.route_id)

  def testMergeNoAgency(self):
    self.r1.agency_id = None
    self.r2.agency_id = None
    self.fm.a_schedule.AddRouteObject(self.r1)
    self.fm.b_schedule.AddRouteObject(self.r2)
    self.fm.MergeSchedules()

    merged_schedule = self.fm.GetMergedSchedule()
    self.assertEquals(len(merged_schedule.GetRouteList()), 1)
    r = merged_schedule.GetRouteList()[0]
    CheckAttribs(self.r2, r, self.fields, self.assertEquals)

    # The merged route has a copy of the default agency_id.
    self.assertEquals(r.agency_id, self.a1.agency_id)
    self.assertEquals(self.rm.GetMergeStats(), (1, 0, 0))

    # Check that the id is preserved.
    self.assertEquals(self.fm.a_merge_map[self.r1].route_id, self.r1.route_id)

  def testMigrateNoAgency(self):
    self.r1.agency_id = None
    self.fm.a_schedule.AddRouteObject(self.r1)
    self.fm.MergeSchedules()

    merged_schedule = self.fm.GetMergedSchedule()
    self.assertEquals(len(merged_schedule.GetRouteList()), 1)
    r = merged_schedule.GetRouteList()[0]
    CheckAttribs(self.r1, r, self.fields, self.assertEquals)

    # The migrated route has a copy of the default agency_id.
    self.assertEquals(r.agency_id, self.a1.agency_id)

  def testNoMerge_DifferentId(self):
    self.r2.route_id = 'r2'
    self.fm.a_schedule.AddRouteObject(self.r1)
    self.fm.b_schedule.AddRouteObject(self.r2)
    self.fm.MergeSchedules()
    self.assertEquals(len(self.fm.GetMergedSchedule().GetRouteList()), 2)
    self.assertEquals(self.rm.GetMergeStats(), (0, 1, 1))

  def testNoMerge_SameId(self):
    self.r2.route_short_name = 'different'
    self.fm.a_schedule.AddRouteObject(self.r1)
    self.fm.b_schedule.AddRouteObject(self.r2)
    self.fm.problem_reporter.ExpectProblemClass(merge.SameIdButNotMerged)
    self.fm.MergeSchedules()
    self.assertEquals(len(self.fm.GetMergedSchedule().GetRouteList()), 2)
    self.assertEquals(self.rm.GetMergeStats(), (0, 1, 1))

    # Check that the merged ids are different.
    self.assertNotEquals(self.fm.a_merge_map[self.r1].route_id,
                         self.fm.b_merge_map[self.r2].route_id)

    self.fm.problem_reporter.assertExpectedProblemsReported(self)


class TestTripMerger(unittest.TestCase):

  def setUp(self):
    a_schedule = transitfeed.Schedule()
    b_schedule = transitfeed.Schedule()
    merged_schedule = transitfeed.Schedule()
    self.fm = merge.FeedMerger(a_schedule, b_schedule, merged_schedule,
                               TestingProblemReporter())
    self.fm.AddDefaultMergers()
    self.tm = self.fm.GetMerger(merge.TripMerger)

    akwargs = {'id': 'a1',
               'agency_name': 'a1',
               'agency_url': 'http://www.a1.com',
               'agency_timezone': 'Europe/Zurich'}
    self.a1 = transitfeed.Agency(**akwargs)

    rkwargs = {'route_id': 'r1',
               'agency_id': 'a1',
               'short_name': 'r1',
               'long_name': 'r1r1',
               'route_type': '0'}
    self.r1 = transitfeed.Route(**rkwargs)

    self.s1 = transitfeed.ServicePeriod('s1')
    self.s1.start_date = '20071201'
    self.s1.end_date = '20071231'
    self.s1.SetWeekdayService()

    self.shape = transitfeed.Shape('shape1')
    self.shape.AddPoint(30.0, 30.0)

    self.t1 = transitfeed.Trip(service_period=self.s1,
                               route=self.r1, trip_id='t1')
    self.t2 = transitfeed.Trip(service_period=self.s1,
                               route=self.r1, trip_id='t2')
    # Must add self.t1 to a schedule before calling self.t1.AddStopTime.
    a_schedule.AddTripObject(self.t1, validate=False)
    a_schedule.AddTripObject(self.t2, validate=False)
    self.t1.block_id = 'b1'
    self.t2.block_id = 'b1'
    self.t1.shape_id = 'shape1'

    self.stop = transitfeed.Stop(30.0, 30.0, stop_id='stop1')
    self.t1.AddStopTime(self.stop, arrival_secs=0, departure_secs=0)

    a_schedule.AddAgencyObject(self.a1)
    a_schedule.AddStopObject(self.stop)
    a_schedule.AddRouteObject(self.r1)
    a_schedule.AddServicePeriodObject(self.s1)
    a_schedule.AddShapeObject(self.shape)

  def testMigrate(self):
    self.fm.problem_reporter.ExpectProblemClass(merge.MergeNotImplemented)
    self.fm.MergeSchedules()
    self.fm.problem_reporter.assertExpectedProblemsReported(self)

    r = self.fm.a_merge_map[self.r1]
    s = self.fm.a_merge_map[self.s1]
    shape = self.fm.a_merge_map[self.shape]
    t1 = self.fm.a_merge_map[self.t1]
    t2 = self.fm.a_merge_map[self.t2]

    self.assertEquals(t1.route_id, r.route_id)
    self.assertEquals(t1.service_id, s.service_id)
    self.assertEquals(t1.shape_id, shape.shape_id)
    self.assertEquals(t1.block_id, t2.block_id)

    self.assertEquals(len(t1.GetStopTimes()), 1)
    st = t1.GetStopTimes()[0]
    self.assertEquals(st.stop, self.fm.a_merge_map[self.stop])

  def testReportsNotImplementedProblem(self):
    self.fm.problem_reporter.ExpectProblemClass(merge.MergeNotImplemented)
    self.fm.MergeSchedules()
    self.fm.problem_reporter.assertExpectedProblemsReported(self)

  def testMergeStats(self):
    self.assert_(self.tm.GetMergeStats() is None)

  def testConflictingTripid(self):
    a1_in_b = transitfeed.Agency(field_dict=self.a1)
    r1_in_b = transitfeed.Route(field_dict=self.r1)
    t1_in_b = transitfeed.Trip(field_dict=self.t1)
    shape_in_b = transitfeed.Shape('shape1')
    shape_in_b.AddPoint(30.0, 30.0)
    s_in_b = transitfeed.ServicePeriod('s1')
    s_in_b.start_date = '20080101'
    s_in_b.end_date = '20080131'
    s_in_b.SetWeekdayService()

    self.fm.b_schedule.AddAgencyObject(a1_in_b)
    self.fm.b_schedule.AddRouteObject(r1_in_b)
    self.fm.b_schedule.AddShapeObject(shape_in_b)
    self.fm.b_schedule.AddTripObject(t1_in_b, validate=False)
    self.fm.b_schedule.AddServicePeriodObject(s_in_b, validate=False)
    self.fm.problem_reporter.ExpectProblemClass(merge.MergeNotImplemented)
    self.fm.MergeSchedules()

    # Three trips are moved to merged_schedule: t1 and t2 from a_schedule,
    # and t1 from b_schedule.
    self.assertEquals(len(self.fm.merged_schedule.GetTripList()), 3)
class TestFareMerger(unittest.TestCase): | |
def setUp(self): | |
a_schedule = transitfeed.Schedule() | |
b_schedule = transitfeed.Schedule() | |
merged_schedule = transitfeed.Schedule() | |
self.fm = merge.FeedMerger(a_schedule, b_schedule, merged_schedule, | |
TestingProblemReporter()) | |
self.faremerger = merge.FareMerger(self.fm) | |
self.fm.AddMerger(self.faremerger) | |
self.f1 = transitfeed.Fare('f1', '10', 'ZAR', '1', '0') | |
self.f2 = transitfeed.Fare('f2', '10', 'ZAR', '1', '0') | |
def testMerge(self): | |
self.f2.fare_id = self.f1.fare_id | |
self.fm.a_schedule.AddFareObject(self.f1) | |
self.fm.b_schedule.AddFareObject(self.f2) | |
self.fm.MergeSchedules() | |
self.assertEquals(len(self.fm.merged_schedule.GetFareList()), 1) | |
self.assertEquals(self.faremerger.GetMergeStats(), (1, 0, 0)) | |
# check that the id is preserved | |
self.assertEquals(self.fm.a_merge_map[self.f1].fare_id, self.f1.fare_id) | |
def testNoMerge_DifferentPrice(self): | |
self.f2.fare_id = self.f1.fare_id | |
self.f2.price = 11.0 | |
self.fm.a_schedule.AddFareObject(self.f1) | |
self.fm.b_schedule.AddFareObject(self.f2) | |
self.fm.problem_reporter.ExpectProblemClass(merge.SameIdButNotMerged) | |
self.fm.MergeSchedules() | |
self.assertEquals(len(self.fm.merged_schedule.GetFareList()), 2) | |
self.assertEquals(self.faremerger.GetMergeStats(), (0, 1, 1)) | |
# check that the merged ids are different | |
self.assertNotEquals(self.fm.a_merge_map[self.f1].fare_id, | |
self.fm.b_merge_map[self.f2].fare_id) | |
self.fm.problem_reporter.assertExpectedProblemsReported(self) | |
def testNoMerge_DifferentId(self): | |
self.fm.a_schedule.AddFareObject(self.f1) | |
self.fm.b_schedule.AddFareObject(self.f2) | |
self.fm.MergeSchedules() | |
self.assertEquals(len(self.fm.merged_schedule.GetFareList()), 2) | |
self.assertEquals(self.faremerger.GetMergeStats(), (0, 1, 1)) | |
# check that the ids are preserved | |
self.assertEquals(self.fm.a_merge_map[self.f1].fare_id, self.f1.fare_id) | |
self.assertEquals(self.fm.b_merge_map[self.f2].fare_id, self.f2.fare_id) | |
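The `GetMergeStats()` tuples asserted above follow a (merged, not-merged-from-a, not-merged-from-b) convention. A minimal sketch of that bookkeeping, assuming plain dicts keyed by `fare_id` rather than the real `merge.FareMerger`; `merge_by_id` is a hypothetical name:

```python
def merge_by_id(a_fares, b_fares):
    """Count merge outcomes for two dicts of fares keyed by fare_id.

    Two fares merge only when they share an id and all attributes agree,
    mirroring what the tests above check.
    """
    merged = in_a_only = in_b_only = 0
    for fare_id, a in a_fares.items():
        b = b_fares.get(fare_id)
        if b is not None and a == b:
            merged += 1      # same id, same attributes: one merged entity
        else:
            in_a_only += 1   # unmatched or conflicting: kept separately
    for fare_id, b in b_fares.items():
        a = a_fares.get(fare_id)
        if a is None or a != b:
            in_b_only += 1
    return (merged, in_a_only, in_b_only)
```

This reproduces the three cases above: identical fares with the same id give (1, 0, 0), same id but different price gives (0, 1, 1), and distinct ids give (0, 1, 1).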
class TestShapeMerger(unittest.TestCase): | |
def setUp(self): | |
a_schedule = transitfeed.Schedule() | |
b_schedule = transitfeed.Schedule() | |
merged_schedule = transitfeed.Schedule() | |
self.fm = merge.FeedMerger(a_schedule, b_schedule, merged_schedule, | |
TestingProblemReporter()) | |
self.sm = merge.ShapeMerger(self.fm) | |
self.fm.AddMerger(self.sm) | |
# setup some shapes | |
# s1 and s2 have the same endpoints but take different paths | |
# s3 has different endpoints to s1 and s2 | |
self.s1 = transitfeed.Shape('s1') | |
self.s1.AddPoint(30.0, 30.0) | |
self.s1.AddPoint(40.0, 30.0) | |
self.s1.AddPoint(50.0, 50.0) | |
self.s2 = transitfeed.Shape('s2') | |
self.s2.AddPoint(30.0, 30.0) | |
self.s2.AddPoint(40.0, 35.0) | |
self.s2.AddPoint(50.0, 50.0) | |
self.s3 = transitfeed.Shape('s3') | |
self.s3.AddPoint(31.0, 31.0) | |
self.s3.AddPoint(45.0, 35.0) | |
self.s3.AddPoint(51.0, 51.0) | |
def testMerge(self): | |
self.s2.shape_id = self.s1.shape_id | |
self.fm.a_schedule.AddShapeObject(self.s1) | |
self.fm.b_schedule.AddShapeObject(self.s2) | |
self.fm.MergeSchedules() | |
self.assertEquals(len(self.fm.merged_schedule.GetShapeList()), 1) | |
self.assertEquals(self.fm.merged_schedule.GetShapeList()[0], self.s2) | |
self.assertEquals(self.sm.GetMergeStats(), (1, 0, 0)) | |
# check that the id is preserved | |
self.assertEquals(self.fm.a_merge_map[self.s1].shape_id, self.s1.shape_id) | |
def testNoMerge_DifferentId(self): | |
self.fm.a_schedule.AddShapeObject(self.s1) | |
self.fm.b_schedule.AddShapeObject(self.s2) | |
self.fm.MergeSchedules() | |
self.assertEquals(len(self.fm.merged_schedule.GetShapeList()), 2) | |
self.assertEquals(self.s1, self.fm.a_merge_map[self.s1]) | |
self.assertEquals(self.s2, self.fm.b_merge_map[self.s2]) | |
self.assertEquals(self.sm.GetMergeStats(), (0, 1, 1)) | |
# check that the ids are preserved | |
self.assertEquals(self.fm.a_merge_map[self.s1].shape_id, self.s1.shape_id) | |
self.assertEquals(self.fm.b_merge_map[self.s2].shape_id, self.s2.shape_id) | |
def testNoMerge_FarEndpoints(self): | |
self.s3.shape_id = self.s1.shape_id | |
self.fm.a_schedule.AddShapeObject(self.s1) | |
self.fm.b_schedule.AddShapeObject(self.s3) | |
self.fm.problem_reporter.ExpectProblemClass(merge.SameIdButNotMerged) | |
self.fm.MergeSchedules() | |
self.assertEquals(len(self.fm.merged_schedule.GetShapeList()), 2) | |
self.assertEquals(self.s1, self.fm.a_merge_map[self.s1]) | |
self.assertEquals(self.s3, self.fm.b_merge_map[self.s3]) | |
self.assertEquals(self.sm.GetMergeStats(), (0, 1, 1)) | |
# check that the ids are different | |
self.assertNotEquals(self.fm.a_merge_map[self.s1].shape_id, | |
self.fm.b_merge_map[self.s3].shape_id) | |
self.fm.problem_reporter.assertExpectedProblemsReported(self) | |
def _AddShapesApart(self): | |
"""Adds two shapes to the schedules. | |
The maximum of the distances between the endpoints is returned. | |
Returns: | |
The distance in metres, a value greater than zero. | |
""" | |
self.s3.shape_id = self.s1.shape_id | |
self.fm.a_schedule.AddShapeObject(self.s1) | |
self.fm.b_schedule.AddShapeObject(self.s3) | |
distance1 = merge.ApproximateDistanceBetweenPoints( | |
self.s1.points[0][:2], self.s3.points[0][:2]) | |
distance2 = merge.ApproximateDistanceBetweenPoints( | |
self.s1.points[-1][:2], self.s3.points[-1][:2]) | |
return max(distance1, distance2) | |
def testSetLargestShapeDistanceSmall(self): | |
largest_shape_distance = self._AddShapesApart() * 0.5 | |
self.sm.SetLargestShapeDistance(largest_shape_distance) | |
self.assertEquals(self.sm.largest_shape_distance, largest_shape_distance) | |
self.fm.problem_reporter.ExpectProblemClass(merge.SameIdButNotMerged) | |
self.fm.MergeSchedules() | |
self.assertEquals(len(self.fm.GetMergedSchedule().GetShapeList()), 2) | |
self.fm.problem_reporter.assertExpectedProblemsReported(self) | |
def testSetLargestShapeDistanceLarge(self): | |
largest_shape_distance = self._AddShapesApart() * 2.0 | |
self.sm.SetLargestShapeDistance(largest_shape_distance) | |
self.assertEquals(self.sm.largest_shape_distance, largest_shape_distance) | |
self.fm.MergeSchedules() | |
self.assertEquals(len(self.fm.GetMergedSchedule().GetShapeList()), 1) | |
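The two `SetLargestShapeDistance` tests exercise an endpoint-distance heuristic: same-id shapes merge only when both endpoint pairs lie within the threshold. A self-contained sketch of that decision, where `approx_distance_m` is a hypothetical equirectangular stand-in for `merge.ApproximateDistanceBetweenPoints`:

```python
import math

EARTH_RADIUS_M = 6371000.0  # assumed mean Earth radius

def approx_distance_m(p1, p2):
    # Equirectangular approximation between (lat, lng) pairs in degrees;
    # adequate for the short distances these tests use.
    lat1, lng1 = map(math.radians, p1)
    lat2, lng2 = map(math.radians, p2)
    x = (lng2 - lng1) * math.cos((lat1 + lat2) / 2.0)
    y = lat2 - lat1
    return EARTH_RADIUS_M * math.hypot(x, y)

def endpoints_mergeable(points_a, points_b, largest_shape_distance):
    # Merge only if both the start points and the end points are within
    # largest_shape_distance metres of each other.
    d_start = approx_distance_m(points_a[0], points_b[0])
    d_end = approx_distance_m(points_a[-1], points_b[-1])
    return max(d_start, d_end) <= largest_shape_distance
```

With the threshold set below the larger endpoint gap the shapes stay separate (two shapes in the merged schedule); above it, they merge into one.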
class TestFareRuleMerger(unittest.TestCase): | |
def setUp(self): | |
a_schedule = transitfeed.Schedule() | |
b_schedule = transitfeed.Schedule() | |
merged_schedule = transitfeed.Schedule() | |
self.fm = merge.FeedMerger(a_schedule, b_schedule, merged_schedule, | |
TestingProblemReporter()) | |
self.fm.AddDefaultMergers() | |
self.fare_rule_merger = self.fm.GetMerger(merge.FareRuleMerger) | |
akwargs = {'id': 'a1', | |
'agency_name': 'a1', | |
'agency_url': 'http://www.a1.com', | |
'agency_timezone': 'Europe/Zurich'} | |
self.a1 = transitfeed.Agency(**akwargs) | |
self.a2 = transitfeed.Agency(**akwargs) | |
rkwargs = {'route_id': 'r1', | |
'agency_id': 'a1', | |
'short_name': 'r1', | |
'long_name': 'r1r1', | |
'route_type': '0'} | |
self.r1 = transitfeed.Route(**rkwargs) | |
self.r2 = transitfeed.Route(**rkwargs) | |
self.f1 = transitfeed.Fare('f1', '10', 'ZAR', '1', '0') | |
self.f2 = transitfeed.Fare('f1', '10', 'ZAR', '1', '0') | |
self.f3 = transitfeed.Fare('f3', '11', 'USD', '1', '0') | |
self.fr1 = transitfeed.FareRule('f1', 'r1') | |
self.fr2 = transitfeed.FareRule('f1', 'r1') | |
self.fr3 = transitfeed.FareRule('f3', 'r1') | |
self.fm.a_schedule.AddAgencyObject(self.a1) | |
self.fm.a_schedule.AddRouteObject(self.r1) | |
self.fm.a_schedule.AddFareObject(self.f1) | |
self.fm.a_schedule.AddFareObject(self.f3) | |
self.fm.a_schedule.AddFareRuleObject(self.fr1) | |
self.fm.a_schedule.AddFareRuleObject(self.fr3) | |
self.fm.b_schedule.AddAgencyObject(self.a2) | |
self.fm.b_schedule.AddRouteObject(self.r2) | |
self.fm.b_schedule.AddFareObject(self.f2) | |
self.fm.b_schedule.AddFareRuleObject(self.fr2) | |
def testMerge(self): | |
self.fm.problem_reporter.ExpectProblemClass(merge.FareRulesBroken) | |
self.fm.problem_reporter.ExpectProblemClass(merge.MergeNotImplemented) | |
self.fm.MergeSchedules() | |
self.assertEquals(len(self.fm.merged_schedule.GetFareList()), 2) | |
fare_1 = self.fm.a_merge_map[self.f1] | |
fare_2 = self.fm.a_merge_map[self.f3] | |
self.assertEquals(len(fare_1.GetFareRuleList()), 1) | |
fare_rule_1 = fare_1.GetFareRuleList()[0] | |
self.assertEquals(len(fare_2.GetFareRuleList()), 1) | |
fare_rule_2 = fare_2.GetFareRuleList()[0] | |
self.assertEquals(fare_rule_1.fare_id, | |
self.fm.a_merge_map[self.f1].fare_id) | |
self.assertEquals(fare_rule_1.route_id, | |
self.fm.a_merge_map[self.r1].route_id) | |
self.assertEqual(fare_rule_2.fare_id, | |
self.fm.a_merge_map[self.f3].fare_id) | |
self.assertEqual(fare_rule_2.route_id, | |
self.fm.a_merge_map[self.r1].route_id) | |
self.fm.problem_reporter.assertExpectedProblemsReported(self) | |
def testMergeStats(self): | |
self.assert_(self.fare_rule_merger.GetMergeStats() is None) | |
class TestExceptionProblemReporter(unittest.TestCase): | |
def setUp(self): | |
self.dataset_merger = merge.TripMerger(None) | |
def testRaisesErrors(self): | |
problem_reporter = merge.ExceptionProblemReporter() | |
self.assertRaises(merge.CalendarsNotDisjoint, | |
problem_reporter.CalendarsNotDisjoint, | |
self.dataset_merger) | |
def testNoRaiseWarnings(self): | |
problem_reporter = merge.ExceptionProblemReporter() | |
problem_reporter.MergeNotImplemented(self.dataset_merger) | |
def testRaiseWarnings(self): | |
problem_reporter = merge.ExceptionProblemReporter(True) | |
self.assertRaises(merge.MergeNotImplemented, | |
problem_reporter.MergeNotImplemented, | |
self.dataset_merger) | |
class TestHTMLProblemReporter(unittest.TestCase): | |
def setUp(self): | |
self.problem_reporter = merge.HTMLProblemReporter() | |
a_schedule = transitfeed.Schedule() | |
b_schedule = transitfeed.Schedule() | |
merged_schedule = transitfeed.Schedule() | |
self.feed_merger = merge.FeedMerger(a_schedule, b_schedule, | |
merged_schedule, | |
self.problem_reporter) | |
self.dataset_merger = merge.TripMerger(None) | |
def testGeneratesSomeHTML(self): | |
self.problem_reporter.CalendarsNotDisjoint(self.dataset_merger) | |
self.problem_reporter.MergeNotImplemented(self.dataset_merger) | |
self.problem_reporter.FareRulesBroken(self.dataset_merger) | |
self.problem_reporter.SameIdButNotMerged(self.dataset_merger, | |
'test', 'unknown reason') | |
output_file = StringIO.StringIO() | |
old_feed_path = '/path/to/old/feed' | |
new_feed_path = '/path/to/new/feed' | |
merged_feed_path = '/path/to/merged/feed' | |
self.problem_reporter.WriteOutput(output_file, self.feed_merger, | |
old_feed_path, new_feed_path, | |
merged_feed_path) | |
html = output_file.getvalue() | |
self.assert_(html.startswith('<html>')) | |
self.assert_(html.endswith('</html>')) | |
class MergeInSubprocessTestCase(util.TempDirTestCaseBase): | |
def CopyAndModifyTestData(self, zip_path, modify_file, old, new): | |
"""Return path of zip_path copy with old replaced by new in modify_file.""" | |
zipfile_mem = StringIO.StringIO(open(zip_path, 'rb').read()) | |
new_zip_path = os.path.join(self.tempdirpath, "modified.zip") | |
zip = zipfile.ZipFile(zipfile_mem, 'a') | |
modified_contents = zip.read(modify_file).replace(old, new) | |
zip.writestr(modify_file, modified_contents) | |
zip.close() | |
open(new_zip_path, 'wb').write(zipfile_mem.getvalue()) | |
return new_zip_path | |
def testCrashHandler(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('merge.py'), '--no_browser', | |
'IWantMyCrash', 'file2', 'fileout.zip'], | |
expected_retcode=127) | |
self.assertMatchesRegex(r'Yikes', out) | |
crashout = open('transitfeedcrash.txt').read() | |
self.assertMatchesRegex(r'For testing the merge crash handler', crashout) | |
def testMergeBadCommandLine(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('merge.py'), '--no_browser'], | |
expected_retcode=2) | |
self.assertFalse(out) | |
self.assertMatchesRegex(r'command line arguments', err) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testMergeWithWarnings(self): | |
# Make a copy of good_feed.zip which is not active until 20110101. This
# avoids adding another test/data file. good_feed.zip itself must remain
# error-free, so its start date can't be moved into the future.
future_good_feed = self.CopyAndModifyTestData( | |
self.GetPath('test/data/good_feed.zip'), 'calendar.txt', | |
'20070101', '20110101') | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('merge.py'), '--no_browser', | |
self.GetPath('test/data/unused_stop'), | |
future_good_feed, | |
os.path.join(self.tempdirpath, 'merged-warnings.zip')], | |
expected_retcode=0) | |
def testMergeWithErrors(self): | |
# Make a copy of good_feed.zip which is not active until 20110101. This
# avoids adding another test/data file. good_feed.zip itself must remain
# error-free, so its start date can't be moved into the future.
future_good_feed = self.CopyAndModifyTestData( | |
self.GetPath('test/data/good_feed.zip'), 'calendar.txt', | |
'20070101', '20110101') | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('merge.py'), '--no_browser', | |
self.GetPath('test/data/unused_stop'), | |
future_good_feed], | |
expected_retcode=2) | |
if __name__ == '__main__': | |
unittest.main() | |
#!/usr/bin/python2.4 | |
# | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""Tests for transitfeed.shapelib.py""" | |
__author__ = 'chris.harrelson.code@gmail.com (Chris Harrelson)' | |
import math | |
from transitfeed import shapelib | |
from transitfeed.shapelib import Point | |
from transitfeed.shapelib import Poly | |
from transitfeed.shapelib import PolyCollection | |
from transitfeed.shapelib import PolyGraph | |
import unittest | |
def formatPoint(p, precision=12): | |
formatString = "(%%.%df, %%.%df, %%.%df)" % (precision, precision, precision) | |
return formatString % (p.x, p.y, p.z) | |
def formatPoints(points): | |
return "[%s]" % ", ".join([formatPoint(p, precision=4) for p in points]) | |
class ShapeLibTestBase(unittest.TestCase): | |
def assertApproxEq(self, a, b): | |
self.assertAlmostEqual(a, b, 8) | |
def assertPointApproxEq(self, a, b): | |
try: | |
self.assertApproxEq(a.x, b.x) | |
self.assertApproxEq(a.y, b.y) | |
self.assertApproxEq(a.z, b.z) | |
except AssertionError: | |
print 'ERROR: %s != %s' % (formatPoint(a), formatPoint(b)) | |
raise | |
def assertPointsApproxEq(self, points1, points2): | |
try: | |
self.assertEqual(len(points1), len(points2)) | |
except AssertionError: | |
print "ERROR: %s != %s" % (formatPoints(points1), formatPoints(points2)) | |
raise | |
for i in xrange(len(points1)): | |
try: | |
self.assertPointApproxEq(points1[i], points2[i]) | |
except AssertionError: | |
print ('ERROR: points not equal in position %d\n%s != %s' | |
% (i, formatPoints(points1), formatPoints(points2))) | |
raise | |
class TestPoints(ShapeLibTestBase): | |
def testPoints(self): | |
p = Point(1, 1, 1) | |
self.assertApproxEq(p.DotProd(p), 3) | |
self.assertApproxEq(p.Norm2(), math.sqrt(3)) | |
self.assertPointApproxEq(Point(1.5, 1.5, 1.5), | |
p.Times(1.5)) | |
norm = 1.7320508075688772 | |
self.assertPointApproxEq(p.Normalize(), | |
Point(1 / norm, | |
1 / norm, | |
1 / norm)) | |
p2 = Point(1, 0, 0) | |
self.assertPointApproxEq(p2, p2.Normalize()) | |
def testCrossProd(self): | |
p1 = Point(1, 0, 0).Normalize() | |
p2 = Point(0, 1, 0).Normalize()
p1_cross_p2 = p1.CrossProd(p2) | |
self.assertApproxEq(p1_cross_p2.x, 0) | |
self.assertApproxEq(p1_cross_p2.y, 0) | |
self.assertApproxEq(p1_cross_p2.z, 1) | |
def testRobustCrossProd(self): | |
p1 = Point(1, 0, 0) | |
p2 = Point(1, 0, 0) | |
self.assertPointApproxEq(Point(0, 0, 0), | |
p1.CrossProd(p2)) | |
# only needs to be an arbitrary vector perpendicular to (1, 0, 0) | |
self.assertPointApproxEq( | |
Point(0.000000000000000, -0.998598452020993, 0.052925717957113), | |
p1.RobustCrossProd(p2)) | |
def testS2LatLong(self): | |
point = Point.FromLatLng(30, 40) | |
self.assertPointApproxEq(Point(0.663413948169, | |
0.556670399226, | |
0.5), point) | |
(lat, lng) = point.ToLatLng() | |
self.assertApproxEq(30, lat) | |
self.assertApproxEq(40, lng) | |
def testOrtho(self): | |
point = Point(1, 1, 1) | |
ortho = point.Ortho() | |
self.assertApproxEq(ortho.DotProd(point), 0) | |
def testAngle(self): | |
point1 = Point(1, 1, 0).Normalize() | |
point2 = Point(0, 1, 0) | |
self.assertApproxEq(45, point1.Angle(point2) * 360 / (2 * math.pi)) | |
self.assertApproxEq(point1.Angle(point2), point2.Angle(point1)) | |
def testGetDistanceMeters(self): | |
point1 = Point.FromLatLng(40.536895,-74.203033) | |
point2 = Point.FromLatLng(40.575239,-74.112825) | |
self.assertApproxEq(8732.623770873237, | |
point1.GetDistanceMeters(point2)) | |
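`testS2LatLong` and `testGetDistanceMeters` pin down the geometry `Point` uses: geographic coordinates map to unit vectors, and distance is the angle between those vectors scaled by an Earth radius. A self-contained sketch of that math, assuming a mean radius of 6371000 m (the exact constant shapelib uses may differ slightly); the function names are illustrative, not the shapelib API:

```python
import math

EARTH_RADIUS_M = 6371000.0  # assumed mean Earth radius

def lat_lng_to_unit_vector(lat_deg, lng_deg):
    # Convert (lat, lng) in degrees to a point on the unit sphere, using
    # the convention testS2LatLong checks: (30, 40) -> (0.6634, 0.5567, 0.5).
    lat = math.radians(lat_deg)
    lng = math.radians(lng_deg)
    return (math.cos(lat) * math.cos(lng),
            math.cos(lat) * math.sin(lng),
            math.sin(lat))

def angle_between(u, v):
    # Angle in radians between two unit vectors, clamped for safety.
    dot = sum(a * b for a, b in zip(u, v))
    return math.acos(max(-1.0, min(1.0, dot)))

def distance_meters(lat1, lng1, lat2, lng2):
    u = lat_lng_to_unit_vector(lat1, lng1)
    v = lat_lng_to_unit_vector(lat2, lng2)
    return EARTH_RADIUS_M * angle_between(u, v)
```

Under these assumptions the two Staten Island points above come out close to the 8732.6 m that `testGetDistanceMeters` expects.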
class TestClosestPoint(ShapeLibTestBase): | |
def testGetClosestPoint(self): | |
x = Point(1, 1, 0).Normalize() | |
a = Point(1, 0, 0) | |
b = Point(0, 1, 0) | |
closest = shapelib.GetClosestPoint(x, a, b) | |
self.assertApproxEq(0.707106781187, closest.x) | |
self.assertApproxEq(0.707106781187, closest.y) | |
self.assertApproxEq(0.0, closest.z) | |
class TestPoly(ShapeLibTestBase): | |
def testGetClosestPointShape(self): | |
poly = Poly() | |
poly.AddPoint(Point(1, 1, 0).Normalize()) | |
self.assertPointApproxEq(Point( | |
0.707106781187, 0.707106781187, 0), poly.GetPoint(0)) | |
point = Point(0, 1, 1).Normalize() | |
self.assertPointApproxEq(Point(1, 1, 0).Normalize(), | |
poly.GetClosestPoint(point)[0]) | |
poly.AddPoint(Point(0, 1, 1).Normalize()) | |
self.assertPointApproxEq( | |
Point(0, 1, 1).Normalize(), | |
poly.GetClosestPoint(point)[0]) | |
def testCutAtClosestPoint(self): | |
poly = Poly() | |
poly.AddPoint(Point(0, 1, 0).Normalize()) | |
poly.AddPoint(Point(0, 0.5, 0.5).Normalize()) | |
poly.AddPoint(Point(0, 0, 1).Normalize()) | |
(before, after) = \ | |
poly.CutAtClosestPoint(Point(0, 0.3, 0.7).Normalize()) | |
self.assert_(2 == before.GetNumPoints())
self.assert_(2 == after.GetNumPoints())
self.assertPointApproxEq( | |
Point(0, 0.707106781187, 0.707106781187), before.GetPoint(1)) | |
self.assertPointApproxEq( | |
Point(0, 0.393919298579, 0.919145030018), after.GetPoint(0)) | |
poly = Poly() | |
poly.AddPoint(Point.FromLatLng(40.527035999999995, -74.191265999999999)) | |
poly.AddPoint(Point.FromLatLng(40.526859999999999, -74.191140000000004)) | |
poly.AddPoint(Point.FromLatLng(40.524681000000001, -74.189579999999992)) | |
poly.AddPoint(Point.FromLatLng(40.523128999999997, -74.188467000000003)) | |
poly.AddPoint(Point.FromLatLng(40.523054999999999, -74.188676000000001)) | |
pattern = Poly() | |
pattern.AddPoint(Point.FromLatLng(40.52713, | |
-74.191146000000003)) | |
self.assertApproxEq(14.564268281551, pattern.GreedyPolyMatchDist(poly)) | |
def testMergePolys(self): | |
poly1 = Poly(name="Foo") | |
poly1.AddPoint(Point(0, 1, 0).Normalize()) | |
poly1.AddPoint(Point(0, 0.5, 0.5).Normalize()) | |
poly1.AddPoint(Point(0, 0, 1).Normalize()) | |
poly1.AddPoint(Point(1, 1, 1).Normalize()) | |
poly2 = Poly() | |
poly3 = Poly(name="Bar") | |
poly3.AddPoint(Point(1, 1, 1).Normalize()) | |
poly3.AddPoint(Point(2, 0.5, 0.5).Normalize()) | |
merged1 = Poly.MergePolys([poly1, poly2]) | |
self.assertPointsApproxEq(poly1.GetPoints(), merged1.GetPoints()) | |
self.assertEqual("Foo;", merged1.GetName()) | |
merged2 = Poly.MergePolys([poly2, poly3]) | |
self.assertPointsApproxEq(poly3.GetPoints(), merged2.GetPoints()) | |
self.assertEqual(";Bar", merged2.GetName()) | |
merged3 = Poly.MergePolys([poly1, poly2, poly3], merge_point_threshold=0) | |
mergedPoints = poly1.GetPoints()[:] | |
mergedPoints.append(poly3.GetPoint(-1)) | |
self.assertPointsApproxEq(mergedPoints, merged3.GetPoints()) | |
self.assertEqual("Foo;;Bar", merged3.GetName()) | |
merged4 = Poly.MergePolys([poly2]) | |
self.assertEqual("", merged4.GetName()) | |
self.assertEqual(0, merged4.GetNumPoints()) | |
# test merging two nearby points | |
newPoint = poly1.GetPoint(-1).Plus(Point(0.000001, 0, 0)).Normalize() | |
poly1.AddPoint(newPoint) | |
distance = poly1.GetPoint(-1).GetDistanceMeters(poly3.GetPoint(0)) | |
self.assertTrue(distance <= 10) | |
self.assertTrue(distance > 5) | |
merged5 = Poly.MergePolys([poly1, poly2, poly3], merge_point_threshold=10) | |
mergedPoints = poly1.GetPoints()[:] | |
mergedPoints.append(poly3.GetPoint(-1)) | |
self.assertPointsApproxEq(mergedPoints, merged5.GetPoints()) | |
self.assertEqual("Foo;;Bar", merged5.GetName()) | |
merged6 = Poly.MergePolys([poly1, poly2, poly3], merge_point_threshold=5) | |
mergedPoints = poly1.GetPoints()[:] | |
mergedPoints += poly3.GetPoints() | |
self.assertPointsApproxEq(mergedPoints, merged6.GetPoints()) | |
self.assertEqual("Foo;;Bar", merged6.GetName()) | |
def testReversed(self): | |
p1 = Point(1, 0, 0).Normalize() | |
p2 = Point(0, 0.5, 0.5).Normalize() | |
p3 = Point(0.3, 0.8, 0.5).Normalize() | |
poly1 = Poly([p1, p2, p3]) | |
self.assertPointsApproxEq([p3, p2, p1], poly1.Reversed().GetPoints()) | |
def testLengthMeters(self): | |
p1 = Point(1, 0, 0).Normalize() | |
p2 = Point(0, 0.5, 0.5).Normalize() | |
p3 = Point(0.3, 0.8, 0.5).Normalize() | |
poly0 = Poly([p1]) | |
poly1 = Poly([p1, p2]) | |
poly2 = Poly([p1, p2, p3]) | |
try: | |
poly0.LengthMeters() | |
self.fail("Should have thrown AssertionError") | |
except AssertionError: | |
pass | |
p1_p2 = p1.GetDistanceMeters(p2) | |
p2_p3 = p2.GetDistanceMeters(p3) | |
self.assertEqual(p1_p2, poly1.LengthMeters()) | |
self.assertEqual(p1_p2 + p2_p3, poly2.LengthMeters()) | |
self.assertEqual(p1_p2 + p2_p3, poly2.Reversed().LengthMeters()) | |
class TestCollection(ShapeLibTestBase): | |
def testPolyMatch(self): | |
poly = Poly() | |
poly.AddPoint(Point(0, 1, 0).Normalize()) | |
poly.AddPoint(Point(0, 0.5, 0.5).Normalize()) | |
poly.AddPoint(Point(0, 0, 1).Normalize()) | |
collection = PolyCollection() | |
collection.AddPoly(poly) | |
match = collection.FindMatchingPolys(Point(0, 1, 0), | |
Point(0, 0, 1)) | |
self.assert_(len(match) == 1 and match[0] == poly) | |
match = collection.FindMatchingPolys(Point(0, 1, 0), | |
Point(0, 1, 0)) | |
self.assert_(len(match) == 0) | |
poly = Poly() | |
poly.AddPoint(Point.FromLatLng(45.585212,-122.586136)) | |
poly.AddPoint(Point.FromLatLng(45.586654,-122.587595)) | |
collection = PolyCollection() | |
collection.AddPoly(poly) | |
match = collection.FindMatchingPolys( | |
Point.FromLatLng(45.585212,-122.586136), | |
Point.FromLatLng(45.586654,-122.587595)) | |
self.assert_(len(match) == 1 and match[0] == poly) | |
match = collection.FindMatchingPolys( | |
Point.FromLatLng(45.585219,-122.586136), | |
Point.FromLatLng(45.586654,-122.587595)) | |
self.assert_(len(match) == 1 and match[0] == poly) | |
self.assertApproxEq(0.0, poly.GreedyPolyMatchDist(poly)) | |
match = collection.FindMatchingPolys( | |
Point.FromLatLng(45.587212,-122.586136), | |
Point.FromLatLng(45.586654,-122.587595)) | |
self.assert_(len(match) == 0) | |
class TestGraph(ShapeLibTestBase): | |
def testReconstructPath(self): | |
p1 = Point(1, 0, 0).Normalize() | |
p2 = Point(0, 0.5, 0.5).Normalize() | |
p3 = Point(0.3, 0.8, 0.5).Normalize() | |
poly1 = Poly([p1, p2]) | |
poly2 = Poly([p3, p2]) | |
came_from = { | |
p2: (p1, poly1), | |
p3: (p2, poly2) | |
} | |
graph = PolyGraph() | |
reconstructed1 = graph._ReconstructPath(came_from, p1) | |
self.assertEqual(0, reconstructed1.GetNumPoints()) | |
reconstructed2 = graph._ReconstructPath(came_from, p2) | |
self.assertPointsApproxEq([p1, p2], reconstructed2.GetPoints()) | |
reconstructed3 = graph._ReconstructPath(came_from, p3) | |
self.assertPointsApproxEq([p1, p2, p3], reconstructed3.GetPoints()) | |
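`testReconstructPath` fixes the contract of `_ReconstructPath`: walk the `came_from` map (node -> (predecessor, edge)) back from the end node, then reverse, returning an empty path when the end node has no predecessor. A minimal sketch of that walk over plain hashable nodes, independent of `PolyGraph`:

```python
def reconstruct_path(came_from, end):
    """Rebuild the start-to-end node sequence from a came_from map.

    came_from maps each reached node to (predecessor, edge). A node with
    no entry is a start node, which yields an empty path, matching what
    testReconstructPath expects for p1.
    """
    path = []
    node = end
    while node in came_from:
        path.append(node)
        node, _edge = came_from[node]
    if path:
        path.append(node)  # include the start node once something was walked
    path.reverse()
    return path
```

For `came_from = {p2: (p1, poly1), p3: (p2, poly2)}` this yields `[]` from p1, `[p1, p2]` from p2, and `[p1, p2, p3]` from p3, the three cases asserted above.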
def testShortestPath(self): | |
p1 = Point(1, 0, 0).Normalize() | |
p2 = Point(0, 0.5, 0.5).Normalize() | |
p3 = Point(0.3, 0.8, 0.5).Normalize() | |
p4 = Point(0.7, 0.7, 0.5).Normalize() | |
poly1 = Poly([p1, p2, p3], "poly1") | |
poly2 = Poly([p4, p3], "poly2") | |
poly3 = Poly([p4, p1], "poly3") | |
graph = PolyGraph() | |
graph.AddPoly(poly1) | |
graph.AddPoly(poly2) | |
graph.AddPoly(poly3) | |
path = graph.ShortestPath(p1, p4) | |
self.assert_(path is not None) | |
self.assertPointsApproxEq([p1, p4], path.GetPoints()) | |
path = graph.ShortestPath(p1, p3) | |
self.assert_(path is not None) | |
self.assertPointsApproxEq([p1, p4, p3], path.GetPoints()) | |
path = graph.ShortestPath(p3, p1) | |
self.assert_(path is not None) | |
self.assertPointsApproxEq([p3, p4, p1], path.GetPoints()) | |
def testFindShortestMultiPointPath(self): | |
p1 = Point(1, 0, 0).Normalize() | |
p2 = Point(0.5, 0.5, 0).Normalize() | |
p3 = Point(0.5, 0.5, 0.1).Normalize() | |
p4 = Point(0, 1, 0).Normalize() | |
poly1 = Poly([p1, p2, p3], "poly1") | |
poly2 = Poly([p4, p3], "poly2") | |
poly3 = Poly([p4, p1], "poly3") | |
graph = PolyGraph() | |
graph.AddPoly(poly1) | |
graph.AddPoly(poly2) | |
graph.AddPoly(poly3) | |
path = graph.FindShortestMultiPointPath([p1, p3, p4]) | |
self.assert_(path is not None) | |
self.assertPointsApproxEq([p1, p2, p3, p4], path.GetPoints()) | |
if __name__ == '__main__': | |
unittest.main() | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
# Unit tests for the transitfeed module. | |
import datetime | |
from datetime import date | |
import dircache | |
import os.path | |
import re | |
import sys | |
import tempfile | |
import time | |
import transitfeed | |
import unittest | |
import util | |
from util import RecordingProblemReporter | |
from StringIO import StringIO | |
import zipfile | |
import zlib | |
def DataPath(path): | |
here = os.path.dirname(__file__) | |
return os.path.join(here, 'data', path) | |
def GetDataPathContents(): | |
here = os.path.dirname(__file__) | |
return dircache.listdir(os.path.join(here, 'data')) | |
class ExceptionProblemReporterNoExpiration( | |
transitfeed.ExceptionProblemReporter): | |
"""Ignores feed expiration problems. | |
Use TestFailureProblemReporter in new code because it fails more cleanly, is | |
easier to extend and does more thorough checking. | |
""" | |
def __init__(self): | |
transitfeed.ExceptionProblemReporter.__init__(self, raise_warnings=True) | |
def ExpirationDate(self, expiration, context=None): | |
pass # We don't want to give errors about our test data | |
class TestFailureProblemReporter(transitfeed.ProblemReporter): | |
"""Causes a test failure immediately on any problem.""" | |
def __init__(self, test_case, ignore_types=("ExpirationDate",)): | |
transitfeed.ProblemReporter.__init__(self) | |
self.test_case = test_case | |
self._ignore_types = ignore_types or set() | |
def _Report(self, e): | |
# These should never crash | |
formatted_problem = e.FormatProblem() | |
formatted_context = e.FormatContext() | |
exception_class = e.__class__.__name__ | |
if exception_class in self._ignore_types: | |
return | |
self.test_case.fail( | |
"%s: %s\n%s" % (exception_class, formatted_problem, formatted_context)) | |
class UnrecognizedColumnRecorder(RecordingProblemReporter): | |
"""Keeps track of unrecognized column errors.""" | |
def __init__(self, test_case): | |
RecordingProblemReporter.__init__(self, test_case, | |
ignore_types=("ExpirationDate",)) | |
self.column_errors = [] | |
def UnrecognizedColumn(self, file_name, column_name, context=None): | |
self.column_errors.append((file_name, column_name)) | |
class RedirectStdOutTestCaseBase(unittest.TestCase): | |
"""Save stdout to the StringIO buffer self.this_stdout""" | |
def setUp(self): | |
self.saved_stdout = sys.stdout | |
self.this_stdout = StringIO() | |
sys.stdout = self.this_stdout | |
def tearDown(self): | |
sys.stdout = self.saved_stdout | |
self.this_stdout.close() | |
# ensure that there are no exceptions when attempting to load | |
# (so that the validator won't crash) | |
class NoExceptionTestCase(RedirectStdOutTestCaseBase): | |
def runTest(self): | |
for feed in GetDataPathContents(): | |
loader = transitfeed.Loader(DataPath(feed), | |
problems=transitfeed.ProblemReporter(), | |
extra_validation=True) | |
schedule = loader.Load() | |
schedule.Validate() | |
class EndOfLineCheckerTestCase(unittest.TestCase): | |
def setUp(self): | |
self.problems = RecordingProblemReporter(self) | |
def RunEndOfLineChecker(self, end_of_line_checker): | |
# Iterating with a for loop calls end_of_line_checker.next() until
# StopIteration is raised. EndOfLineChecker performs its final check, for
# a mix of CR LF and LF line ends, just before raising StopIteration.
for line in end_of_line_checker: | |
pass | |
def testInvalidLineEnd(self): | |
f = transitfeed.EndOfLineChecker(StringIO("line1\r\r\nline2"), | |
"<StringIO>", | |
self.problems) | |
self.RunEndOfLineChecker(f) | |
e = self.problems.PopException("InvalidLineEnd") | |
self.assertEqual(e.file_name, "<StringIO>") | |
self.assertEqual(e.row_num, 1) | |
self.assertEqual(e.bad_line_end, r"\r\r\n") | |
self.problems.AssertNoMoreExceptions() | |
def testInvalidLineEndToo(self): | |
f = transitfeed.EndOfLineChecker( | |
StringIO("line1\nline2\r\nline3\r\r\r\n"), | |
"<StringIO>", self.problems) | |
self.RunEndOfLineChecker(f) | |
e = self.problems.PopException("InvalidLineEnd") | |
self.assertEqual(e.file_name, "<StringIO>") | |
self.assertEqual(e.row_num, 3) | |
self.assertEqual(e.bad_line_end, r"\r\r\r\n") | |
e = self.problems.PopException("OtherProblem") | |
self.assertEqual(e.file_name, "<StringIO>") | |
self.assertTrue(e.description.find("consistent line end") != -1) | |
self.problems.AssertNoMoreExceptions() | |
def testEmbeddedCr(self): | |
f = transitfeed.EndOfLineChecker( | |
StringIO("line1\rline1b"), | |
"<StringIO>", self.problems) | |
self.RunEndOfLineChecker(f) | |
e = self.problems.PopException("OtherProblem") | |
self.assertEqual(e.file_name, "<StringIO>") | |
self.assertEqual(e.row_num, 1) | |
self.assertEqual(e.FormatProblem(), | |
"Line contains ASCII Carriage Return 0x0D, \\r") | |
self.problems.AssertNoMoreExceptions() | |
def testEmbeddedUtf8NextLine(self): | |
f = transitfeed.EndOfLineChecker( | |
StringIO("line1b\xc2\x85"), | |
"<StringIO>", self.problems) | |
self.RunEndOfLineChecker(f) | |
e = self.problems.PopException("OtherProblem") | |
self.assertEqual(e.file_name, "<StringIO>") | |
self.assertEqual(e.row_num, 1) | |
self.assertEqual(e.FormatProblem(), | |
"Line contains Unicode NEXT LINE SEPARATOR U+0085") | |
self.problems.AssertNoMoreExceptions() | |
def testEndOfLineMix(self): | |
f = transitfeed.EndOfLineChecker( | |
StringIO("line1\nline2\r\nline3\nline4"), | |
"<StringIO>", self.problems) | |
self.RunEndOfLineChecker(f) | |
e = self.problems.PopException("OtherProblem") | |
self.assertEqual(e.file_name, "<StringIO>") | |
self.assertEqual(e.FormatProblem(), | |
"Found 1 CR LF \"\\r\\n\" line end (line 2) and " | |
"2 LF \"\\n\" line ends (lines 1, 3). A file must use a " | |
"consistent line end.") | |
self.problems.AssertNoMoreExceptions() | |
def testEndOfLineManyMix(self): | |
f = transitfeed.EndOfLineChecker( | |
StringIO("1\n2\n3\n4\n5\n6\n7\r\n8\r\n9\r\n10\r\n11\r\n"), | |
"<StringIO>", self.problems) | |
self.RunEndOfLineChecker(f) | |
e = self.problems.PopException("OtherProblem") | |
self.assertEqual(e.file_name, "<StringIO>") | |
self.assertEqual(e.FormatProblem(), | |
"Found 5 CR LF \"\\r\\n\" line ends (lines 7, 8, 9, 10, " | |
"11) and 6 LF \"\\n\" line ends (lines 1, 2, 3, 4, 5, " | |
"...). A file must use a consistent line end.") | |
self.problems.AssertNoMoreExceptions() | |
def testLoad(self): | |
loader = transitfeed.Loader( | |
DataPath("bad_eol.zip"), problems=self.problems, extra_validation=True) | |
loader.Load() | |
e = self.problems.PopException("InvalidLineEnd") | |
self.assertEqual(e.file_name, "routes.txt") | |
self.assertEqual(e.row_num, 5) | |
self.assertTrue(e.FormatProblem().find(r"\r\r\n") != -1) | |
e = self.problems.PopException("OtherProblem") | |
self.assertEqual(e.file_name, "calendar.txt") | |
self.assertTrue(re.search( | |
r"Found 1 CR LF.* \(line 2\) and 2 LF .*\(lines 1, 3\)", | |
e.FormatProblem())) | |
e = self.problems.PopException("OtherProblem") | |
self.assertEqual(e.file_name, "trips.txt") | |
self.assertEqual(e.row_num, 1) | |
self.assertTrue(re.search( | |
r"contains ASCII Form Feed", | |
e.FormatProblem())) | |
# TODO(Tom): avoid this duplicate error for the same issue | |
e = self.problems.PopException("CsvSyntax") | |
self.assertEqual(e.row_num, 1) | |
self.assertTrue(re.search( | |
r"header row should not contain any space char", | |
e.FormatProblem())) | |
self.problems.AssertNoMoreExceptions() | |
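The mixed line-end detection exercised by the tests above can be sketched standalone. This is a hypothetical helper, not the transitfeed.EndOfLineChecker implementation; it only counts CR LF versus bare LF line ends:

```python
def count_line_ends(text):
    """Return ([CR LF line numbers], [bare LF line numbers]), 1-based.

    A hypothetical sketch of the consistency check EndOfLineChecker
    performs; not the transitfeed implementation.
    """
    crlf_lines, lf_lines = [], []
    # splitlines(True) keeps each line's terminator so it can be inspected.
    for num, line in enumerate(text.splitlines(True), 1):
        if line.endswith('\r\n'):
            crlf_lines.append(num)
        elif line.endswith('\n'):
            lf_lines.append(num)
    return crlf_lines, lf_lines
```

On the input from testEndOfLineMix, `count_line_ends("line1\nline2\r\nline3\nline4")` yields one CR LF end (line 2) and two LF ends (lines 1 and 3), matching the reported problem.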
class LoadTestCase(unittest.TestCase): | |
def setUp(self): | |
self.problems = RecordingProblemReporter(self, ("ExpirationDate",)) | |
def Load(self, feed_name): | |
loader = transitfeed.Loader( | |
DataPath(feed_name), problems=self.problems, extra_validation=True) | |
loader.Load() | |
def ExpectInvalidValue(self, feed_name, column_name): | |
self.Load(feed_name) | |
self.problems.PopInvalidValue(column_name) | |
self.problems.AssertNoMoreExceptions() | |
def ExpectMissingFile(self, feed_name, file_name): | |
self.Load(feed_name) | |
e = self.problems.PopException("MissingFile") | |
self.assertEqual(file_name, e.file_name) | |
# Don't call AssertNoMoreExceptions() because a missing file causes | |
# many errors. | |
class LoadFromZipTestCase(unittest.TestCase): | |
def runTest(self): | |
loader = transitfeed.Loader( | |
DataPath('good_feed.zip'), | |
problems = TestFailureProblemReporter(self), | |
extra_validation = True) | |
loader.Load() | |
# now try using Schedule.Load | |
schedule = transitfeed.Schedule( | |
problem_reporter=ExceptionProblemReporterNoExpiration()) | |
schedule.Load(DataPath('good_feed.zip'), extra_validation=True) | |
class LoadAndRewriteFromZipTestCase(unittest.TestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule( | |
problem_reporter=ExceptionProblemReporterNoExpiration()) | |
schedule.Load(DataPath('good_feed.zip'), extra_validation=True) | |
# Finally, make sure writing the feed out again doesn't crash
schedule.WriteGoogleTransitFeed(tempfile.TemporaryFile()) | |
class LoadFromDirectoryTestCase(unittest.TestCase): | |
def runTest(self): | |
loader = transitfeed.Loader( | |
DataPath('good_feed'), | |
problems = TestFailureProblemReporter(self), | |
extra_validation = True) | |
loader.Load() | |
class LoadUnknownFeedTestCase(unittest.TestCase): | |
def runTest(self): | |
feed_name = DataPath('unknown_feed') | |
loader = transitfeed.Loader( | |
feed_name, | |
problems = ExceptionProblemReporterNoExpiration(), | |
extra_validation = True) | |
try: | |
loader.Load() | |
self.fail('FeedNotFound exception expected') | |
except transitfeed.FeedNotFound, e: | |
self.assertEqual(feed_name, e.feed_name) | |
class LoadUnknownFormatTestCase(unittest.TestCase): | |
def runTest(self): | |
feed_name = DataPath('unknown_format.zip') | |
loader = transitfeed.Loader( | |
feed_name, | |
problems = ExceptionProblemReporterNoExpiration(), | |
extra_validation = True) | |
try: | |
loader.Load() | |
self.fail('UnknownFormat exception expected') | |
except transitfeed.UnknownFormat, e: | |
self.assertEqual(feed_name, e.feed_name) | |
class LoadUnrecognizedColumnsTestCase(unittest.TestCase): | |
def runTest(self): | |
problems = UnrecognizedColumnRecorder(self) | |
loader = transitfeed.Loader(DataPath('unrecognized_columns'), | |
problems=problems) | |
loader.Load() | |
found_errors = set(problems.column_errors) | |
expected_errors = set([ | |
('agency.txt', 'agency_lange'), | |
('stops.txt', 'stop_uri'), | |
('routes.txt', 'Route_Text_Color'), | |
('calendar.txt', 'leap_day'), | |
('calendar_dates.txt', 'leap_day'), | |
('trips.txt', 'sharpe_id'), | |
('stop_times.txt', 'shapedisttraveled'), | |
('stop_times.txt', 'drop_off_time'), | |
('fare_attributes.txt', 'transfer_time'), | |
('fare_rules.txt', 'source_id'), | |
('frequencies.txt', 'superfluous'), | |
('transfers.txt', 'to_stop') | |
]) | |
# Now make sure we got the unrecognized column errors that we expected. | |
not_expected = found_errors.difference(expected_errors) | |
self.failIf(not_expected, 'unexpected errors: %s' % str(not_expected)) | |
not_found = expected_errors.difference(found_errors) | |
self.failIf(not_found, 'expected but not found: %s' % str(not_found)) | |
class LoadExtraCellValidationTestCase(LoadTestCase): | |
"""Check that the validation detects too many cells in a row.""" | |
def runTest(self): | |
self.Load('extra_row_cells') | |
e = self.problems.PopException("OtherProblem") | |
self.assertEquals("routes.txt", e.file_name) | |
self.assertEquals(4, e.row_num) | |
self.problems.AssertNoMoreExceptions() | |
class LoadMissingCellValidationTestCase(LoadTestCase): | |
"""Check that the validation detects missing cells in a row.""" | |
def runTest(self): | |
self.Load('missing_row_cells') | |
e = self.problems.PopException("OtherProblem") | |
self.assertEquals("routes.txt", e.file_name) | |
self.assertEquals(4, e.row_num) | |
self.problems.AssertNoMoreExceptions() | |
class LoadUnknownFileTestCase(unittest.TestCase): | |
"""Check that the validation detects unknown files.""" | |
def runTest(self): | |
feed_name = DataPath('unknown_file') | |
self.problems = RecordingProblemReporter(self, ("ExpirationDate",)) | |
loader = transitfeed.Loader( | |
feed_name, | |
problems = self.problems, | |
extra_validation = True) | |
loader.Load() | |
e = self.problems.PopException('UnknownFile') | |
self.assertEqual('frecuencias.txt', e.file_name) | |
self.problems.AssertNoMoreExceptions() | |
class LoadUTF8BOMTestCase(unittest.TestCase): | |
def runTest(self): | |
loader = transitfeed.Loader( | |
DataPath('utf8bom'), | |
problems = TestFailureProblemReporter(self), | |
extra_validation = True) | |
loader.Load() | |
class LoadUTF16TestCase(unittest.TestCase): | |
def runTest(self): | |
# utf16 generated by `recode utf8..utf16 *' | |
loader = transitfeed.Loader( | |
DataPath('utf16'), | |
problems = transitfeed.ExceptionProblemReporter(), | |
extra_validation = True) | |
try: | |
loader.Load() | |
# TODO: make sure processing proceeds beyond the problem | |
self.fail('FileFormat exception expected') | |
except transitfeed.FileFormat, e: | |
# make sure these don't raise an exception | |
self.assertTrue(re.search(r'encoded in utf-16', e.FormatProblem())) | |
e.FormatContext() | |
class LoadNullTestCase(unittest.TestCase): | |
def runTest(self): | |
loader = transitfeed.Loader( | |
DataPath('contains_null'), | |
problems = transitfeed.ExceptionProblemReporter(), | |
extra_validation = True) | |
try: | |
loader.Load() | |
self.fail('FileFormat exception expected') | |
except transitfeed.FileFormat, e: | |
self.assertTrue(re.search(r'contains a null', e.FormatProblem())) | |
# make sure these don't raise an exception | |
e.FormatContext() | |
class ProblemReporterTestCase(RedirectStdOutTestCaseBase): | |
# Unittest for problem reporter | |
def testContextWithBadUnicodeProblem(self): | |
pr = transitfeed.ProblemReporter() | |
# Context has valid unicode values | |
pr.SetFileContext('filename.foo', 23, | |
[u'Andr\202', u'Person \uc720 foo', None], | |
[u'1\202', u'2\202', u'3\202']) | |
pr.OtherProblem('test string') | |
pr.OtherProblem(u'\xff\xfe\x80\x88') | |
# Invalid ascii and utf-8. encode('utf-8') and decode('utf-8') will fail | |
# for this value | |
pr.OtherProblem('\xff\xfe\x80\x88') | |
self.assertTrue(re.search(r"test string", self.this_stdout.getvalue())) | |
self.assertTrue(re.search(r"filename.foo:23", self.this_stdout.getvalue())) | |
def testNoContextWithBadUnicode(self): | |
pr = transitfeed.ProblemReporter() | |
pr.OtherProblem('test string') | |
pr.OtherProblem(u'\xff\xfe\x80\x88') | |
# Invalid ascii and utf-8. encode('utf-8') and decode('utf-8') will fail | |
# for this value | |
pr.OtherProblem('\xff\xfe\x80\x88') | |
self.assertTrue(re.search(r"test string", self.this_stdout.getvalue())) | |
def testBadUnicodeContext(self): | |
pr = transitfeed.ProblemReporter() | |
pr.SetFileContext('filename.foo', 23, | |
[u'Andr\202', 'Person \xff\xfe\x80\x88 foo', None], | |
[u'1\202', u'2\202', u'3\202']) | |
pr.OtherProblem("help, my context isn't utf-8!") | |
self.assertTrue(re.search(r"help, my context", self.this_stdout.getvalue())) | |
self.assertTrue(re.search(r"filename.foo:23", self.this_stdout.getvalue())) | |
def testLongWord(self): | |
# Make sure LineWrap doesn't puke | |
pr = transitfeed.ProblemReporter() | |
pr.OtherProblem('1111untheontuhoenuthoentuhntoehuontehuntoehuntoehunto' | |
'2222oheuntheounthoeunthoeunthoeuntheontuheontuhoue') | |
self.assertTrue(re.search(r"1111.+2222", self.this_stdout.getvalue())) | |
class BadProblemReporterTestCase(RedirectStdOutTestCaseBase): | |
"""Make sure ProblemReporter doesn't crash when given bad unicode data and | |
does find some error""" | |
# tom.brown.code-utf8_weaknesses fixed a bug with problem reporter and bad | |
# utf-8 strings | |
def runTest(self): | |
loader = transitfeed.Loader( | |
DataPath('bad_utf8'), | |
problems = transitfeed.ProblemReporter(), | |
extra_validation = True) | |
loader.Load() | |
# raises exception if not found | |
self.this_stdout.getvalue().index('Invalid value') | |
class BadUtf8TestCase(LoadTestCase): | |
def runTest(self): | |
self.Load('bad_utf8') | |
self.problems.PopException("UnrecognizedColumn") | |
self.problems.PopInvalidValue("agency_name", "agency.txt") | |
self.problems.PopInvalidValue("stop_name", "stops.txt") | |
self.problems.PopInvalidValue("route_short_name", "routes.txt") | |
self.problems.PopInvalidValue("route_long_name", "routes.txt") | |
self.problems.PopInvalidValue("trip_headsign", "trips.txt") | |
self.problems.PopInvalidValue("stop_headsign", "stop_times.txt") | |
self.problems.AssertNoMoreExceptions() | |
class LoadMissingAgencyTestCase(LoadTestCase): | |
def runTest(self): | |
self.ExpectMissingFile('missing_agency', 'agency.txt') | |
class LoadMissingStopsTestCase(LoadTestCase): | |
def runTest(self): | |
self.ExpectMissingFile('missing_stops', 'stops.txt') | |
class LoadMissingRoutesTestCase(LoadTestCase): | |
def runTest(self): | |
self.ExpectMissingFile('missing_routes', 'routes.txt') | |
class LoadMissingTripsTestCase(LoadTestCase): | |
def runTest(self): | |
self.ExpectMissingFile('missing_trips', 'trips.txt') | |
class LoadMissingStopTimesTestCase(LoadTestCase): | |
def runTest(self): | |
self.ExpectMissingFile('missing_stop_times', 'stop_times.txt') | |
class LoadMissingCalendarTestCase(LoadTestCase): | |
def runTest(self): | |
self.ExpectMissingFile('missing_calendar', 'calendar.txt') | |
class EmptyFileTestCase(unittest.TestCase): | |
def runTest(self): | |
loader = transitfeed.Loader( | |
DataPath('empty_file'), | |
problems = ExceptionProblemReporterNoExpiration(), | |
extra_validation = True) | |
try: | |
loader.Load() | |
self.fail('EmptyFile exception expected') | |
except transitfeed.EmptyFile, e: | |
self.assertEqual('agency.txt', e.file_name) | |
class MissingColumnTestCase(unittest.TestCase): | |
def runTest(self): | |
loader = transitfeed.Loader( | |
DataPath('missing_column'), | |
problems = ExceptionProblemReporterNoExpiration(), | |
extra_validation = True) | |
try: | |
loader.Load() | |
self.fail('MissingColumn exception expected') | |
except transitfeed.MissingColumn, e: | |
self.assertEqual('agency.txt', e.file_name) | |
self.assertEqual('agency_name', e.column_name) | |
class ZeroBasedStopSequenceTestCase(LoadTestCase): | |
def runTest(self): | |
self.ExpectInvalidValue('negative_stop_sequence', 'stop_sequence') | |
class DuplicateStopTestCase(unittest.TestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule( | |
problem_reporter=ExceptionProblemReporterNoExpiration()) | |
try: | |
schedule.Load(DataPath('duplicate_stop'), extra_validation=True) | |
self.fail('OtherProblem exception expected') | |
except transitfeed.OtherProblem: | |
pass | |
class DuplicateStopSequenceTestCase(unittest.TestCase): | |
def runTest(self): | |
problems = RecordingProblemReporter(self, ("ExpirationDate",)) | |
schedule = transitfeed.Schedule(problem_reporter=problems) | |
schedule.Load(DataPath('duplicate_stop_sequence'), extra_validation=True) | |
e = problems.PopException('InvalidValue') | |
self.assertEqual('stop_sequence', e.column_name) | |
problems.AssertNoMoreExceptions() | |
class MissingEndpointTimesTestCase(unittest.TestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule( | |
problem_reporter=ExceptionProblemReporterNoExpiration()) | |
try: | |
schedule.Load(DataPath('missing_endpoint_times'), extra_validation=True) | |
self.fail('InvalidValue exception expected') | |
except transitfeed.InvalidValue, e: | |
self.assertEqual('departure_time', e.column_name) | |
self.assertEqual('', e.value) | |
class DuplicateScheduleIDTestCase(unittest.TestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule( | |
problem_reporter=ExceptionProblemReporterNoExpiration()) | |
try: | |
schedule.Load(DataPath('duplicate_schedule_id'), extra_validation=True) | |
self.fail('DuplicateID exception expected') | |
except transitfeed.DuplicateID: | |
pass | |
class ColorLuminanceTestCase(unittest.TestCase): | |
def runTest(self): | |
self.assertEqual(transitfeed.ColorLuminance('000000'), 0, | |
"ColorLuminance('000000') should be zero") | |
self.assertEqual(transitfeed.ColorLuminance('FFFFFF'), 255, | |
"ColorLuminance('FFFFFF') should be 255") | |
RGBmsg = ("ColorLuminance('RRGGBB') should be " | |
"0.299*<Red> + 0.587*<Green> + 0.114*<Blue>") | |
decimal_places_tested = 8 | |
self.assertAlmostEqual(transitfeed.ColorLuminance('640000'), 29.9, | |
decimal_places_tested, RGBmsg) | |
self.assertAlmostEqual(transitfeed.ColorLuminance('006400'), 58.7, | |
decimal_places_tested, RGBmsg) | |
self.assertAlmostEqual(transitfeed.ColorLuminance('000064'), 11.4, | |
decimal_places_tested, RGBmsg) | |
self.assertAlmostEqual(transitfeed.ColorLuminance('1171B3'), | |
0.299*17 + 0.587*113 + 0.114*179, | |
decimal_places_tested, RGBmsg) | |
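The formula these assertions pin down is the ITU-R BT.601 luma weighting. A minimal standalone sketch follows; `color_luminance` is a hypothetical helper, not the transitfeed.ColorLuminance implementation:

```python
def color_luminance(color):
    # ITU-R BT.601 luma of a six-hex-digit RRGGBB color string.
    # Hypothetical sketch of the formula ColorLuminanceTestCase checks.
    r = int(color[0:2], 16)
    g = int(color[2:4], 16)
    b = int(color[4:6], 16)
    return 0.299 * r + 0.587 * g + 0.114 * b
```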
INVALID_VALUE = Exception() | |
class ValidationTestCase(util.TestCaseAsserts): | |
def setUp(self): | |
self.problems = RecordingProblemReporter(self, ("ExpirationDate",)) | |
def ExpectNoProblems(self, object): | |
self.problems.AssertNoMoreExceptions() | |
object.Validate(self.problems) | |
self.problems.AssertNoMoreExceptions() | |
# TODO: Get rid of Expect*Closure methods. With the | |
# RecordingProblemReporter it is now possible to replace | |
# self.ExpectMissingValueInClosure(foo, lambda: o.method(...))
# with | |
# o.method(...) | |
# self.ExpectMissingValueInClosure(foo) | |
# because problems don't raise an exception. This has the advantage of | |
# making it easy and clear to test the return value of o.method(...) and | |
# easier to test for a sequence of problems caused by one call. | |
def ExpectMissingValue(self, object, column_name): | |
self.ExpectMissingValueInClosure(column_name, | |
lambda: object.Validate(self.problems)) | |
def ExpectMissingValueInClosure(self, column_name, c): | |
self.problems.AssertNoMoreExceptions() | |
rv = c() | |
e = self.problems.PopException('MissingValue') | |
self.assertEqual(column_name, e.column_name) | |
# these should not throw any exceptions | |
e.FormatProblem() | |
e.FormatContext() | |
self.problems.AssertNoMoreExceptions() | |
def ExpectInvalidValue(self, object, column_name, value=INVALID_VALUE): | |
self.ExpectInvalidValueInClosure(column_name, value, | |
lambda: object.Validate(self.problems)) | |
def ExpectInvalidValueInClosure(self, column_name, value=INVALID_VALUE, | |
c=None): | |
self.problems.AssertNoMoreExceptions() | |
rv = c() | |
e = self.problems.PopException('InvalidValue') | |
self.assertEqual(column_name, e.column_name) | |
if value != INVALID_VALUE: | |
self.assertEqual(value, e.value) | |
# these should not throw any exceptions | |
e.FormatProblem() | |
e.FormatContext() | |
self.problems.AssertNoMoreExceptions() | |
def ExpectOtherProblem(self, object): | |
self.ExpectOtherProblemInClosure(lambda: object.Validate(self.problems)) | |
def ExpectOtherProblemInClosure(self, c): | |
self.problems.AssertNoMoreExceptions() | |
rv = c() | |
e = self.problems.PopException('OtherProblem') | |
# these should not throw any exceptions | |
e.FormatProblem() | |
e.FormatContext() | |
self.problems.AssertNoMoreExceptions() | |
class AgencyValidationTestCase(ValidationTestCase): | |
def runTest(self): | |
# success case | |
agency = transitfeed.Agency(name='Test Agency', url='http://example.com', | |
timezone='America/Los_Angeles', id='TA', | |
lang='xh') | |
self.ExpectNoProblems(agency) | |
# whitespace-only name counts as missing
agency = transitfeed.Agency(name=' ', url='http://example.com', | |
timezone='America/Los_Angeles', id='TA') | |
self.ExpectMissingValue(agency, 'agency_name') | |
# missing url | |
agency = transitfeed.Agency(name='Test Agency', | |
timezone='America/Los_Angeles', id='TA') | |
self.ExpectMissingValue(agency, 'agency_url') | |
# bad url | |
agency = transitfeed.Agency(name='Test Agency', url='www.example.com', | |
timezone='America/Los_Angeles', id='TA') | |
self.ExpectInvalidValue(agency, 'agency_url') | |
# bad time zone | |
agency = transitfeed.Agency(name='Test Agency', url='http://example.com', | |
timezone='America/Alviso', id='TA') | |
agency.Validate(self.problems) | |
e = self.problems.PopInvalidValue('agency_timezone') | |
self.assertMatchesRegex('"America/Alviso" is not a common timezone', | |
e.FormatProblem()) | |
self.problems.AssertNoMoreExceptions() | |
# bad language code | |
agency = transitfeed.Agency(name='Test Agency', url='http://example.com', | |
timezone='America/Los_Angeles', id='TA', | |
lang='English') | |
self.ExpectInvalidValue(agency, 'agency_lang') | |
# bad 2-letter language code
agency = transitfeed.Agency(name='Test Agency', url='http://example.com', | |
timezone='America/Los_Angeles', id='TA', | |
lang='xx') | |
self.ExpectInvalidValue(agency, 'agency_lang') | |
# capitalized language code is OK | |
agency = transitfeed.Agency(name='Test Agency', url='http://example.com', | |
timezone='America/Los_Angeles', id='TA', | |
lang='EN') | |
self.ExpectNoProblems(agency) | |
# extra attribute in constructor is fine, only checked when loading a file | |
agency = transitfeed.Agency(name='Test Agency', url='http://example.com', | |
timezone='America/Los_Angeles', | |
agency_mission='monorail you there') | |
self.ExpectNoProblems(agency) | |
# extra attribute assigned later is also fine
agency = transitfeed.Agency(name='Test Agency', url='http://example.com', | |
timezone='America/Los_Angeles') | |
agency.agency_mission='monorail you there' | |
self.ExpectNoProblems(agency) | |
# Multiple problems | |
agency = transitfeed.Agency(name='Test Agency', url='www.example.com', | |
timezone='America/West Coast', id='TA') | |
self.assertEquals(False, agency.Validate(self.problems)) | |
e = self.problems.PopException('InvalidValue') | |
self.assertEqual(e.column_name, 'agency_url') | |
e = self.problems.PopException('InvalidValue') | |
self.assertEqual(e.column_name, 'agency_timezone') | |
self.problems.AssertNoMoreExceptions() | |
class AgencyAttributesTestCase(ValidationTestCase): | |
def testCopy(self): | |
agency = transitfeed.Agency(field_dict={'agency_name': 'Test Agency', | |
'agency_url': 'http://example.com', | |
'timezone': 'America/Los_Angeles', | |
'agency_mission': 'get you there'}) | |
self.assertEquals(agency.agency_mission, 'get you there') | |
agency_copy = transitfeed.Agency(field_dict=agency) | |
self.assertEquals(agency_copy.agency_mission, 'get you there') | |
self.assertEquals(agency_copy['agency_mission'], 'get you there') | |
def testEq(self): | |
agency1 = transitfeed.Agency("Test Agency", "http://example.com", | |
"America/Los_Angeles") | |
agency2 = transitfeed.Agency("Test Agency", "http://example.com", | |
"America/Los_Angeles") | |
# Unknown columns, such as agency_mission, do affect equality | |
self.assertEquals(agency1, agency2) | |
agency1.agency_mission = "Get you there" | |
self.assertNotEquals(agency1, agency2) | |
agency2.agency_mission = "Move you" | |
self.assertNotEquals(agency1, agency2) | |
agency1.agency_mission = "Move you" | |
self.assertEquals(agency1, agency2) | |
# Private attributes don't affect equality | |
agency1._private_attr = "My private message" | |
self.assertEquals(agency1, agency2) | |
agency2._private_attr = "Another private thing" | |
self.assertEquals(agency1, agency2) | |
def testDict(self): | |
agency = transitfeed.Agency("Test Agency", "http://example.com", | |
"America/Los_Angeles") | |
agency._private_attribute = "blah" | |
# Private attributes don't appear when iterating through an agency as a | |
# dict but can be directly accessed. | |
self.assertEquals("blah", agency._private_attribute) | |
self.assertEquals("blah", agency["_private_attribute"]) | |
self.assertEquals( | |
set("agency_name agency_url agency_timezone".split()), | |
set(agency.keys())) | |
self.assertEquals({"agency_name": "Test Agency", | |
"agency_url": "http://example.com", | |
"agency_timezone": "America/Los_Angeles"}, | |
dict(agency.iteritems())) | |
class StopValidationTestCase(ValidationTestCase): | |
def runTest(self): | |
# success case | |
stop = transitfeed.Stop() | |
stop.stop_id = '45' | |
stop.stop_name = 'Couch AT End Table' | |
stop.stop_lat = 50.0 | |
stop.stop_lon = 50.0 | |
stop.stop_desc = 'Edge of the Couch' | |
stop.zone_id = 'A' | |
stop.stop_url = 'http://example.com' | |
stop.Validate(self.problems) | |
# latitude too large | |
stop.stop_lat = 100.0 | |
self.ExpectInvalidValue(stop, 'stop_lat') | |
stop.stop_lat = 50.0 | |
# latitude as a string works when it is valid | |
stop.stop_lat = '50.0' | |
stop.Validate(self.problems) | |
self.problems.AssertNoMoreExceptions() | |
stop.stop_lat = '10f' | |
self.ExpectInvalidValue(stop, 'stop_lat') | |
stop.stop_lat = 50.0 | |
# longitude too large | |
stop.stop_lon = 200.0 | |
self.ExpectInvalidValue(stop, 'stop_lon') | |
stop.stop_lon = 50.0 | |
# lat, lon too close to 0, 0 | |
stop.stop_lat = 0.0 | |
stop.stop_lon = 0.0 | |
self.ExpectInvalidValue(stop, 'stop_lat') | |
stop.stop_lat = 50.0 | |
stop.stop_lon = 50.0 | |
# invalid stop_url | |
stop.stop_url = 'www.example.com' | |
self.ExpectInvalidValue(stop, 'stop_url') | |
stop.stop_url = 'http://example.com' | |
stop.stop_id = ' ' | |
self.ExpectMissingValue(stop, 'stop_id') | |
stop.stop_id = '45' | |
stop.stop_name = '' | |
self.ExpectMissingValue(stop, 'stop_name') | |
stop.stop_name = 'Couch AT End Table' | |
# description same as name | |
stop.stop_desc = 'Couch AT End Table' | |
self.ExpectInvalidValue(stop, 'stop_desc') | |
stop.stop_desc = 'Edge of the Couch' | |
self.problems.AssertNoMoreExceptions() | |
class StopAttributes(ValidationTestCase): | |
def testWithoutSchedule(self): | |
stop = transitfeed.Stop() | |
stop.Validate(self.problems) | |
for name in "stop_id stop_name stop_lat stop_lon".split(): | |
e = self.problems.PopException('MissingValue') | |
self.assertEquals(name, e.column_name) | |
self.problems.AssertNoMoreExceptions() | |
stop = transitfeed.Stop() | |
# Test behaviour for unset and unknown attribute | |
self.assertEquals(stop['new_column'], '') | |
try: | |
t = stop.new_column | |
self.fail('Expecting AttributeError') | |
except AttributeError, e: | |
pass # Expected | |
stop.stop_id = 'a' | |
stop.stop_name = 'my stop' | |
stop.new_column = 'val' | |
stop.stop_lat = 5.909 | |
stop.stop_lon = '40.02' | |
self.assertEquals(stop.new_column, 'val') | |
self.assertEquals(stop['new_column'], 'val') | |
self.assertTrue(isinstance(stop['stop_lat'], basestring)) | |
self.assertAlmostEqual(float(stop['stop_lat']), 5.909) | |
self.assertTrue(isinstance(stop['stop_lon'], basestring)) | |
self.assertAlmostEqual(float(stop['stop_lon']), 40.02) | |
stop.Validate(self.problems) | |
self.problems.AssertNoMoreExceptions() | |
# After validation stop.stop_lon has been converted to a float | |
self.assertAlmostEqual(stop.stop_lat, 5.909) | |
self.assertAlmostEqual(stop.stop_lon, 40.02) | |
self.assertEquals(stop.new_column, 'val') | |
self.assertEquals(stop['new_column'], 'val') | |
def testBlankAttributeName(self): | |
stop1 = transitfeed.Stop(field_dict={"": "a"}) | |
stop2 = transitfeed.Stop(field_dict=stop1) | |
self.assertEquals("a", getattr(stop1, "")) | |
# The attribute "" is treated as private and not copied | |
self.assertRaises(AttributeError, getattr, stop2, "") | |
self.assertEquals(set(), set(stop1.keys())) | |
self.assertEquals(set(), set(stop2.keys())) | |
def testWithSchedule(self): | |
schedule = transitfeed.Schedule(problem_reporter=self.problems) | |
stop = transitfeed.Stop(field_dict={}) | |
# AddStopObject silently fails for Stop objects without stop_id | |
schedule.AddStopObject(stop) | |
self.assertFalse(schedule.GetStopList()) | |
self.assertFalse(stop._schedule) | |
# Okay to add a stop with only stop_id | |
stop = transitfeed.Stop(field_dict={"stop_id": "b"}) | |
schedule.AddStopObject(stop) | |
stop.Validate(self.problems) | |
for name in "stop_name stop_lat stop_lon".split(): | |
e = self.problems.PopException("MissingValue") | |
self.assertEquals(name, e.column_name) | |
self.problems.AssertNoMoreExceptions() | |
stop.new_column = "val" | |
self.assertTrue("new_column" in schedule.GetTableColumns("stops")) | |
# Adding a duplicate stop_id fails | |
schedule.AddStopObject(transitfeed.Stop(field_dict={"stop_id": "b"})) | |
self.problems.PopException("DuplicateID") | |
self.problems.AssertNoMoreExceptions() | |
class StopTimeValidationTestCase(ValidationTestCase): | |
def runTest(self): | |
stop = transitfeed.Stop() | |
self.ExpectInvalidValueInClosure('arrival_time', '1a:00:00', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="1a:00:00")) | |
self.ExpectInvalidValueInClosure('departure_time', '1a:00:00', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="10:00:00", | |
departure_time='1a:00:00')) | |
self.ExpectInvalidValueInClosure('pickup_type', '7.8', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="10:00:00", | |
departure_time='10:05:00', | |
pickup_type='7.8', | |
drop_off_type='0')) | |
self.ExpectInvalidValueInClosure('drop_off_type', 'a', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="10:00:00", | |
departure_time='10:05:00', | |
pickup_type='3', | |
drop_off_type='a')) | |
self.ExpectInvalidValueInClosure('shape_dist_traveled', '$', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="10:00:00", | |
departure_time='10:05:00', | |
pickup_type='3', | |
drop_off_type='0', | |
shape_dist_traveled='$')) | |
self.ExpectInvalidValueInClosure('shape_dist_traveled', '0,53', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="10:00:00", | |
departure_time='10:05:00', | |
pickup_type='3', | |
drop_off_type='0', | |
shape_dist_traveled='0,53')) | |
self.ExpectOtherProblemInClosure( | |
lambda: transitfeed.StopTime(self.problems, stop, | |
pickup_type='1', drop_off_type='1')) | |
self.ExpectInvalidValueInClosure('departure_time', '10:00:00', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="11:00:00", | |
departure_time="10:00:00")) | |
self.ExpectMissingValueInClosure('arrival_time', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
departure_time="10:00:00")) | |
self.ExpectMissingValueInClosure('arrival_time', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
departure_time="10:00:00", | |
arrival_time="")) | |
self.ExpectMissingValueInClosure('departure_time', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="10:00:00")) | |
self.ExpectMissingValueInClosure('departure_time', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="10:00:00", | |
departure_time="")) | |
self.ExpectInvalidValueInClosure('departure_time', '10:70:00', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="10:00:00", | |
departure_time="10:70:00")) | |
self.ExpectInvalidValueInClosure('departure_time', '10:00:62', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="10:00:00", | |
departure_time="10:00:62")) | |
self.ExpectInvalidValueInClosure('arrival_time', '10:00:63', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="10:00:63", | |
departure_time="10:10:00")) | |
self.ExpectInvalidValueInClosure('arrival_time', '10:60:00', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="10:60:00", | |
departure_time="11:02:00")) | |
# The following should work | |
transitfeed.StopTime(self.problems, stop, arrival_time="10:00:00", | |
departure_time="10:05:00", pickup_type='1', drop_off_type='1') | |
transitfeed.StopTime(self.problems, stop, arrival_time="1:00:00", | |
departure_time="1:05:00") | |
transitfeed.StopTime(self.problems, stop, arrival_time="24:59:00", | |
departure_time="25:05:00") | |
transitfeed.StopTime(self.problems, stop, arrival_time="101:01:00", | |
departure_time="101:21:00") | |
transitfeed.StopTime(self.problems, stop) | |
self.problems.AssertNoMoreExceptions() | |
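The time strings accepted and rejected above follow the GTFS HH:MM:SS rules: hours may exceed 23 for trips running past midnight (so "25:05:00" is valid), while minutes and seconds must be below 60. A hedged sketch of that parsing; this is a hypothetical helper, only approximating what the transitfeed parser does:

```python
import re

def parse_gtfs_time(value):
    # Convert a GTFS H:MM:SS / HH:MM:SS string to seconds since midnight.
    # Hours may run past 23 ("25:05:00" is 1:05 am on the next service day);
    # minutes and seconds must be 00-59. Hypothetical sketch, not the
    # transitfeed implementation.
    m = re.match(r'^(\d{1,3}):([0-5]\d):([0-5]\d)$', value)
    if not m:
        raise ValueError('invalid HH:MM:SS time: %r' % (value,))
    hours, minutes, seconds = [int(part) for part in m.groups()]
    return hours * 3600 + minutes * 60 + seconds
```

It accepts the "should work" values above ("1:05:00", "25:05:00", "101:21:00") and rejects the ones the tests flag as invalid ("10:70:00", "10:00:62").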
class TooFastTravelTestCase(ValidationTestCase): | |
def setUp(self): | |
ValidationTestCase.setUp(self) | |
self.schedule = transitfeed.Schedule(problem_reporter=self.problems) | |
self.schedule.NewDefaultAgency(agency_name="Test Agency", | |
agency_url="http://example.com", | |
agency_timezone="America/Los_Angeles") | |
self.route = self.schedule.AddRoute(short_name="54C", | |
long_name="Polish Hill", route_type=3) | |
service_period = self.schedule.GetDefaultServicePeriod() | |
service_period.SetDateHasService("20070101") | |
self.trip = self.route.AddTrip(self.schedule, 'via Polish Hill') | |
def AddStopDistanceTime(self, dist_time_list): | |
# latitude at which each 0.01 degrees of longitude spans 1 km
# (i.e. 0.00001 degrees per meter, matching the factor below)
magic_lat = 26.062468289 | |
stop = self.schedule.AddStop(magic_lat, 0, "Demo Stop 0") | |
time = 0 | |
self.trip.AddStopTime(stop, arrival_secs=time, departure_secs=time) | |
for i, (dist_delta, time_delta) in enumerate(dist_time_list): | |
stop = self.schedule.AddStop( | |
magic_lat, stop.stop_lon + dist_delta * 0.00001, | |
"Demo Stop %d" % (i + 1)) | |
time += time_delta | |
self.trip.AddStopTime(stop, arrival_secs=time, departure_secs=time) | |
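The "magic" latitude can be sanity-checked independently. A minimal sketch, assuming a spherical Earth with a mean radius of 6371 km (an approximation; transitfeed's own distance code may use slightly different constants):

```python
import math

# At magic_lat, 0.00001 degrees of longitude spans about one meter, so the
# dist_delta values passed to AddStopDistanceTime are effectively meters.
EARTH_RADIUS_M = 6371000.0  # assumed mean radius for this sketch
magic_lat = 26.062468289

meters_per_degree_lon = (math.radians(1.0) * EARTH_RADIUS_M *
                         math.cos(math.radians(magic_lat)))
meters_per_unit = meters_per_degree_lon * 0.00001
print(round(meters_per_unit, 2))  # close to 1.0
```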
def testMovingTooFast(self): | |
self.AddStopDistanceTime([(1691, 60), | |
(1616, 60)]) | |
self.trip.Validate(self.problems) | |
e = self.problems.PopException('TooFastTravel') | |
self.assertMatchesRegex(r'High speed travel detected', e.FormatProblem()) | |
self.assertMatchesRegex(r'Stop 0 to Demo Stop 1', e.FormatProblem()) | |
self.assertMatchesRegex(r'1691 meters in 60 seconds', e.FormatProblem()) | |
self.assertMatchesRegex(r'\(101 km/h\)', e.FormatProblem()) | |
self.assertEqual(e.type, transitfeed.TYPE_WARNING) | |
self.problems.AssertNoMoreExceptions() | |
self.route.route_type = 4 # Ferry with max_speed 80 | |
self.trip.Validate(self.problems) | |
e = self.problems.PopException('TooFastTravel') | |
self.assertMatchesRegex(r'High speed travel detected', e.FormatProblem()) | |
self.assertMatchesRegex(r'Stop 0 to Demo Stop 1', e.FormatProblem()) | |
self.assertMatchesRegex(r'1691 meters in 60 seconds', e.FormatProblem()) | |
self.assertMatchesRegex(r'\(101 km/h\)', e.FormatProblem()) | |
self.assertEqual(e.type, transitfeed.TYPE_WARNING) | |
e = self.problems.PopException('TooFastTravel') | |
self.assertMatchesRegex(r'High speed travel detected', e.FormatProblem()) | |
self.assertMatchesRegex(r'Stop 1 to Demo Stop 2', e.FormatProblem()) | |
self.assertMatchesRegex(r'1616 meters in 60 seconds', e.FormatProblem()) | |
self.assertMatchesRegex(r'97 km/h', e.FormatProblem()) | |
self.assertEqual(e.type, transitfeed.TYPE_WARNING) | |
self.problems.AssertNoMoreExceptions() | |
# Run test without a route_type | |
self.route.route_type = None | |
self.trip.Validate(self.problems) | |
e = self.problems.PopException('TooFastTravel') | |
self.assertMatchesRegex(r'High speed travel detected', e.FormatProblem()) | |
self.assertMatchesRegex(r'Stop 0 to Demo Stop 1', e.FormatProblem()) | |
self.assertMatchesRegex(r'1691 meters in 60 seconds', e.FormatProblem()) | |
self.assertMatchesRegex(r'101 km/h', e.FormatProblem()) | |
self.assertEqual(e.type, transitfeed.TYPE_WARNING) | |
self.problems.AssertNoMoreExceptions() | |
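The speeds asserted above follow directly from the distance/time pairs. A small sketch (the helper name `speed_kmh` is hypothetical, not part of transitfeed):

```python
def speed_kmh(meters, seconds):
    """Convert a distance/time pair to km/h, matching the figures asserted above."""
    return meters / float(seconds) * 3.6  # 3.6 converts m/s to km/h

print(round(speed_kmh(1691, 60)))  # 101 -- flagged for buses in the test
print(round(speed_kmh(1616, 60)))  # 97 -- flagged only once the ferry limit (80) applies
```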
def testNoTimeDelta(self): | |
# See comments where TooFastTravel is called in transitfeed.py to | |
# understand why this check was added. | |
# Movement more than max_speed in 1 minute with no time change is a warning. | |
self.AddStopDistanceTime([(1616, 0), | |
(1000, 120), | |
(1691, 0)]) | |
self.trip.Validate(self.problems) | |
e = self.problems.PopException('TooFastTravel') | |
self.assertMatchesRegex('High speed travel detected', e.FormatProblem()) | |
self.assertMatchesRegex('Stop 2 to Demo Stop 3', e.FormatProblem()) | |
self.assertMatchesRegex('1691 meters in 0 seconds', e.FormatProblem()) | |
self.assertEqual(e.type, transitfeed.TYPE_WARNING) | |
self.problems.AssertNoMoreExceptions() | |
self.route.route_type = 4 # Ferry with max_speed 80 | |
self.trip.Validate(self.problems) | |
e = self.problems.PopException('TooFastTravel') | |
self.assertMatchesRegex('High speed travel detected', e.FormatProblem()) | |
self.assertMatchesRegex('Stop 0 to Demo Stop 1', e.FormatProblem()) | |
self.assertMatchesRegex('1616 meters in 0 seconds', e.FormatProblem()) | |
self.assertEqual(e.type, transitfeed.TYPE_WARNING) | |
e = self.problems.PopException('TooFastTravel') | |
self.assertMatchesRegex('High speed travel detected', e.FormatProblem()) | |
self.assertMatchesRegex('Stop 2 to Demo Stop 3', e.FormatProblem()) | |
self.assertMatchesRegex('1691 meters in 0 seconds', e.FormatProblem()) | |
self.assertEqual(e.type, transitfeed.TYPE_WARNING) | |
self.problems.AssertNoMoreExceptions() | |
# Run test without a route_type | |
self.route.route_type = None | |
self.trip.Validate(self.problems) | |
e = self.problems.PopException('TooFastTravel') | |
self.assertMatchesRegex('High speed travel detected', e.FormatProblem()) | |
self.assertMatchesRegex('Stop 2 to Demo Stop 3', e.FormatProblem()) | |
self.assertMatchesRegex('1691 meters in 0 seconds', e.FormatProblem()) | |
self.assertEqual(e.type, transitfeed.TYPE_WARNING) | |
self.problems.AssertNoMoreExceptions() | |
def testNoTimeDeltaNotRounded(self): | |
# See comments where TooFastTravel is called in transitfeed.py to | |
# understand why this check was added. | |
# Any movement with no time change and times not rounded to the nearest | |
# minute causes a warning. | |
self.AddStopDistanceTime([(500, 62), | |
(10, 0)]) | |
self.trip.Validate(self.problems) | |
e = self.problems.PopException('TooFastTravel') | |
self.assertMatchesRegex('High speed travel detected', e.FormatProblem()) | |
self.assertMatchesRegex('Stop 1 to Demo Stop 2', e.FormatProblem()) | |
self.assertMatchesRegex('10 meters in 0 seconds', e.FormatProblem()) | |
self.assertEqual(e.type, transitfeed.TYPE_WARNING) | |
self.problems.AssertNoMoreExceptions() | |
class MemoryZipTestCase(util.TestCaseAsserts): | |
def setUp(self): | |
self.problems = RecordingProblemReporter(self, ("ExpirationDate",)) | |
self.zipfile = StringIO() | |
self.zip = zipfile.ZipFile(self.zipfile, 'a') | |
self.zip.writestr( | |
"agency.txt", | |
"agency_id,agency_name,agency_url,agency_timezone\n" | |
"DTA,Demo Agency,http://google.com,America/Los_Angeles\n") | |
self.zip.writestr( | |
"calendar.txt", | |
"service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday," | |
"start_date,end_date\n" | |
"FULLW,1,1,1,1,1,1,1,20070101,20101231\n" | |
"WE,0,0,0,0,0,1,1,20070101,20101231\n") | |
self.zip.writestr( | |
"routes.txt", | |
"route_id,agency_id,route_short_name,route_long_name,route_type\n" | |
"AB,DTA,,Airport Bullfrog,3\n") | |
self.zip.writestr( | |
"trips.txt", | |
"route_id,service_id,trip_id\n" | |
"AB,FULLW,AB1\n") | |
self.zip.writestr( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon\n" | |
"BEATTY_AIRPORT,Airport,36.868446,-116.784582\n" | |
"BULLFROG,Bullfrog,36.88108,-116.81797\n" | |
"STAGECOACH,Stagecoach Hotel,36.915682,-116.751677\n") | |
self.zip.writestr( | |
"stop_times.txt", | |
"trip_id,arrival_time,departure_time,stop_id,stop_sequence\n" | |
"AB1,10:00:00,10:00:00,BEATTY_AIRPORT,1\n" | |
"AB1,10:20:00,10:20:00,BULLFROG,2\n" | |
"AB1,10:25:00,10:25:00,STAGECOACH,3\n") | |
self.loader = transitfeed.Loader( | |
problems=self.problems, | |
extra_validation=True, | |
zip=self.zip) | |
def appendToZip(self, file, arcname, s): | |
"""Append s to the arcname in the zip stored in a file object.""" | |
zip = zipfile.ZipFile(file, 'a') | |
zip.writestr(arcname, zip.read(arcname) + s) | |
zip.close() | |
class CsvDictTestCase(unittest.TestCase): | |
def setUp(self): | |
self.problems = RecordingProblemReporter(self) | |
self.zip = zipfile.ZipFile(StringIO(), 'a') | |
self.loader = transitfeed.Loader( | |
problems=self.problems, | |
zip=self.zip) | |
def testEmptyFile(self): | |
self.zip.writestr("test.txt", "") | |
results = list(self.loader._ReadCsvDict("test.txt", [], [])) | |
self.assertEquals([], results) | |
self.problems.PopException("EmptyFile") | |
self.problems.AssertNoMoreExceptions() | |
def testHeaderOnly(self): | |
self.zip.writestr("test.txt", "test_id,test_name") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([], results) | |
self.problems.AssertNoMoreExceptions() | |
def testHeaderAndNewLineOnly(self): | |
self.zip.writestr("test.txt", "test_id,test_name\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([], results) | |
self.problems.AssertNoMoreExceptions() | |
def testHeaderWithSpaceBefore(self): | |
self.zip.writestr("test.txt", " test_id, test_name\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([], results) | |
self.problems.AssertNoMoreExceptions() | |
def testHeaderWithSpaceBeforeAfter(self): | |
self.zip.writestr("test.txt", "test_id , test_name\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([], results) | |
e = self.problems.PopException("CsvSyntax") | |
self.problems.AssertNoMoreExceptions() | |
def testHeaderQuoted(self): | |
self.zip.writestr("test.txt", "\"test_id\", \"test_name\"\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([], results) | |
self.problems.AssertNoMoreExceptions() | |
def testHeaderSpaceAfterQuoted(self): | |
self.zip.writestr("test.txt", "\"test_id\" , \"test_name\"\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([], results) | |
e = self.problems.PopException("CsvSyntax") | |
self.problems.AssertNoMoreExceptions() | |
def testHeaderSpaceInQuotesAfterValue(self): | |
self.zip.writestr("test.txt", "\"test_id \",\"test_name\"\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([], results) | |
e = self.problems.PopException("CsvSyntax") | |
self.problems.AssertNoMoreExceptions() | |
def testHeaderSpaceInQuotesBeforeValue(self): | |
self.zip.writestr("test.txt", "\"test_id\",\" test_name\"\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([], results) | |
e = self.problems.PopException("CsvSyntax") | |
self.problems.AssertNoMoreExceptions() | |
def testHeaderEmptyColumnName(self): | |
self.zip.writestr("test.txt", 'test_id,test_name,\n') | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([], results) | |
e = self.problems.PopException("CsvSyntax") | |
self.problems.AssertNoMoreExceptions() | |
def testHeaderAllUnknownColumnNames(self): | |
self.zip.writestr("test.txt", 'id,nam\n') | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([], results) | |
e = self.problems.PopException("CsvSyntax") | |
self.assertTrue(e.FormatProblem().find("missing the header") != -1) | |
self.problems.AssertNoMoreExceptions() | |
def testFieldWithSpaces(self): | |
self.zip.writestr("test.txt", | |
"test_id,test_name\n" | |
"id1 , my name\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([({"test_id": "id1 ", "test_name": "my name"}, 2, | |
["test_id", "test_name"], ["id1 ","my name"])], results) | |
self.problems.AssertNoMoreExceptions() | |
def testFieldWithOnlySpaces(self): | |
self.zip.writestr("test.txt", | |
"test_id,test_name\n" | |
"id1, \n") # spaces are skipped to yield empty field | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([({"test_id": "id1", "test_name": ""}, 2, | |
["test_id", "test_name"], ["id1",""])], results) | |
self.problems.AssertNoMoreExceptions() | |
def testQuotedFieldWithSpaces(self): | |
self.zip.writestr("test.txt", | |
'test_id,"test_name",test_size\n' | |
'"id1" , "my name" , "234 "\n') | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name", | |
"test_size"], [])) | |
self.assertEquals( | |
[({"test_id": "id1 ", "test_name": "my name ", "test_size": "234 "}, 2, | |
["test_id", "test_name", "test_size"], ["id1 ", "my name ", "234 "])], | |
results) | |
self.problems.AssertNoMoreExceptions() | |
def testQuotedFieldWithCommas(self): | |
self.zip.writestr("test.txt", | |
'id,name1,name2\n' | |
'"1", "brown, tom", "brown, ""tom"""\n') | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["id", "name1", "name2"], [])) | |
self.assertEquals( | |
[({"id": "1", "name1": "brown, tom", "name2": "brown, \"tom\""}, 2, | |
["id", "name1", "name2"], ["1", "brown, tom", "brown, \"tom\""])], | |
results) | |
self.problems.AssertNoMoreExceptions() | |
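The quoting rules this test exercises are standard CSV semantics: a quoted field may contain commas, and a doubled quote (`""`) inside a quoted field decodes to a single quote character. A minimal illustration with Python's `csv` module (note it omits the padding spaces used in the test data, since plain `csv.reader` does not tolerate padding before an opening quote the way the loader under test does):

```python
import csv
from io import StringIO

rows = list(csv.reader(StringIO('1,"brown, tom","brown, ""tom"""\n')))
print(rows)  # [['1', 'brown, tom', 'brown, "tom"']]
```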
def testUnknownColumn(self): | |
# A small typo (omitting '_' in a header name) is detected | |
self.zip.writestr("test.txt", "test_id,testname\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([], results) | |
e = self.problems.PopException("UnrecognizedColumn") | |
self.assertEquals("testname", e.column_name) | |
self.problems.AssertNoMoreExceptions() | |
def testMissingRequiredColumn(self): | |
self.zip.writestr("test.txt", "test_id,test_size\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_size"], | |
["test_name"])) | |
self.assertEquals([], results) | |
e = self.problems.PopException("MissingColumn") | |
self.assertEquals("test_name", e.column_name) | |
self.problems.AssertNoMoreExceptions() | |
def testRequiredNotInAllCols(self): | |
self.zip.writestr("test.txt", "test_id,test_name,test_size\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_size"], | |
["test_name"])) | |
self.assertEquals([], results) | |
e = self.problems.PopException("UnrecognizedColumn") | |
self.assertEquals("test_name", e.column_name) | |
self.problems.AssertNoMoreExceptions() | |
def testBlankLine(self): | |
# line_num is increased for an empty line | |
self.zip.writestr("test.txt", | |
"test_id,test_name\n" | |
"\n" | |
"id1,my name\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([({"test_id": "id1", "test_name": "my name"}, 3, | |
["test_id", "test_name"], ["id1","my name"])], results) | |
self.problems.AssertNoMoreExceptions() | |
def testExtraComma(self): | |
self.zip.writestr("test.txt", | |
"test_id,test_name\n" | |
"id1,my name,\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([({"test_id": "id1", "test_name": "my name"}, 2, | |
["test_id", "test_name"], ["id1","my name"])], | |
results) | |
e = self.problems.PopException("OtherProblem") | |
self.assertTrue(e.FormatProblem().find("too many cells") != -1) | |
self.problems.AssertNoMoreExceptions() | |
def testMissingComma(self): | |
self.zip.writestr("test.txt", | |
"test_id,test_name\n" | |
"id1 my name\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([({"test_id": "id1 my name"}, 2, | |
["test_id", "test_name"], ["id1 my name"])], results) | |
e = self.problems.PopException("OtherProblem") | |
self.assertTrue(e.FormatProblem().find("missing cells") != -1) | |
self.problems.AssertNoMoreExceptions() | |
def testDetectsDuplicateHeaders(self): | |
self.zip.writestr( | |
"transfers.txt", | |
"from_stop_id,from_stop_id,to_stop_id,transfer_type,min_transfer_time," | |
"min_transfer_time,min_transfer_time,min_transfer_time,unknown," | |
"unknown\n" | |
"BEATTY_AIRPORT,BEATTY_AIRPORT,BULLFROG,3,,2,,,,\n" | |
"BULLFROG,BULLFROG,BEATTY_AIRPORT,2,1200,1,,,,\n") | |
list(self.loader._ReadCsvDict("transfers.txt", | |
transitfeed.Transfer._FIELD_NAMES, | |
transitfeed.Transfer._REQUIRED_FIELD_NAMES)) | |
self.problems.PopDuplicateColumn("transfers.txt","min_transfer_time",4) | |
self.problems.PopDuplicateColumn("transfers.txt","from_stop_id",2) | |
self.problems.PopDuplicateColumn("transfers.txt","unknown",2) | |
e = self.problems.PopException("UnrecognizedColumn") | |
self.assertEquals("unknown", e.column_name) | |
self.problems.AssertNoMoreExceptions() | |
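Duplicate headers deserve a diagnostic because a naive dict-based CSV reader silently loses data: when a column name repeats, the last value wins and the earlier ones vanish without any error. A sketch using Python's `csv.DictReader` (which is not what transitfeed uses internally, just a familiar stand-in):

```python
import csv
from io import StringIO

reader = csv.DictReader(StringIO("from_stop_id,from_stop_id\nA,B\n"))
row = next(reader)
print(row["from_stop_id"])  # "B" -- the first column's value is silently dropped
```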
class ReadCsvTestCase(unittest.TestCase): | |
def setUp(self): | |
self.problems = RecordingProblemReporter(self) | |
self.zip = zipfile.ZipFile(StringIO(), 'a') | |
self.loader = transitfeed.Loader( | |
problems=self.problems, | |
zip=self.zip) | |
def testDetectsDuplicateHeaders(self): | |
self.zip.writestr( | |
"calendar.txt", | |
"service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday," | |
"start_date,end_date,end_date,end_date,tuesday,unknown,unknown\n" | |
"FULLW,1,1,1,1,1,1,1,20070101,20101231,,,,,\n") | |
list(self.loader._ReadCSV("calendar.txt", | |
transitfeed.ServicePeriod._FIELD_NAMES, | |
transitfeed.ServicePeriod._FIELD_NAMES_REQUIRED)) | |
self.problems.PopDuplicateColumn("calendar.txt","end_date",3) | |
self.problems.PopDuplicateColumn("calendar.txt","unknown",2) | |
self.problems.PopDuplicateColumn("calendar.txt","tuesday",2) | |
e = self.problems.PopException("UnrecognizedColumn") | |
self.assertEquals("unknown", e.column_name) | |
self.problems.AssertNoMoreExceptions() | |
class BasicMemoryZipTestCase(MemoryZipTestCase): | |
def runTest(self): | |
self.loader.Load() | |
self.problems.AssertNoMoreExceptions() | |
class ZipCompressionTestCase(MemoryZipTestCase): | |
def runTest(self): | |
schedule = self.loader.Load() | |
self.zip.close() | |
write_output = StringIO() | |
schedule.WriteGoogleTransitFeed(write_output) | |
recompressedzip = zlib.compress(write_output.getvalue()) | |
write_size = len(write_output.getvalue()) | |
recompressedzip_size = len(recompressedzip) | |
# If zlib can further compress write_output, it probably wasn't compressed | |
# to begin with | |
self.assertFalse( | |
recompressedzip_size < write_size * 0.60, | |
"Are you sure WriteGoogleTransitFeed wrote a compressed zip? " | |
"Original size: %d recompressed: %d" % | |
(write_size, recompressedzip_size)) | |
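The heuristic behind this test: deflate output is close to incompressible, while plain text shrinks substantially. A sketch of that property (random bytes stand in here for already-compressed data):

```python
import os
import zlib

text = b"stop_id,stop_name\n" * 2000
compressed_text = zlib.compress(text)
assert len(compressed_text) < len(text) * 0.60  # repetitive text compresses well

incompressible = os.urandom(20000)  # proxy for already-deflated data
recompressed = zlib.compress(incompressible)
assert len(recompressed) > len(incompressible) * 0.95  # barely shrinks
```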
class StopHierarchyTestCase(MemoryZipTestCase): | |
def testParentAtSameLatLon(self): | |
self.zip.writestr( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
"BEATTY_AIRPORT,Airport,36.868446,-116.784582,,STATION\n" | |
"STATION,Airport,36.868446,-116.784582,1,\n" | |
"BULLFROG,Bullfrog,36.88108,-116.81797,,\n" | |
"STAGECOACH,Stagecoach Hotel,36.915682,-116.751677,,\n") | |
schedule = self.loader.Load() | |
self.assertEquals(1, schedule.stops["STATION"].location_type) | |
self.assertEquals(0, schedule.stops["BEATTY_AIRPORT"].location_type) | |
self.problems.AssertNoMoreExceptions() | |
def testBadLocationType(self): | |
self.zip.writestr( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type\n" | |
"BEATTY_AIRPORT,Airport,36.868446,-116.784582,2\n" | |
"BULLFROG,Bullfrog,36.88108,-116.81797,notvalid\n" | |
"STAGECOACH,Stagecoach Hotel,36.915682,-116.751677,\n") | |
schedule = self.loader.Load() | |
e = self.problems.PopException("InvalidValue") | |
self.assertEquals("location_type", e.column_name) | |
self.assertEquals(2, e.row_num) | |
self.assertEquals(1, e.type) | |
e = self.problems.PopException("InvalidValue") | |
self.assertEquals("location_type", e.column_name) | |
self.assertEquals(3, e.row_num) | |
self.assertEquals(0, e.type) | |
self.problems.AssertNoMoreExceptions() | |
def testBadLocationTypeAtSameLatLon(self): | |
self.zip.writestr( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
"BEATTY_AIRPORT,Airport,36.868446,-116.784582,,STATION\n" | |
"STATION,Airport,36.868446,-116.784582,2,\n" | |
"BULLFROG,Bullfrog,36.88108,-116.81797,,\n" | |
"STAGECOACH,Stagecoach Hotel,36.915682,-116.751677,,\n") | |
schedule = self.loader.Load() | |
e = self.problems.PopException("InvalidValue") | |
self.assertEquals("location_type", e.column_name) | |
self.assertEquals(3, e.row_num) | |
e = self.problems.PopException("InvalidValue") | |
self.assertEquals("parent_station", e.column_name) | |
self.problems.AssertNoMoreExceptions() | |
def testStationUsed(self): | |
self.zip.writestr( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type\n" | |
"BEATTY_AIRPORT,Airport,36.868446,-116.784582,1\n" | |
"BULLFROG,Bullfrog,36.88108,-116.81797,\n" | |
"STAGECOACH,Stagecoach Hotel,36.915682,-116.751677,\n") | |
schedule = self.loader.Load() | |
self.problems.PopException("UsedStation") | |
self.problems.AssertNoMoreExceptions() | |
def testParentNotFound(self): | |
self.zip.writestr( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
"BEATTY_AIRPORT,Airport,36.868446,-116.784582,,STATION\n" | |
"BULLFROG,Bullfrog,36.88108,-116.81797,,\n" | |
"STAGECOACH,Stagecoach Hotel,36.915682,-116.751677,,\n") | |
schedule = self.loader.Load() | |
e = self.problems.PopException("InvalidValue") | |
self.assertEquals("parent_station", e.column_name) | |
self.problems.AssertNoMoreExceptions() | |
def testParentIsStop(self): | |
self.zip.writestr( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
"BEATTY_AIRPORT,Airport,36.868446,-116.784582,,BULLFROG\n" | |
"BULLFROG,Bullfrog,36.88108,-116.81797,,\n" | |
"STAGECOACH,Stagecoach Hotel,36.915682,-116.751677,,\n") | |
schedule = self.loader.Load() | |
e = self.problems.PopException("InvalidValue") | |
self.assertEquals("parent_station", e.column_name) | |
self.problems.AssertNoMoreExceptions() | |
def testParentOfEntranceIsStop(self): | |
self.zip.writestr( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
"BEATTY_AIRPORT,Airport,36.868446,-116.784582,2,BULLFROG\n" | |
"BULLFROG,Bullfrog,36.88108,-116.81797,,\n" | |
"STAGECOACH,Stagecoach Hotel,36.915682,-116.751677,,\n") | |
schedule = self.loader.Load() | |
e = self.problems.PopException("InvalidValue") | |
self.assertEquals("location_type", e.column_name) | |
e = self.problems.PopException("InvalidValue") | |
self.assertEquals("parent_station", e.column_name) | |
self.assertTrue(e.FormatProblem().find("location_type=1") != -1) | |
self.problems.AssertNoMoreExceptions() | |
def testStationWithParent(self): | |
self.zip.writestr( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
"BEATTY_AIRPORT,Airport,36.868446,-116.784582,,STATION\n" | |
"STATION,Airport,36.868446,-116.784582,1,STATION2\n" | |
"STATION2,Airport 2,36.868000,-116.784000,1,\n" | |
"BULLFROG,Bullfrog,36.868088,-116.784797,,STATION2\n" | |
"STAGECOACH,Stagecoach Hotel,36.915682,-116.751677,,\n") | |
schedule = self.loader.Load() | |
e = self.problems.PopException("InvalidValue") | |
self.assertEquals("parent_station", e.column_name) | |
self.assertEquals(3, e.row_num) | |
self.problems.AssertNoMoreExceptions() | |
def testStationWithSelfParent(self): | |
self.zip.writestr( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
"BEATTY_AIRPORT,Airport,36.868446,-116.784582,,STATION\n" | |
"STATION,Airport,36.868446,-116.784582,1,STATION\n" | |
"BULLFROG,Bullfrog,36.88108,-116.81797,,\n" | |
"STAGECOACH,Stagecoach Hotel,36.915682,-116.751677,,\n") | |
schedule = self.loader.Load() | |
e = self.problems.PopException("InvalidValue") | |
self.assertEquals("parent_station", e.column_name) | |
self.assertEquals(3, e.row_num) | |
self.problems.AssertNoMoreExceptions() | |
def testStopNearToNonParentStation(self): | |
self.zip.writestr( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
"BEATTY_AIRPORT,Airport,36.868446,-116.784582,,\n" | |
"BULLFROG,Bullfrog,36.868446,-116.784582,,\n" | |
"BULLFROG_ST,Bullfrog,36.868446,-116.784582,1,\n" | |
"STAGECOACH,Stagecoach Hotel,36.915682,-116.751677,,\n") | |
schedule = self.loader.Load() | |
e = self.problems.PopException("DifferentStationTooClose") | |
self.assertMatchesRegex( | |
"The parent_station of stop \"Bullfrog\"", e.FormatProblem()) | |
e = self.problems.PopException("StopsTooClose") | |
self.assertMatchesRegex("BEATTY_AIRPORT", e.FormatProblem()) | |
self.assertMatchesRegex("BULLFROG", e.FormatProblem()) | |
self.assertMatchesRegex("are 0.00m apart", e.FormatProblem()) | |
e = self.problems.PopException("DifferentStationTooClose") | |
self.assertMatchesRegex( | |
"The parent_station of stop \"Airport\"", e.FormatProblem()) | |
self.problems.AssertNoMoreExceptions() | |
def testStopTooFarFromParentStation(self): | |
self.zip.writestr( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
"BULLFROG_ST,Bullfrog,36.880,-116.817,1,\n" # Parent station of all. | |
"BEATTY_AIRPORT,Airport,36.880,-116.816,,BULLFROG_ST\n" # ~ 90m far | |
"BULLFROG,Bullfrog,36.881,-116.818,,BULLFROG_ST\n" # ~ 150m far | |
"STAGECOACH,Stagecoach,36.915,-116.751,,BULLFROG_ST\n") # > 3km far | |
schedule = self.loader.Load() | |
e = self.problems.PopException("StopTooFarFromParentStation") | |
self.assertEqual(1, e.type) # Warning | |
self.assertTrue(e.FormatProblem().find( | |
"Bullfrog (ID BULLFROG) is too far from its parent" | |
" station Bullfrog (ID BULLFROG_ST)") != -1) | |
e = self.problems.PopException("StopTooFarFromParentStation") | |
self.assertEqual(0, e.type) # Error | |
self.assertTrue(e.FormatProblem().find( | |
"Stagecoach (ID STAGECOACH) is too far from its parent" | |
" station Bullfrog (ID BULLFROG_ST)") != -1) | |
self.problems.AssertNoMoreExceptions() | |
# Uncomment once validation is implemented | |
#def testStationWithoutReference(self): | |
# self.zip.writestr( | |
# "stops.txt", | |
# "stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
# "BEATTY_AIRPORT,Airport,36.868446,-116.784582,,\n" | |
# "STATION,Airport,36.868446,-116.784582,1,\n" | |
# "BULLFROG,Bullfrog,36.88108,-116.81797,,\n" | |
# "STAGECOACH,Stagecoach Hotel,36.915682,-116.751677,,\n") | |
# schedule = self.loader.Load() | |
# e = self.problems.PopException("OtherProblem") | |
# self.assertEquals("parent_station", e.column_name) | |
# self.assertEquals(2, e.row_num) | |
# self.problems.AssertNoMoreExceptions() | |
class StopSpacesTestCase(MemoryZipTestCase): | |
def testFieldsWithSpace(self): | |
self.zip.writestr( | |
"stops.txt", | |
"stop_id,stop_code,stop_name,stop_lat,stop_lon,stop_url,location_type," | |
"parent_station\n" | |
"BEATTY_AIRPORT, ,Airport,36.868446,-116.784582, , ,\n" | |
"BULLFROG,,Bullfrog,36.88108,-116.81797,,,\n" | |
"STAGECOACH,,Stagecoach Hotel,36.915682,-116.751677,,,\n") | |
schedule = self.loader.Load() | |
self.problems.AssertNoMoreExceptions() | |
class StopBlankHeaders(MemoryZipTestCase): | |
def testBlankHeaderValueAtEnd(self): | |
# Modify the stops.txt added by MemoryZipTestCase.setUp. This allows the | |
# original stops.txt to be changed without modifying anything in this test. | |
# Add a column to the end of every row, leaving the header name blank. | |
new = [] | |
for i, row in enumerate(self.zip.read("stops.txt").split("\n")): | |
if i == 0: | |
new.append(row + ",") | |
elif row: | |
new.append(row + "," + str(i)) # Put a junk value in data rows | |
self.zip.writestr("stops.txt", "\n".join(new)) | |
schedule = self.loader.Load() | |
e = self.problems.PopException("CsvSyntax") | |
self.assertTrue(e.FormatProblem(). | |
find("header row should not contain any blank") != -1) | |
self.problems.AssertNoMoreExceptions() | |
def testBlankHeaderValueAtStart(self): | |
# Modify the stops.txt added by MemoryZipTestCase.setUp. This allows the | |
# original stops.txt to be changed without modifying anything in this test. | |
# Add a column to the start of every row, leaving the header name blank. | |
new = [] | |
for i, row in enumerate(self.zip.read("stops.txt").split("\n")): | |
if i == 0: | |
new.append("," + row) | |
elif row: | |
new.append(str(i) + "," + row) # Put a junk value in data rows | |
self.zip.writestr("stops.txt", "\n".join(new)) | |
schedule = self.loader.Load() | |
e = self.problems.PopException("CsvSyntax") | |
self.assertTrue(e.FormatProblem(). | |
find("header row should not contain any blank") != -1) | |
self.problems.AssertNoMoreExceptions() | |
def testBlankHeaderValueInMiddle(self): | |
# Modify the stops.txt added by MemoryZipTestCase.setUp. This allows the | |
# original stops.txt to be changed without modifying anything in this test. | |
# Add two columns to the start of every row, leaving the second header name | |
# blank. | |
new = [] | |
for i, row in enumerate(self.zip.read("stops.txt").split("\n")): | |
if i == 0: | |
new.append("test_name,," + row) | |
elif row: | |
# Put a junk value in data rows | |
new.append(str(i) + "," + str(i) + "," + row) | |
self.zip.writestr("stops.txt", "\n".join(new)) | |
schedule = self.loader.Load() | |
e = self.problems.PopException("CsvSyntax") | |
self.assertTrue(e.FormatProblem(). | |
find("header row should not contain any blank") != -1) | |
e = self.problems.PopException("UnrecognizedColumn") | |
self.assertEquals("test_name", e.column_name) | |
self.problems.AssertNoMoreExceptions() | |
class StopsNearEachOther(MemoryZipTestCase): | |
def testTooNear(self): | |
self.zip.writestr( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon\n" | |
"BEATTY_AIRPORT,Airport,48.20000,140\n" | |
"BULLFROG,Bullfrog,48.20001,140\n" | |
"STAGECOACH,Stagecoach Hotel,48.20016,140\n") | |
schedule = self.loader.Load() | |
e = self.problems.PopException('StopsTooClose') | |
self.assertTrue(e.FormatProblem().find("1.11m apart") != -1) | |
self.problems.AssertNoMoreExceptions() | |
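The 1.11m figure follows from the geometry: one degree of latitude is roughly 111 km everywhere, so the 0.00001-degree gap between BEATTY_AIRPORT and BULLFROG is about 1.11 meters. A quick check, assuming a spherical Earth of mean radius 6371 km (transitfeed's own constants may differ slightly):

```python
import math

EARTH_RADIUS_M = 6371000.0  # assumed mean radius for this sketch
meters = math.radians(0.00001) * EARTH_RADIUS_M
print(round(meters, 2))  # about 1.11
```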
def testJustFarEnough(self): | |
self.zip.writestr( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon\n" | |
"BEATTY_AIRPORT,Airport,48.20000,140\n" | |
"BULLFROG,Bullfrog,48.20002,140\n" | |
"STAGECOACH,Stagecoach Hotel,48.20016,140\n") | |
schedule = self.loader.Load() | |
# Stops are 2.2m apart | |
self.problems.AssertNoMoreExceptions() | |
def testSameLocation(self): | |
self.zip.writestr( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon\n" | |
"BEATTY_AIRPORT,Airport,48.2,140\n" | |
"BULLFROG,Bullfrog,48.2,140\n" | |
"STAGECOACH,Stagecoach Hotel,48.20016,140\n") | |
schedule = self.loader.Load() | |
e = self.problems.PopException('StopsTooClose') | |
self.assertTrue(e.FormatProblem().find("0.00m apart") != -1) | |
self.problems.AssertNoMoreExceptions() | |
def testStationsTooNear(self): | |
self.zip.writestr( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
"BEATTY_AIRPORT,Airport,48.20000,140,,BEATTY_AIRPORT_STATION\n" | |
"BULLFROG,Bullfrog,48.20003,140,,BULLFROG_STATION\n" | |
"BEATTY_AIRPORT_STATION,Airport,48.20001,140,1,\n" | |
"BULLFROG_STATION,Bullfrog,48.20002,140,1,\n" | |
"STAGECOACH,Stagecoach Hotel,48.20016,140,,\n") | |
schedule = self.loader.Load() | |
e = self.problems.PopException('StationsTooClose') | |
self.assertTrue(e.FormatProblem().find("1.11m apart") != -1) | |
self.assertTrue(e.FormatProblem().find("BEATTY_AIRPORT_STATION") != -1) | |
self.problems.AssertNoMoreExceptions() | |
def testStopNearNonParentStation(self): | |
self.zip.writestr( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
"BEATTY_AIRPORT,Airport,48.20000,140,,\n" | |
"BULLFROG,Bullfrog,48.20005,140,,\n" | |
"BULLFROG_STATION,Bullfrog,48.20006,140,1,\n" | |
"STAGECOACH,Stagecoach Hotel,48.20016,140,,\n") | |
schedule = self.loader.Load() | |
e = self.problems.PopException('DifferentStationTooClose') | |
fmt = e.FormatProblem() | |
self.assertTrue(re.search( | |
r"parent_station of.*BULLFROG.*station.*BULLFROG_STATION.* 1.11m apart", | |
fmt), fmt) | |
self.problems.AssertNoMoreExceptions() | |
class BadLatLonInStopUnitTest(ValidationTestCase): | |
def runTest(self): | |
stop = transitfeed.Stop(field_dict={"stop_id": "STOP1", | |
"stop_name": "Stop one", | |
"stop_lat": "0x20", | |
"stop_lon": "140.01"}) | |
self.ExpectInvalidValue(stop, "stop_lat") | |
stop = transitfeed.Stop(field_dict={"stop_id": "STOP1", | |
"stop_name": "Stop one", | |
"stop_lat": "13.0", | |
"stop_lon": "1e2"}) | |
self.ExpectInvalidValue(stop, "stop_lon") | |
class BadLatLonInFileUnitTest(MemoryZipTestCase): | |
def runTest(self): | |
self.zip.writestr( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon\n" | |
"BEATTY_AIRPORT,Airport,0x20,140.00\n" | |
"BULLFROG,Bullfrog,48.20001,140.0123\n" | |
"STAGECOACH,Stagecoach Hotel,48.002,bogus\n") | |
schedule = self.loader.Load() | |
e = self.problems.PopException('InvalidValue') | |
self.assertEquals(2, e.row_num) | |
self.assertEquals("stop_lat", e.column_name) | |
e = self.problems.PopException('InvalidValue') | |
self.assertEquals(4, e.row_num) | |
self.assertEquals("stop_lon", e.column_name) | |
self.problems.AssertNoMoreExceptions() | |
class LoadUnknownFileInZipTestCase(MemoryZipTestCase): | |
def runTest(self): | |
self.zip.writestr( | |
"stpos.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
"BEATTY_AIRPORT,Airport,36.868446,-116.784582,,STATION\n" | |
"STATION,Airport,36.868446,-116.784582,1,\n" | |
"BULLFROG,Bullfrog,36.88108,-116.81797,,\n" | |
"STAGECOACH,Stagecoach Hotel,36.915682,-116.751677,,\n") | |
schedule = self.loader.Load() | |
e = self.problems.PopException('UnknownFile') | |
self.assertEquals('stpos.txt', e.file_name) | |
self.problems.AssertNoMoreExceptions() | |
class TabDelimitedTestCase(MemoryZipTestCase): | |
def runTest(self): | |
# Create an extremely corrupt file by replacing each comma with a tab, | |
# ignoring csv quoting. | |
for arcname in self.zip.namelist(): | |
orig = self.zip.read(arcname) | |
self.zip.writestr(arcname, orig.replace(",", "\t")) | |
schedule = self.loader.Load() | |
# Don't call self.problems.AssertNoMoreExceptions() because the corrupt file
# produces many problems; this test only cares that the validator doesn't
# crash. Ideally the validator would stop as soon as the csv is obviously
# malformed.
class RouteMemoryZipTestCase(MemoryZipTestCase): | |
def assertLoadAndCheckExtraValues(self, schedule_file): | |
"""Load file-like schedule_file and check for extra route columns.""" | |
load_problems = TestFailureProblemReporter( | |
self, ("ExpirationDate", "UnrecognizedColumn")) | |
loaded_schedule = transitfeed.Loader(schedule_file, | |
problems=load_problems, | |
extra_validation=True).Load() | |
self.assertEqual("foo", loaded_schedule.GetRoute("t")["t_foo"]) | |
self.assertEqual("", loaded_schedule.GetRoute("AB")["t_foo"]) | |
self.assertEqual("bar", loaded_schedule.GetRoute("n")["n_foo"]) | |
self.assertEqual("", loaded_schedule.GetRoute("AB")["n_foo"]) | |
# Uncomment the following lines to print the string in testExtraFileColumn | |
# print repr(zipfile.ZipFile(schedule_file).read("routes.txt")) | |
# self.fail() | |
def testExtraObjectAttribute(self): | |
"""Extra columns added to an object are preserved when writing.""" | |
schedule = self.loader.Load() | |
# Add an attribute after AddRouteObject | |
route_t = transitfeed.Route(short_name="T", route_type="Bus", route_id="t") | |
schedule.AddRouteObject(route_t) | |
route_t.t_foo = "foo" | |
# Add an attribute before AddRouteObject | |
route_n = transitfeed.Route(short_name="N", route_type="Bus", route_id="n") | |
route_n.n_foo = "bar" | |
schedule.AddRouteObject(route_n) | |
saved_schedule_file = StringIO() | |
schedule.WriteGoogleTransitFeed(saved_schedule_file) | |
self.problems.AssertNoMoreExceptions() | |
self.assertLoadAndCheckExtraValues(saved_schedule_file) | |
def testExtraFileColumn(self): | |
"""Extra columns loaded from a file are preserved when writing.""" | |
# Uncomment the code in assertLoadAndCheckExtraValues to generate this | |
# string. | |
self.zip.writestr( | |
"routes.txt", | |
"route_id,agency_id,route_short_name,route_long_name,route_type," | |
"t_foo,n_foo\n" | |
"AB,DTA,,Airport Bullfrog,3,,\n" | |
"t,DTA,T,,3,foo,\n" | |
"n,DTA,N,,3,,bar\n") | |
load1_problems = TestFailureProblemReporter( | |
self, ("ExpirationDate", "UnrecognizedColumn")) | |
schedule = transitfeed.Loader(problems=load1_problems, | |
extra_validation=True, | |
zip=self.zip).Load() | |
saved_schedule_file = StringIO() | |
schedule.WriteGoogleTransitFeed(saved_schedule_file) | |
self.assertLoadAndCheckExtraValues(saved_schedule_file) | |
class RouteConstructorTestCase(unittest.TestCase): | |
def setUp(self): | |
self.problems = RecordingProblemReporter(self) | |
def testDefault(self): | |
route = transitfeed.Route() | |
repr(route) | |
self.assertEqual({}, dict(route)) | |
route.Validate(self.problems) | |
repr(route) | |
self.assertEqual({}, dict(route)) | |
e = self.problems.PopException('MissingValue') | |
self.assertEqual('route_id', e.column_name) | |
e = self.problems.PopException('MissingValue') | |
self.assertEqual('route_type', e.column_name) | |
e = self.problems.PopException('InvalidValue') | |
self.assertEqual('route_short_name', e.column_name) | |
self.problems.AssertNoMoreExceptions() | |
def testInitArgs(self): | |
# route_type as a name
route = transitfeed.Route(route_id='id1', short_name='22', route_type='Bus') | |
repr(route) | |
route.Validate(self.problems) | |
self.problems.AssertNoMoreExceptions() | |
self.assertEquals(3, route.route_type) # converted to an int | |
self.assertEquals({'route_id': 'id1', 'route_short_name': '22', | |
'route_type': '3'}, dict(route)) | |
# route_type as an int | |
route = transitfeed.Route(route_id='i1', long_name='Twenty 2', route_type=1) | |
repr(route) | |
route.Validate(self.problems) | |
self.problems.AssertNoMoreExceptions() | |
self.assertEquals(1, route.route_type) # kept as an int | |
self.assertEquals({'route_id': 'i1', 'route_long_name': 'Twenty 2', | |
'route_type': '1'}, dict(route)) | |
# route_type as a string | |
route = transitfeed.Route(route_id='id1', short_name='22', route_type='1') | |
repr(route) | |
route.Validate(self.problems) | |
self.problems.AssertNoMoreExceptions() | |
self.assertEquals(1, route.route_type) # converted to an int | |
self.assertEquals({'route_id': 'id1', 'route_short_name': '22', | |
'route_type': '1'}, dict(route)) | |
# route_type has undefined int value | |
route = transitfeed.Route(route_id='id1', short_name='22', | |
route_type='8') | |
repr(route) | |
route.Validate(self.problems) | |
e = self.problems.PopException('InvalidValue') | |
self.assertEqual('route_type', e.column_name) | |
self.assertEqual(1, e.type) | |
self.problems.AssertNoMoreExceptions() | |
self.assertEquals({'route_id': 'id1', 'route_short_name': '22', | |
'route_type': '8'}, dict(route)) | |
# route_type that doesn't parse | |
route = transitfeed.Route(route_id='id1', short_name='22', | |
route_type='1foo') | |
repr(route) | |
route.Validate(self.problems) | |
e = self.problems.PopException('InvalidValue') | |
self.assertEqual('route_type', e.column_name) | |
self.problems.AssertNoMoreExceptions() | |
self.assertEquals({'route_id': 'id1', 'route_short_name': '22', | |
'route_type': '1foo'}, dict(route)) | |
# agency_id | |
route = transitfeed.Route(route_id='id1', short_name='22', route_type=1, | |
agency_id='myage') | |
repr(route) | |
route.Validate(self.problems) | |
self.problems.AssertNoMoreExceptions() | |
self.assertEquals({'route_id': 'id1', 'route_short_name': '22', | |
'route_type': '1', 'agency_id': 'myage'}, dict(route)) | |
def testInitArgOrder(self): | |
"""Call Route.__init__ without any names so a change in order is noticed.""" | |
route = transitfeed.Route('short', 'long name', 'Bus', 'r1', 'a1') | |
self.assertEquals({'route_id': 'r1', 'route_short_name': 'short', | |
'route_long_name': 'long name', | |
'route_type': '3', 'agency_id': 'a1'}, dict(route)) | |
def testFieldDict(self): | |
route = transitfeed.Route(field_dict={}) | |
self.assertEquals({}, dict(route)) | |
route = transitfeed.Route(field_dict={ | |
'route_id': 'id1', 'route_short_name': '22', 'agency_id': 'myage', | |
'route_type': '1'}) | |
route.Validate(self.problems) | |
self.problems.AssertNoMoreExceptions() | |
self.assertEquals({'route_id': 'id1', 'route_short_name': '22', | |
'agency_id': 'myage', 'route_type': '1'}, dict(route)) | |
route = transitfeed.Route(field_dict={ | |
'route_id': 'id1', 'route_short_name': '22', 'agency_id': 'myage', | |
'route_type': '1', 'my_column': 'v'}) | |
route.Validate(self.problems) | |
self.problems.AssertNoMoreExceptions() | |
self.assertEquals({'route_id': 'id1', 'route_short_name': '22',
'agency_id': 'myage', 'route_type': '1',
'my_column': 'v'}, dict(route))
route._private = 0.3 # Isn't copied | |
route_copy = transitfeed.Route(field_dict=route) | |
self.assertEquals({'route_id': 'id1', 'route_short_name': '22',
'agency_id': 'myage', 'route_type': '1',
'my_column': 'v'}, dict(route_copy))
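The route_type conversions exercised above (a name like 'Bus' becoming 3, a numeric string becoming an int, and an unparseable value kept verbatim so the validator can flag it) follow a lenient-parse pattern. A standalone sketch of that pattern; the table and function name are illustrative, not transitfeed's internals:

```python
# GTFS route_type codes; mapping assumed from the test expectations above.
ROUTE_TYPE_NAMES = {"Tram": 0, "Subway": 1, "Rail": 2, "Bus": 3,
                    "Ferry": 4, "Cable Car": 5, "Gondola": 6, "Funicular": 7}

def parse_route_type(raw):
    # A known name maps to its code, a numeric string becomes an int, and
    # anything unparseable is kept verbatim for the validator to report.
    if raw in ROUTE_TYPE_NAMES:
        return ROUTE_TYPE_NAMES[raw]
    try:
        return int(raw)
    except (TypeError, ValueError):
        return raw

assert parse_route_type("Bus") == 3    # name converted to an int code
assert parse_route_type("1") == 1      # numeric string converted
assert parse_route_type("1foo") == "1foo"  # kept verbatim for validation
```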
class RouteValidationTestCase(ValidationTestCase): | |
def runTest(self): | |
# success case | |
route = transitfeed.Route() | |
route.route_id = '054C' | |
route.route_short_name = '54C' | |
route.route_long_name = 'South Side - North Side' | |
route.route_type = 7 | |
route.Validate(self.problems) | |
# blank short & long names | |
route.route_short_name = '' | |
route.route_long_name = ' ' | |
self.ExpectInvalidValue(route, 'route_short_name') | |
# short name too long | |
route.route_short_name = 'South Side' | |
route.route_long_name = '' | |
self.ExpectInvalidValue(route, 'route_short_name') | |
route.route_short_name = 'M7bis' # 5 characters is OK
route.Validate(self.problems) | |
# long name contains short name | |
route.route_short_name = '54C' | |
route.route_long_name = '54C South Side - North Side' | |
self.ExpectInvalidValue(route, 'route_long_name') | |
route.route_long_name = '54C(South Side - North Side)' | |
self.ExpectInvalidValue(route, 'route_long_name') | |
route.route_long_name = '54C-South Side - North Side' | |
self.ExpectInvalidValue(route, 'route_long_name') | |
# long name is same as short name | |
route.route_short_name = '54C' | |
route.route_long_name = '54C' | |
self.ExpectInvalidValue(route, 'route_long_name') | |
# route description is same as short name | |
route.route_desc = '54C' | |
route.route_short_name = '54C' | |
route.route_long_name = '' | |
self.ExpectInvalidValue(route, 'route_desc') | |
route.route_desc = None | |
# route description is same as long name | |
route.route_desc = 'South Side - North Side' | |
route.route_long_name = 'South Side - North Side' | |
self.ExpectInvalidValue(route, 'route_desc') | |
route.route_desc = None | |
# invalid route types | |
route.route_type = 8 | |
self.ExpectInvalidValue(route, 'route_type') | |
route.route_type = -1 | |
self.ExpectInvalidValue(route, 'route_type') | |
route.route_type = 7 | |
# invalid route URL | |
route.route_url = 'www.example.com' | |
self.ExpectInvalidValue(route, 'route_url') | |
route.route_url = None | |
# invalid route color | |
route.route_color = 'orange' | |
self.ExpectInvalidValue(route, 'route_color') | |
route.route_color = None | |
# invalid route text color | |
route.route_text_color = 'orange' | |
self.ExpectInvalidValue(route, 'route_text_color') | |
route.route_text_color = None | |
# missing route ID | |
route.route_id = None | |
self.ExpectMissingValue(route, 'route_id') | |
route.route_id = '054C' | |
# bad color contrast | |
route.route_text_color = None # black | |
route.route_color = '0000FF' # Bad | |
self.ExpectInvalidValue(route, 'route_color') | |
route.route_color = '00BF00' # OK | |
route.Validate(self.problems) | |
route.route_color = '005F00' # Bad | |
self.ExpectInvalidValue(route, 'route_color') | |
route.route_color = 'FF00FF' # OK | |
route.Validate(self.problems) | |
route.route_text_color = 'FFFFFF' # OK too | |
route.Validate(self.problems) | |
route.route_text_color = '00FF00' # think of color-blind people! | |
self.ExpectInvalidValue(route, 'route_color') | |
route.route_text_color = '007F00' | |
route.route_color = 'FF0000' | |
self.ExpectInvalidValue(route, 'route_color') | |
route.route_color = '00FFFF' # OK | |
route.Validate(self.problems) | |
route.route_text_color = None # black | |
route.route_color = None # white | |
route.Validate(self.problems) | |
self.problems.AssertNoMoreExceptions() | |
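The color-contrast cases above suggest that the validator compares the perceived luminance of route_color against route_text_color. A standalone sketch of such a check, assuming ITU-R 601 luma weights and a minimum luminance difference of about 72.85; both constants are assumptions chosen to be consistent with the pass/fail pairs above, not transitfeed's exact values:

```python
def luminance(hex_color):
    # Perceived luminance of an RRGGBB hex string, ITU-R 601 weights.
    r = int(hex_color[0:2], 16)
    g = int(hex_color[2:4], 16)
    b = int(hex_color[4:6], 16)
    return (299 * r + 587 * g + 114 * b) / 1000.0

def sufficient_contrast(route_color, route_text_color, threshold=72.85):
    # Defaults mirror the test comments: route_color None means white,
    # route_text_color None means black.
    bg = luminance(route_color or "FFFFFF")
    fg = luminance(route_text_color or "000000")
    return abs(bg - fg) >= threshold

# Pairs taken from the validation cases above.
assert not sufficient_contrast("0000FF", None)      # blue on black: bad
assert sufficient_contrast("00BF00", None)          # bright green on black: OK
assert not sufficient_contrast("FF00FF", "00FF00")  # magenta/green: bad
```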
class ShapeValidationTestCase(ValidationTestCase): | |
def ExpectFailedAdd(self, shape, lat, lon, dist, column_name, value): | |
self.ExpectInvalidValueInClosure( | |
column_name, value, | |
lambda: shape.AddPoint(lat, lon, dist, self.problems)) | |
def runTest(self): | |
shape = transitfeed.Shape('TEST') | |
repr(shape) # shouldn't crash | |
self.ExpectOtherProblem(shape) # no points! | |
self.ExpectFailedAdd(shape, 36.905019, -116.763207, -1, | |
'shape_dist_traveled', -1) | |
shape.AddPoint(36.915760, -116.751709, 0, self.problems) | |
shape.AddPoint(36.905018, -116.763206, 5, self.problems) | |
shape.Validate(self.problems) | |
shape.shape_id = None | |
self.ExpectMissingValue(shape, 'shape_id') | |
shape.shape_id = 'TEST' | |
self.ExpectFailedAdd(shape, 91, -116.751709, 6, 'shape_pt_lat', 91) | |
self.ExpectFailedAdd(shape, -91, -116.751709, 6, 'shape_pt_lat', -91) | |
self.ExpectFailedAdd(shape, 36.915760, -181, 6, 'shape_pt_lon', -181) | |
self.ExpectFailedAdd(shape, 36.915760, 181, 6, 'shape_pt_lon', 181) | |
self.ExpectFailedAdd(shape, 0.5, -0.5, 6, 'shape_pt_lat', 0.5) | |
self.ExpectFailedAdd(shape, 0, 0, 6, 'shape_pt_lat', 0) | |
# distance decreasing is bad, but staying the same is OK | |
shape.AddPoint(36.905019, -116.763206, 4, self.problems) | |
e = self.problems.PopException('InvalidValue') | |
self.assertMatchesRegex('Each subsequent point', e.FormatProblem()) | |
self.assertMatchesRegex('distance was 5.000000.', e.FormatProblem()) | |
self.problems.AssertNoMoreExceptions() | |
shape.AddPoint(36.925019, -116.764206, 5, self.problems) | |
self.problems.AssertNoMoreExceptions() | |
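The decreasing-distance case above exercises the rule that shape_dist_traveled may stay the same but must never decrease between consecutive points. A minimal standalone sketch of that rule; the function name and return convention are illustrative, not the library's API:

```python
def check_shape_distances(points):
    # points: list of (lat, lon, shape_dist_traveled) tuples. Returns the
    # indexes whose distance decreased; equal consecutive distances are OK.
    bad = []
    last = None
    for i, (_lat, _lon, dist) in enumerate(points):
        if dist is None:
            continue
        if last is not None and dist < last:
            bad.append(i)
        last = dist
    return bad

pts = [(36.915760, -116.751709, 0),
       (36.905018, -116.763206, 5),
       (36.905019, -116.763206, 4),   # decreases from 5: flagged
       (36.925019, -116.764206, 5)]   # 5 >= 4: accepted again
assert check_shape_distances(pts) == [2]
```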
class FareValidationTestCase(ValidationTestCase): | |
def runTest(self): | |
fare = transitfeed.Fare() | |
fare.fare_id = "normal" | |
fare.price = 1.50 | |
fare.currency_type = "USD" | |
fare.payment_method = 0 | |
fare.transfers = 1 | |
fare.transfer_duration = 7200 | |
fare.Validate(self.problems) | |
fare.fare_id = None | |
self.ExpectMissingValue(fare, "fare_id") | |
fare.fare_id = '' | |
self.ExpectMissingValue(fare, "fare_id") | |
fare.fare_id = "normal" | |
fare.price = "1.50" | |
self.ExpectInvalidValue(fare, "price") | |
fare.price = 1 | |
fare.Validate(self.problems) | |
fare.price = None | |
self.ExpectMissingValue(fare, "price") | |
fare.price = 0.0 | |
fare.Validate(self.problems) | |
fare.price = -1.50 | |
self.ExpectInvalidValue(fare, "price") | |
fare.price = 1.50 | |
fare.currency_type = "" | |
self.ExpectMissingValue(fare, "currency_type") | |
fare.currency_type = None | |
self.ExpectMissingValue(fare, "currency_type") | |
fare.currency_type = "usd" | |
self.ExpectInvalidValue(fare, "currency_type") | |
fare.currency_type = "KML" | |
self.ExpectInvalidValue(fare, "currency_type") | |
fare.currency_type = "USD" | |
fare.payment_method = "0" | |
self.ExpectInvalidValue(fare, "payment_method") | |
fare.payment_method = -1 | |
self.ExpectInvalidValue(fare, "payment_method") | |
fare.payment_method = 1 | |
fare.Validate(self.problems) | |
fare.payment_method = 2 | |
self.ExpectInvalidValue(fare, "payment_method") | |
fare.payment_method = None | |
self.ExpectMissingValue(fare, "payment_method") | |
fare.payment_method = "" | |
self.ExpectMissingValue(fare, "payment_method") | |
fare.payment_method = 0 | |
fare.transfers = "1" | |
self.ExpectInvalidValue(fare, "transfers") | |
fare.transfers = -1 | |
self.ExpectInvalidValue(fare, "transfers") | |
fare.transfers = 2 | |
fare.Validate(self.problems) | |
fare.transfers = 3 | |
self.ExpectInvalidValue(fare, "transfers") | |
fare.transfers = None | |
fare.Validate(self.problems) | |
fare.transfers = 1 | |
fare.transfer_duration = 0 | |
fare.Validate(self.problems) | |
fare.transfer_duration = None | |
fare.Validate(self.problems) | |
fare.transfer_duration = -3600 | |
self.ExpectInvalidValue(fare, "transfer_duration") | |
fare.transfers = 0 # no transfers allowed but duration specified! | |
fare.transfer_duration = 3600 | |
self.ExpectInvalidValue(fare, "transfer_duration") | |
fare.transfers = 1 | |
fare.transfer_duration = "3600" | |
self.ExpectInvalidValue(fare, "transfer_duration") | |
fare.transfer_duration = 7200 | |
self.problems.AssertNoMoreExceptions() | |
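The transfers/transfer_duration cases above encode two coupled rules: transfers must be None (unlimited) or the int 0, 1 or 2, and transfer_duration must be a non-negative number that is not combined with transfers == 0. A hedged standalone sketch; the helper name and error-list convention are illustrative, not transitfeed's API:

```python
def validate_fare_transfers(transfers, transfer_duration):
    # Returns the names of invalid fields, mirroring the cases above:
    # string values and out-of-range ints are rejected, and a duration
    # makes no sense when no transfers are permitted.
    errors = []
    if transfers is not None and (not isinstance(transfers, int)
                                  or transfers not in (0, 1, 2)):
        errors.append("transfers")
    if transfer_duration is not None:
        if (not isinstance(transfer_duration, (int, float))
                or transfer_duration < 0):
            errors.append("transfer_duration")
        elif transfers == 0:
            errors.append("transfer_duration")
    return errors

assert validate_fare_transfers(None, 7200) == []
assert validate_fare_transfers(0, 3600) == ["transfer_duration"]
assert validate_fare_transfers(3, None) == ["transfers"]
```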
class TransferValidationTestCase(ValidationTestCase): | |
def runTest(self): | |
# Totally bogus data shouldn't cause a crash | |
transfer = transitfeed.Transfer(field_dict={"ignored": "foo"}) | |
self.assertEquals(0, transfer.transfer_type) | |
transfer = transitfeed.Transfer(from_stop_id="S1", to_stop_id="S2",
transfer_type="1", min_transfer_time=2)
self.assertEquals("S1", transfer.from_stop_id) | |
self.assertEquals("S2", transfer.to_stop_id) | |
self.assertEquals(1, transfer.transfer_type) | |
self.assertEquals(2, transfer.min_transfer_time) | |
transfer.Validate(self.problems) | |
self.assertEquals("S1", transfer.from_stop_id) | |
self.assertEquals("S2", transfer.to_stop_id) | |
self.assertEquals(1, transfer.transfer_type) | |
self.assertEquals(2, transfer.min_transfer_time) | |
self.problems.AssertNoMoreExceptions() | |
transfer = transitfeed.Transfer(field_dict={"from_stop_id": "S1",
"to_stop_id": "S2",
"transfer_type": "0",
"min_transfer_time": "2"})
self.assertEquals("S1", transfer.from_stop_id) | |
self.assertEquals("S2", transfer.to_stop_id) | |
self.assertEquals(0, transfer.transfer_type) | |
self.assertEquals(2, transfer.min_transfer_time) | |
transfer.Validate(self.problems) | |
self.assertEquals("S1", transfer.from_stop_id) | |
self.assertEquals("S2", transfer.to_stop_id) | |
self.assertEquals(0, transfer.transfer_type) | |
self.assertEquals(2, transfer.min_transfer_time) | |
self.problems.AssertNoMoreExceptions() | |
transfer = transitfeed.Transfer(field_dict={"from_stop_id": "S1",
"to_stop_id": "S2",
"transfer_type": "-4",
"min_transfer_time": "2"})
self.assertEquals("S1", transfer.from_stop_id) | |
self.assertEquals("S2", transfer.to_stop_id) | |
self.assertEquals("-4", transfer.transfer_type) | |
self.assertEquals(2, transfer.min_transfer_time) | |
self.ExpectInvalidValue(transfer, "transfer_type") | |
self.assertEquals("S1", transfer.from_stop_id) | |
self.assertEquals("S2", transfer.to_stop_id) | |
self.assertEquals("-4", transfer.transfer_type) | |
self.assertEquals(2, transfer.min_transfer_time) | |
transfer = transitfeed.Transfer(field_dict={"from_stop_id": "S1",
"to_stop_id": "S2",
"transfer_type": "",
"min_transfer_time": "-1"})
self.assertEquals(0, transfer.transfer_type) | |
self.ExpectInvalidValue(transfer, "min_transfer_time") | |
# simple successes | |
transfer = transitfeed.Transfer() | |
transfer.from_stop_id = "S1" | |
transfer.to_stop_id = "S2" | |
transfer.transfer_type = 0 | |
repr(transfer) # shouldn't crash | |
transfer.Validate(self.problems) | |
transfer.transfer_type = 3 | |
transfer.Validate(self.problems) | |
self.problems.AssertNoMoreExceptions() | |
# transfer_type is out of range | |
transfer.transfer_type = 4 | |
self.ExpectInvalidValue(transfer, "transfer_type") | |
transfer.transfer_type = -1 | |
self.ExpectInvalidValue(transfer, "transfer_type") | |
transfer.transfer_type = "text" | |
self.ExpectInvalidValue(transfer, "transfer_type") | |
transfer.transfer_type = 2 | |
# invalid min_transfer_time | |
transfer.min_transfer_time = -1 | |
self.ExpectInvalidValue(transfer, "min_transfer_time") | |
transfer.min_transfer_time = "text" | |
self.ExpectInvalidValue(transfer, "min_transfer_time") | |
transfer.min_transfer_time = 250 | |
transfer.Validate(self.problems) | |
self.problems.AssertNoMoreExceptions() | |
# missing stop ids | |
transfer.from_stop_id = "" | |
self.ExpectMissingValue(transfer, 'from_stop_id') | |
transfer.from_stop_id = "S1" | |
transfer.to_stop_id = None | |
self.ExpectMissingValue(transfer, 'to_stop_id') | |
transfer.to_stop_id = "S2" | |
# both stops are present in the schedule
schedule = transitfeed.Schedule() | |
stop1 = schedule.AddStop(57.5, 30.2, "stop 1") | |
stop2 = schedule.AddStop(57.5, 30.3, "stop 2") | |
transfer = transitfeed.Transfer(schedule=schedule) | |
transfer.from_stop_id = stop1.stop_id | |
transfer.to_stop_id = stop2.stop_id | |
transfer.transfer_type = 2 | |
transfer.min_transfer_time = 250 | |
repr(transfer) # shouldn't crash | |
transfer.Validate(self.problems) | |
self.problems.AssertNoMoreExceptions() | |
# a referenced stop is not present in the schedule
schedule = transitfeed.Schedule() | |
stop1 = schedule.AddStop(57.5, 30.2, "stop 1") | |
transfer = transitfeed.Transfer(schedule=schedule) | |
transfer.from_stop_id = stop1.stop_id | |
transfer.to_stop_id = "unexist" | |
transfer.transfer_type = 2 | |
transfer.min_transfer_time = 250 | |
self.ExpectInvalidValue(transfer, 'to_stop_id') | |
transfer.from_stop_id = "unexist" | |
transfer.to_stop_id = stop1.stop_id | |
self.ExpectInvalidValue(transfer, "from_stop_id") | |
self.problems.AssertNoMoreExceptions() | |
# Transfer can only be added to a schedule once | |
transfer = transitfeed.Transfer() | |
transfer.from_stop_id = stop1.stop_id | |
transfer.to_stop_id = stop1.stop_id | |
schedule.AddTransferObject(transfer) | |
self.assertRaises(AssertionError, schedule.AddTransferObject, transfer) | |
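The field_dict cases above show how transfer_type is parsed leniently: an empty or missing value falls back to the default 0, a value that parses to a valid code becomes an int, and anything else (like "-4" or "text") is kept verbatim so validation can report it. A standalone sketch of that behavior; the function name is illustrative, not transitfeed's internals:

```python
def parse_transfer_type(raw):
    # Empty/missing -> default 0; a valid code (0-3) -> int;
    # anything else kept verbatim for the validator to flag.
    if raw is None or raw == "":
        return 0
    try:
        value = int(raw)
    except ValueError:
        return raw
    return value if 0 <= value <= 3 else raw

assert parse_transfer_type("") == 0      # empty csv cell: default
assert parse_transfer_type("1") == 1     # valid code converted to int
assert parse_transfer_type("-4") == "-4" # invalid code kept verbatim
```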
class ServicePeriodValidationTestCase(ValidationTestCase): | |
def runTest(self): | |
# success case | |
period = transitfeed.ServicePeriod() | |
repr(period) # shouldn't crash | |
period.service_id = 'WEEKDAY' | |
period.start_date = '20070101' | |
period.end_date = '20071231' | |
period.day_of_week[0] = True | |
repr(period) # shouldn't crash | |
period.Validate(self.problems) | |
# missing start_date. If one of start_date or end_date is None then
# ServicePeriod.Validate assumes the required column is missing and that an
# error has already been generated. Instead set it to an empty string, as
# when the csv cell is empty. See also the comment in ServicePeriod.Validate.
period.start_date = '' | |
self.ExpectMissingValue(period, 'start_date') | |
period.start_date = '20070101' | |
# missing end_date | |
period.end_date = '' | |
self.ExpectMissingValue(period, 'end_date') | |
period.end_date = '20071231' | |
# invalid start_date | |
period.start_date = '2007-01-01' | |
self.ExpectInvalidValue(period, 'start_date') | |
period.start_date = '20070101' | |
# impossible start_date | |
period.start_date = '20070229' | |
self.ExpectInvalidValue(period, 'start_date') | |
period.start_date = '20070101' | |
# invalid end_date | |
period.end_date = '2007/12/31' | |
self.ExpectInvalidValue(period, 'end_date') | |
period.end_date = '20071231' | |
# start & end dates out of order | |
period.end_date = '20060101' | |
self.ExpectInvalidValue(period, 'end_date') | |
period.end_date = '20071231' | |
# no service in period | |
period.day_of_week[0] = False | |
self.ExpectOtherProblem(period) | |
period.day_of_week[0] = True | |
# invalid exception date | |
period.SetDateHasService('2007', False) | |
self.ExpectInvalidValue(period, 'date', '2007') | |
period.ResetDateToNormalService('2007') | |
period2 = transitfeed.ServicePeriod( | |
field_list=['serviceid1', '20060101', '20071231', '1', '0', 'h', '1', | |
'1', '1', '1']) | |
self.ExpectInvalidValue(period2, 'wednesday', 'h') | |
repr(period) # shouldn't crash | |
class ServicePeriodDateRangeTestCase(ValidationTestCase): | |
def runTest(self): | |
period = transitfeed.ServicePeriod() | |
period.service_id = 'WEEKDAY' | |
period.start_date = '20070101' | |
period.end_date = '20071231' | |
period.SetWeekdayService(True) | |
period.SetDateHasService('20071231', False) | |
period.Validate(self.problems) | |
self.assertEqual(('20070101', '20071231'), period.GetDateRange()) | |
period2 = transitfeed.ServicePeriod() | |
period2.service_id = 'HOLIDAY' | |
period2.SetDateHasService('20071225', True) | |
period2.SetDateHasService('20080101', True) | |
period2.SetDateHasService('20080102', False) | |
period2.Validate(self.problems) | |
self.assertEqual(('20071225', '20080101'), period2.GetDateRange()) | |
period2.start_date = '20071201' | |
period2.end_date = '20071225' | |
period2.Validate(self.problems) | |
self.assertEqual(('20071201', '20080101'), period2.GetDateRange()) | |
period3 = transitfeed.ServicePeriod() | |
self.assertEqual((None, None), period3.GetDateRange()) | |
period4 = transitfeed.ServicePeriod() | |
period4.service_id = 'halloween' | |
period4.SetDateHasService('20051031', True) | |
self.assertEqual(('20051031', '20051031'), period4.GetDateRange()) | |
period4.Validate(self.problems) | |
schedule = transitfeed.Schedule(problem_reporter=self.problems) | |
self.assertEqual((None, None), schedule.GetDateRange()) | |
schedule.AddServicePeriodObject(period) | |
self.assertEqual(('20070101', '20071231'), schedule.GetDateRange()) | |
schedule.AddServicePeriodObject(period2) | |
self.assertEqual(('20070101', '20080101'), schedule.GetDateRange()) | |
schedule.AddServicePeriodObject(period4) | |
self.assertEqual(('20051031', '20080101'), schedule.GetDateRange()) | |
self.problems.AssertNoMoreExceptions() | |
class ServicePeriodTestCase(unittest.TestCase): | |
def testActive(self): | |
"""Test IsActiveOn and ActiveDates""" | |
period = transitfeed.ServicePeriod() | |
period.service_id = 'WEEKDAY' | |
period.start_date = '20071226' | |
period.end_date = '20071231' | |
period.SetWeekdayService(True) | |
period.SetDateHasService('20071230', True) | |
period.SetDateHasService('20071231', False) | |
period.SetDateHasService('20080102', True) | |
# December 2007 | |
# Su Mo Tu We Th Fr Sa | |
# 23 24 25 26 27 28 29 | |
# 30 31 | |
# Some tests use named arguments and others do not, to ensure that any
# (possibly unwanted) change to the API gets caught.
# calendar_date exceptions near start date | |
self.assertFalse(period.IsActiveOn(date='20071225')) | |
self.assertFalse(period.IsActiveOn(date='20071225', | |
date_object=date(2007, 12, 25))) | |
self.assertTrue(period.IsActiveOn(date='20071226')) | |
self.assertTrue(period.IsActiveOn(date='20071226', | |
date_object=date(2007, 12, 26))) | |
# calendar_date exceptions near end date | |
self.assertTrue(period.IsActiveOn('20071230')) | |
self.assertTrue(period.IsActiveOn('20071230', date(2007, 12, 30))) | |
self.assertFalse(period.IsActiveOn('20071231')) | |
self.assertFalse(period.IsActiveOn('20071231', date(2007, 12, 31))) | |
# date just outside range, both weekday and an exception | |
self.assertFalse(period.IsActiveOn('20080101')) | |
self.assertFalse(period.IsActiveOn('20080101', date(2008, 1, 1))) | |
self.assertTrue(period.IsActiveOn('20080102')) | |
self.assertTrue(period.IsActiveOn('20080102', date(2008, 1, 2))) | |
self.assertEquals(period.ActiveDates(), | |
['20071226', '20071227', '20071228', '20071230', | |
'20080102']) | |
# Test of period without start_date, end_date | |
period_dates = transitfeed.ServicePeriod() | |
period_dates.SetDateHasService('20071230', True) | |
period_dates.SetDateHasService('20071231', False) | |
self.assertFalse(period_dates.IsActiveOn(date='20071229')) | |
self.assertFalse(period_dates.IsActiveOn(date='20071229', | |
date_object=date(2007, 12, 29))) | |
self.assertTrue(period_dates.IsActiveOn('20071230')) | |
self.assertTrue(period_dates.IsActiveOn('20071230', date(2007, 12, 30))) | |
self.assertFalse(period_dates.IsActiveOn('20071231')) | |
self.assertFalse(period_dates.IsActiveOn('20071231', date(2007, 12, 31))) | |
self.assertEquals(period_dates.ActiveDates(), ['20071230']) | |
# Test with an invalid ServicePeriod; only one of start_date, end_date is set
period_no_end = transitfeed.ServicePeriod() | |
period_no_end.start_date = '20071226' | |
self.assertFalse(period_no_end.IsActiveOn(date='20071231')) | |
self.assertFalse(period_no_end.IsActiveOn(date='20071231', | |
date_object=date(2007, 12, 31))) | |
self.assertEquals(period_no_end.ActiveDates(), []) | |
period_no_start = transitfeed.ServicePeriod() | |
period_no_start.end_date = '20071230' | |
self.assertFalse(period_no_start.IsActiveOn('20071229')) | |
self.assertFalse(period_no_start.IsActiveOn('20071229', date(2007, 12, 29))) | |
self.assertEquals(period_no_start.ActiveDates(), []) | |
period_empty = transitfeed.ServicePeriod() | |
self.assertFalse(period_empty.IsActiveOn('20071231')) | |
self.assertFalse(period_empty.IsActiveOn('20071231', date(2007, 12, 31))) | |
self.assertEquals(period_empty.ActiveDates(), []) | |
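The IsActiveOn assertions above exercise the GTFS calendar plus calendar_dates semantics: an exception date wins outright, and otherwise a date is active only when it falls inside the start/end range on an enabled weekday. A simplified standalone sketch of that logic; all names are illustrative, not the library's API:

```python
from datetime import date

def is_active_on(day, start, end, weekday_service, exceptions):
    # day, start, end are datetime.date (start/end may be None);
    # weekday_service is a set of active weekday numbers (Monday=0);
    # exceptions maps a date to True (service added) or False (removed).
    if day in exceptions:  # a calendar_dates exception wins outright
        return exceptions[day]
    if start is None or end is None:
        return False  # no usable date range: exception dates only
    return start <= day <= end and day.weekday() in weekday_service

# Mirrors the WEEKDAY period above: Dec 26-31 2007, Mon-Fri service,
# service added on Sun Dec 30 and Wed Jan 2, removed on Mon Dec 31.
exceptions = {date(2007, 12, 30): True,
              date(2007, 12, 31): False,
              date(2008, 1, 2): True}

def active(d):
    return is_active_on(d, date(2007, 12, 26), date(2007, 12, 31),
                        set(range(5)), exceptions)

assert active(date(2007, 12, 26))      # weekday inside the range
assert active(date(2007, 12, 30))      # Sunday, added by exception
assert not active(date(2007, 12, 31))  # removed by exception
assert active(date(2008, 1, 2))        # outside range, added by exception
```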
class GetServicePeriodsActiveEachDateTestCase(unittest.TestCase): | |
def testEmpty(self): | |
schedule = transitfeed.Schedule() | |
self.assertEquals( | |
[], | |
schedule.GetServicePeriodsActiveEachDate(date(2009, 1, 1), | |
date(2009, 1, 1))) | |
self.assertEquals( | |
[(date(2008, 12, 31), []), (date(2009, 1, 1), [])], | |
schedule.GetServicePeriodsActiveEachDate(date(2008, 12, 31), | |
date(2009, 1, 2))) | |
def testOneService(self): | |
schedule = transitfeed.Schedule() | |
sp1 = transitfeed.ServicePeriod() | |
sp1.service_id = "sp1" | |
sp1.SetDateHasService("20090101") | |
sp1.SetDateHasService("20090102") | |
schedule.AddServicePeriodObject(sp1) | |
self.assertEquals( | |
[], | |
schedule.GetServicePeriodsActiveEachDate(date(2009, 1, 1), | |
date(2009, 1, 1))) | |
self.assertEquals( | |
[(date(2008, 12, 31), []), (date(2009, 1, 1), [sp1])], | |
schedule.GetServicePeriodsActiveEachDate(date(2008, 12, 31), | |
date(2009, 1, 2))) | |
def testTwoService(self): | |
schedule = transitfeed.Schedule() | |
sp1 = transitfeed.ServicePeriod() | |
sp1.service_id = "sp1" | |
sp1.SetDateHasService("20081231") | |
sp1.SetDateHasService("20090101") | |
schedule.AddServicePeriodObject(sp1) | |
sp2 = transitfeed.ServicePeriod() | |
sp2.service_id = "sp2" | |
sp2.SetStartDate("20081201") | |
sp2.SetEndDate("20081231") | |
sp2.SetWeekendService() | |
sp2.SetWeekdayService() | |
schedule.AddServicePeriodObject(sp2) | |
self.assertEquals( | |
[], | |
schedule.GetServicePeriodsActiveEachDate(date(2009, 1, 1), | |
date(2009, 1, 1))) | |
date_services = schedule.GetServicePeriodsActiveEachDate(date(2008, 12, 31), | |
date(2009, 1, 2)) | |
self.assertEquals( | |
[date(2008, 12, 31), date(2009, 1, 1)], [d for d, _ in date_services]) | |
self.assertEquals(set([sp1, sp2]), set(date_services[0][1])) | |
self.assertEquals([sp1], date_services[1][1]) | |
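testEmpty and testOneService above imply that GetServicePeriodsActiveEachDate treats its arguments as a half-open range: the end date is excluded, so an equal start and end yields nothing. A quick sketch of iterating such a range (the function name is illustrative):

```python
from datetime import date, timedelta

def dates_in_half_open_range(start, end):
    # Yields start, start + 1 day, ... up to but NOT including end,
    # matching the [start, end) behavior observed in the tests above.
    d = start
    while d < end:
        yield d
        d += timedelta(days=1)

assert list(dates_in_half_open_range(date(2009, 1, 1), date(2009, 1, 1))) == []
assert list(dates_in_half_open_range(date(2008, 12, 31), date(2009, 1, 2))) == [
    date(2008, 12, 31), date(2009, 1, 1)]
```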
class TripMemoryZipTestCase(MemoryZipTestCase): | |
def assertLoadAndCheckExtraValues(self, schedule_file): | |
"""Load file-like schedule_file and check for extra trip columns.""" | |
load_problems = TestFailureProblemReporter( | |
self, ("ExpirationDate", "UnrecognizedColumn")) | |
loaded_schedule = transitfeed.Loader(schedule_file, | |
problems=load_problems, | |
extra_validation=True).Load() | |
self.assertEqual("foo", loaded_schedule.GetTrip("AB1")["t_foo"]) | |
self.assertEqual("", loaded_schedule.GetTrip("AB2")["t_foo"]) | |
self.assertEqual("", loaded_schedule.GetTrip("AB1")["n_foo"]) | |
self.assertEqual("bar", loaded_schedule.GetTrip("AB2")["n_foo"]) | |
# Uncomment the following lines to print the string in testExtraFileColumn | |
# print repr(zipfile.ZipFile(schedule_file).read("trips.txt")) | |
# self.fail() | |
def testExtraObjectAttribute(self): | |
"""Extra columns added to an object are preserved when writing.""" | |
schedule = self.loader.Load() | |
# Add an attribute to an existing trip | |
trip1 = schedule.GetTrip("AB1") | |
trip1.t_foo = "foo" | |
# Make a copy of trip_id=AB1 and add an attribute before AddTripObject | |
trip2 = transitfeed.Trip(field_dict=trip1) | |
trip2.trip_id = "AB2" | |
trip2.t_foo = "" | |
trip2.n_foo = "bar" | |
schedule.AddTripObject(trip2) | |
trip2.AddStopTime(stop=schedule.GetStop("BULLFROG"), stop_time="09:00:00") | |
trip2.AddStopTime(stop=schedule.GetStop("STAGECOACH"), stop_time="09:30:00") | |
saved_schedule_file = StringIO() | |
schedule.WriteGoogleTransitFeed(saved_schedule_file) | |
self.appendToZip(saved_schedule_file, "stop_times.txt", "")
self.problems.AssertNoMoreExceptions() | |
self.assertLoadAndCheckExtraValues(saved_schedule_file) | |
def testExtraFileColumn(self): | |
"""Extra columns loaded from a file are preserved when writing.""" | |
# Uncomment the code in assertLoadAndCheckExtraValues to generate this | |
# string. | |
self.zip.writestr( | |
"trips.txt", | |
"route_id,service_id,trip_id,t_foo,n_foo\n" | |
"AB,FULLW,AB1,foo,\n" | |
"AB,FULLW,AB2,,bar\n") | |
self.zip.writestr( | |
"stop_times.txt", | |
self.zip.read("stop_times.txt") + | |
"AB2,09:00:00,09:00:00,BULLFROG,1\n" | |
"AB2,09:30:00,09:30:00,STAGECOACH,2\n") | |
load1_problems = TestFailureProblemReporter( | |
self, ("ExpirationDate", "UnrecognizedColumn")) | |
schedule = transitfeed.Loader(problems=load1_problems, | |
extra_validation=True, | |
zip=self.zip).Load() | |
saved_schedule_file = StringIO() | |
schedule.WriteGoogleTransitFeed(saved_schedule_file) | |
self.assertLoadAndCheckExtraValues(saved_schedule_file) | |
class TripValidationTestCase(ValidationTestCase): | |
def runTest(self): | |
trip = transitfeed.Trip() | |
repr(trip) # shouldn't crash | |
schedule = transitfeed.Schedule() # Needed to find StopTimes | |
schedule.AddRouteObject( | |
transitfeed.Route(short_name="54C", long_name="", route_type="Bus", | |
route_id="054C", | |
agency_id=schedule.GetDefaultAgency().agency_id)) | |
schedule.AddServicePeriodObject(transitfeed.ServicePeriod(id="WEEK")) | |
schedule.GetDefaultServicePeriod().SetDateHasService('20070101') | |
trip = transitfeed.Trip() | |
repr(trip) # shouldn't crash | |
trip = transitfeed.Trip() | |
trip.trip_headsign = '\xBA\xDF\x0D' # Not valid ascii or utf8 | |
repr(trip) # shouldn't crash | |
trip.route_id = '054C' | |
trip.service_id = 'WEEK' | |
trip.trip_id = '054C-00' | |
trip.trip_headsign = 'via Polish Hill' | |
trip.direction_id = '0' | |
trip.block_id = None | |
trip.shape_id = None | |
trip.Validate(self.problems) | |
self.problems.AssertNoMoreExceptions() | |
repr(trip) # shouldn't crash | |
# missing route ID | |
trip.route_id = None | |
self.ExpectMissingValue(trip, 'route_id') | |
trip.route_id = '054C' | |
# missing service ID | |
trip.service_id = None | |
self.ExpectMissingValue(trip, 'service_id') | |
trip.service_id = 'WEEK' | |
# missing trip ID | |
trip.trip_id = None | |
self.ExpectMissingValue(trip, 'trip_id') | |
trip.trip_id = '054C-00' | |
# invalid direction ID | |
trip.direction_id = 'NORTH' | |
self.ExpectInvalidValue(trip, 'direction_id') | |
trip.direction_id = '0' | |
# AddTripObject validates that route_id, service_id, .... are found in the | |
# schedule. The Validate calls made by self.Expect... above can't make this | |
# check because trip is not in a schedule. | |
trip.route_id = '054C-notfound' | |
schedule.AddTripObject(trip, self.problems) | |
e = self.problems.PopException('InvalidValue') | |
self.assertEqual('route_id', e.column_name) | |
self.problems.AssertNoMoreExceptions() | |
trip.route_id = '054C' | |
# Make sure calling Trip.Validate validates that route_id and service_id | |
# are found in the schedule. | |
trip.service_id = 'WEEK-notfound' | |
trip.Validate(self.problems) | |
e = self.problems.PopException('InvalidValue') | |
self.assertEqual('service_id', e.column_name) | |
self.problems.AssertNoMoreExceptions() | |
trip.service_id = 'WEEK' | |
trip.Validate(self.problems) | |
self.problems.AssertNoMoreExceptions() | |
# expect no problems for non-overlapping periods | |
trip.AddHeadwayPeriod("06:00:00", "12:00:00", 600) | |
trip.AddHeadwayPeriod("01:00:00", "02:00:00", 1200) | |
trip.AddHeadwayPeriod("04:00:00", "05:00:00", 1000) | |
trip.AddHeadwayPeriod("12:00:00", "19:00:00", 700) | |
trip.Validate(self.problems) | |
self.problems.AssertNoMoreExceptions() | |
trip.ClearHeadwayPeriods() | |
# overlapping headway periods | |
trip.AddHeadwayPeriod("00:00:00", "12:00:00", 600) | |
trip.AddHeadwayPeriod("06:00:00", "18:00:00", 1200) | |
self.ExpectOtherProblem(trip) | |
trip.ClearHeadwayPeriods() | |
trip.AddHeadwayPeriod("12:00:00", "20:00:00", 600) | |
trip.AddHeadwayPeriod("06:00:00", "18:00:00", 1200) | |
self.ExpectOtherProblem(trip) | |
trip.ClearHeadwayPeriods() | |
trip.AddHeadwayPeriod("06:00:00", "12:00:00", 600) | |
trip.AddHeadwayPeriod("00:00:00", "25:00:00", 1200) | |
self.ExpectOtherProblem(trip) | |
trip.ClearHeadwayPeriods() | |
trip.AddHeadwayPeriod("00:00:00", "20:00:00", 600) | |
trip.AddHeadwayPeriod("06:00:00", "18:00:00", 1200) | |
self.ExpectOtherProblem(trip) | |
trip.ClearHeadwayPeriods() | |
self.problems.AssertNoMoreExceptions() | |
class TripSequenceValidationTestCase(ValidationTestCase): | |
def runTest(self): | |
trip = transitfeed.Trip() | |
schedule = transitfeed.Schedule() # Needed to find StopTimes | |
route = transitfeed.Route(short_name="54C", long_name="", route_type="Bus", | |
route_id="054C") | |
route.agency_id = schedule.GetDefaultAgency().agency_id | |
schedule.AddRouteObject(route) | |
service_period = transitfeed.ServicePeriod("WEEK") | |
service_period.SetWeekdayService(True) | |
schedule.AddServicePeriodObject(service_period) | |
trip = transitfeed.Trip() | |
trip.trip_headsign = '\xBA\xDF\x0D' # Not valid ascii or utf8 | |
trip.route_id = '054C' | |
trip.service_id = 'WEEK' | |
trip.trip_id = '054C-00' | |
trip.trip_headsign = 'via Polish Hill' | |
trip.direction_id = '0' | |
trip.block_id = None | |
trip.shape_id = None | |
stop1 = transitfeed.Stop(36.425288, -117.133162, "Demo Stop 1", "STOP1") | |
stop2 = transitfeed.Stop(36.425666, -117.133666, "Demo Stop 2", "STOP2") | |
stop3 = transitfeed.Stop(36.425999, -117.133999, "Demo Stop 3", "STOP3") | |
schedule.AddTripObject(trip) | |
schedule.AddStopObject(stop1) | |
schedule.AddStopObject(stop2) | |
schedule.AddStopObject(stop3) | |
stoptime1 = transitfeed.StopTime(self.problems, stop1, | |
stop_time='12:00:00', stop_sequence=1) | |
stoptime2 = transitfeed.StopTime(self.problems, stop2, | |
stop_time='11:30:00', stop_sequence=2) | |
stoptime3 = transitfeed.StopTime(self.problems, stop3, | |
stop_time='12:15:00', stop_sequence=3) | |
trip._AddStopTimeObjectUnordered(stoptime1, schedule) | |
trip._AddStopTimeObjectUnordered(stoptime2, schedule) | |
trip._AddStopTimeObjectUnordered(stoptime3, schedule) | |
trip.Validate(self.problems) | |
e = self.problems.PopException('OtherProblem') | |
self.assertTrue(e.FormatProblem().find('Timetravel detected') != -1) | |
self.assertTrue(e.FormatProblem().find('number 2 in trip 054C-00') != -1) | |
self.problems.AssertNoMoreExceptions() | |
class TripServiceIDValidationTestCase(ValidationTestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule(self.problems) | |
schedule.AddAgency("Test Agency", "http://example.com", | |
"America/Los_Angeles") | |
service_period = transitfeed.ServicePeriod("WEEK") | |
service_period.SetStartDate("20070101") | |
service_period.SetEndDate("20071231") | |
service_period.SetWeekdayService(True) | |
schedule.AddServicePeriodObject(service_period) | |
schedule.AddRouteObject( | |
transitfeed.Route("54C", "Polish Hill", 3, "054C")) | |
trip1 = transitfeed.Trip() | |
trip1.route_id = "054C" | |
trip1.service_id = "WEEKDAY" | |
trip1.trip_id = "054C_WEEK" | |
self.ExpectInvalidValueInClosure(column_name="service_id", | |
value="WEEKDAY", | |
c=lambda: schedule.AddTripObject(trip1)) | |
class TripHasStopTimeValidationTestCase(ValidationTestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule(self.problems) | |
schedule.AddAgency("Test Agency", "http://example.com", | |
"America/Los_Angeles") | |
schedule.AddRouteObject( | |
transitfeed.Route("54C", "Polish Hill", 3, "054C")) | |
service_period = transitfeed.ServicePeriod("WEEK") | |
service_period.SetStartDate("20070101") | |
service_period.SetEndDate("20071231") | |
service_period.SetWeekdayService(True) | |
schedule.AddServicePeriodObject(service_period) | |
trip = transitfeed.Trip() | |
trip.route_id = '054C' | |
trip.service_id = 'WEEK' | |
trip.trip_id = '054C-00' | |
trip.trip_headsign = 'via Polish Hill' | |
trip.direction_id = '0' | |
trip.block_id = None | |
trip.shape_id = None | |
schedule.AddTripObject(trip) | |
# We should get an OtherProblem here because the trip has no stops. | |
self.ExpectOtherProblem(schedule) | |
# It should trigger a TYPE_ERROR if there are frequencies for the trip | |
# but no stops | |
    trip.AddHeadwayPeriod("01:00:00", "12:00:00", 600)
schedule.Validate(self.problems) | |
self.problems.PopException('OtherProblem') # pop first warning | |
e = self.problems.PopException('OtherProblem') # pop frequency error | |
self.assertTrue(e.FormatProblem().find('Frequencies defined, but') != -1) | |
self.assertTrue(e.FormatProblem().find('given in trip 054C-00') != -1) | |
    self.assertEqual(transitfeed.TYPE_ERROR, e.type)
self.problems.AssertNoMoreExceptions() | |
trip.ClearHeadwayPeriods() | |
    # Add a stop; with only one stop, passengers have nowhere to exit!
stop = transitfeed.Stop(36.425288, -117.133162, "Demo Stop 1", "STOP1") | |
schedule.AddStopObject(stop) | |
trip.AddStopTime(stop, arrival_time="5:11:00", departure_time="5:12:00") | |
self.ExpectOtherProblem(schedule) | |
# Add another stop, and then validation should be happy. | |
stop = transitfeed.Stop(36.424288, -117.133142, "Demo Stop 2", "STOP2") | |
schedule.AddStopObject(stop) | |
trip.AddStopTime(stop, arrival_time="5:15:00", departure_time="5:16:00") | |
schedule.Validate(self.problems) | |
trip.AddStopTime(stop, stop_time="05:20:00") | |
trip.AddStopTime(stop, stop_time="05:22:00") | |
# Last stop must always have a time | |
trip.AddStopTime(stop, arrival_secs=None, departure_secs=None) | |
self.ExpectInvalidValueInClosure( | |
'arrival_time', c=lambda: trip.GetEndTime(problems=self.problems)) | |
class ShapeDistTraveledOfStopTimeValidationTestCase(ValidationTestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule(self.problems) | |
schedule.AddAgency("Test Agency", "http://example.com", | |
"America/Los_Angeles") | |
schedule.AddRouteObject( | |
transitfeed.Route("54C", "Polish Hill", 3, "054C")) | |
service_period = transitfeed.ServicePeriod("WEEK") | |
service_period.SetStartDate("20070101") | |
service_period.SetEndDate("20071231") | |
service_period.SetWeekdayService(True) | |
schedule.AddServicePeriodObject(service_period) | |
shape = transitfeed.Shape("shape_1") | |
shape.AddPoint(36.425288, -117.133162, 0) | |
shape.AddPoint(36.424288, -117.133142, 1) | |
schedule.AddShapeObject(shape) | |
trip = transitfeed.Trip() | |
trip.route_id = '054C' | |
trip.service_id = 'WEEK' | |
trip.trip_id = '054C-00' | |
trip.trip_headsign = 'via Polish Hill' | |
trip.direction_id = '0' | |
trip.block_id = None | |
trip.shape_id = 'shape_1' | |
schedule.AddTripObject(trip) | |
stop = transitfeed.Stop(36.425288, -117.133162, "Demo Stop 1", "STOP1") | |
schedule.AddStopObject(stop) | |
trip.AddStopTime(stop, arrival_time="5:11:00", departure_time="5:12:00", | |
stop_sequence=0, shape_dist_traveled=0) | |
stop = transitfeed.Stop(36.424288, -117.133142, "Demo Stop 2", "STOP2") | |
schedule.AddStopObject(stop) | |
trip.AddStopTime(stop, arrival_time="5:15:00", departure_time="5:16:00", | |
stop_sequence=1, shape_dist_traveled=1) | |
stop = transitfeed.Stop(36.423288, -117.133122, "Demo Stop 3", "STOP3") | |
schedule.AddStopObject(stop) | |
trip.AddStopTime(stop, arrival_time="5:18:00", departure_time="5:19:00", | |
stop_sequence=2, shape_dist_traveled=2) | |
self.problems.AssertNoMoreExceptions() | |
schedule.Validate(self.problems) | |
e = self.problems.PopException('OtherProblem') | |
self.assertTrue(e.FormatProblem().find('shape_dist_traveled=2') != -1) | |
self.problems.AssertNoMoreExceptions() | |
# Error if the distance decreases. | |
shape.AddPoint(36.421288, -117.133132, 2) | |
stop = transitfeed.Stop(36.421288, -117.133122, "Demo Stop 4", "STOP4") | |
schedule.AddStopObject(stop) | |
    stoptime = transitfeed.StopTime(self.problems, stop,
                                    arrival_time="5:29:00",
                                    departure_time="5:29:00",
                                    stop_sequence=3,
                                    shape_dist_traveled=1.7)
trip.AddStopTimeObject(stoptime, schedule=schedule) | |
self.problems.AssertNoMoreExceptions() | |
schedule.Validate(self.problems) | |
e = self.problems.PopException('InvalidValue') | |
self.assertMatchesRegex('stop STOP4 has', e.FormatProblem()) | |
self.assertMatchesRegex('shape_dist_traveled=1.7', e.FormatProblem()) | |
self.assertMatchesRegex('distance was 2.0.', e.FormatProblem()) | |
self.assertEqual(e.type, transitfeed.TYPE_ERROR) | |
self.problems.AssertNoMoreExceptions() | |
# Warning if distance remains the same between two stop_times | |
stoptime.shape_dist_traveled = 2.0 | |
trip.ReplaceStopTimeObject(stoptime, schedule=schedule) | |
schedule.Validate(self.problems) | |
e = self.problems.PopException('InvalidValue') | |
self.assertMatchesRegex('stop STOP4 has', e.FormatProblem()) | |
self.assertMatchesRegex('shape_dist_traveled=2.0', e.FormatProblem()) | |
self.assertMatchesRegex('distance was 2.0.', e.FormatProblem()) | |
self.assertEqual(e.type, transitfeed.TYPE_WARNING) | |
self.problems.AssertNoMoreExceptions() | |
class StopMatchWithShapeTestCase(ValidationTestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule(self.problems) | |
schedule.AddAgency("Test Agency", "http://example.com", | |
"America/Los_Angeles") | |
schedule.AddRouteObject( | |
transitfeed.Route("54C", "Polish Hill", 3, "054C")) | |
service_period = transitfeed.ServicePeriod("WEEK") | |
service_period.SetDateHasService('20070101') | |
schedule.AddServicePeriodObject(service_period) | |
shape = transitfeed.Shape("shape_1") | |
shape.AddPoint(36.425288, -117.133162, 0) | |
shape.AddPoint(36.424288, -117.143142, 1) | |
schedule.AddShapeObject(shape) | |
trip = transitfeed.Trip() | |
trip.route_id = '054C' | |
trip.service_id = 'WEEK' | |
trip.trip_id = '054C-00' | |
trip.shape_id = 'shape_1' | |
schedule.AddTripObject(trip) | |
# Stop 1 is only 600 meters away from shape, which is allowed. | |
stop = transitfeed.Stop(36.425288, -117.139162, "Demo Stop 1", "STOP1") | |
schedule.AddStopObject(stop) | |
trip.AddStopTime(stop, arrival_time="5:11:00", departure_time="5:12:00", | |
stop_sequence=0, shape_dist_traveled=0) | |
# Stop 2 is more than 1000 meters away from shape, which is not allowed. | |
stop = transitfeed.Stop(36.424288, -117.158142, "Demo Stop 2", "STOP2") | |
schedule.AddStopObject(stop) | |
trip.AddStopTime(stop, arrival_time="5:15:00", departure_time="5:16:00", | |
stop_sequence=1, shape_dist_traveled=1) | |
schedule.Validate(self.problems) | |
e = self.problems.PopException('StopTooFarFromShapeWithDistTraveled') | |
self.assertTrue(e.FormatProblem().find('Demo Stop 2') != -1) | |
self.assertTrue(e.FormatProblem().find('1344 meters away') != -1) | |
self.problems.AssertNoMoreExceptions() | |
class TripAddStopTimeObjectTestCase(ValidationTestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule(problem_reporter=self.problems) | |
schedule.AddAgency("\xc8\x8b Fly Agency", "http://iflyagency.com", | |
"America/Los_Angeles") | |
    service_period = schedule.GetDefaultServicePeriod()
    service_period.SetDateHasService('20070101')
stop1 = schedule.AddStop(lng=140, lat=48.2, name="Stop 1") | |
stop2 = schedule.AddStop(lng=140.001, lat=48.201, name="Stop 2") | |
route = schedule.AddRoute("B", "Beta", "Bus") | |
trip = route.AddTrip(schedule, "bus trip") | |
trip.AddStopTimeObject(transitfeed.StopTime(self.problems, stop1, | |
arrival_secs=10, | |
departure_secs=10), | |
schedule=schedule, problems=self.problems) | |
trip.AddStopTimeObject(transitfeed.StopTime(self.problems, stop2, | |
arrival_secs=20, | |
departure_secs=20), | |
schedule=schedule, problems=self.problems) | |
# TODO: Factor out checks or use mock problems object | |
self.ExpectOtherProblemInClosure(lambda: | |
trip.AddStopTimeObject(transitfeed.StopTime(self.problems, stop1, | |
arrival_secs=15, | |
departure_secs=15), | |
schedule=schedule, problems=self.problems)) | |
trip.AddStopTimeObject(transitfeed.StopTime(self.problems, stop1), | |
schedule=schedule, problems=self.problems) | |
self.ExpectOtherProblemInClosure(lambda: | |
trip.AddStopTimeObject(transitfeed.StopTime(self.problems, stop1, | |
arrival_secs=15, | |
departure_secs=15), | |
schedule=schedule, problems=self.problems)) | |
trip.AddStopTimeObject(transitfeed.StopTime(self.problems, stop1, | |
arrival_secs=30, | |
departure_secs=30), | |
schedule=schedule, problems=self.problems) | |
self.problems.AssertNoMoreExceptions() | |
class DuplicateTripTestCase(ValidationTestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule(self.problems) | |
    schedule._check_duplicate_trips = True
agency = transitfeed.Agency('Demo agency', 'http://google.com', | |
'America/Los_Angeles', 'agency1') | |
schedule.AddAgencyObject(agency) | |
service = schedule.GetDefaultServicePeriod() | |
service.SetDateHasService('20070101') | |
route1 = transitfeed.Route('Route1', 'route 1', 3, 'route_1', 'agency1') | |
schedule.AddRouteObject(route1) | |
route2 = transitfeed.Route('Route2', 'route 2', 3, 'route_2', 'agency1') | |
schedule.AddRouteObject(route2) | |
trip1 = transitfeed.Trip() | |
trip1.route_id = 'route_1' | |
trip1.trip_id = 't1' | |
trip1.trip_headsign = 'via Polish Hill' | |
trip1.direction_id = '0' | |
trip1.service_id = service.service_id | |
schedule.AddTripObject(trip1) | |
trip2 = transitfeed.Trip() | |
trip2.route_id = 'route_2' | |
trip2.trip_id = 't2' | |
trip2.trip_headsign = 'New' | |
trip2.direction_id = '0' | |
trip2.service_id = service.service_id | |
schedule.AddTripObject(trip2) | |
trip3 = transitfeed.Trip() | |
trip3.route_id = 'route_1' | |
trip3.trip_id = 't3' | |
trip3.trip_headsign = 'New Demo' | |
trip3.direction_id = '0' | |
trip3.service_id = service.service_id | |
schedule.AddTripObject(trip3) | |
stop1 = transitfeed.Stop(36.425288, -117.139162, "Demo Stop 1", "STOP1") | |
schedule.AddStopObject(stop1) | |
trip1.AddStopTime(stop1, arrival_time="5:11:00", departure_time="5:12:00", | |
stop_sequence=0, shape_dist_traveled=0) | |
trip2.AddStopTime(stop1, arrival_time="5:11:00", departure_time="5:12:00", | |
stop_sequence=0, shape_dist_traveled=0) | |
trip3.AddStopTime(stop1, arrival_time="6:11:00", departure_time="6:12:00", | |
stop_sequence=0, shape_dist_traveled=0) | |
stop2 = transitfeed.Stop(36.424288, -117.158142, "Demo Stop 2", "STOP2") | |
schedule.AddStopObject(stop2) | |
trip1.AddStopTime(stop2, arrival_time="5:15:00", departure_time="5:16:00", | |
stop_sequence=1, shape_dist_traveled=1) | |
trip2.AddStopTime(stop2, arrival_time="5:25:00", departure_time="5:26:00", | |
stop_sequence=1, shape_dist_traveled=1) | |
trip3.AddStopTime(stop2, arrival_time="6:15:00", departure_time="6:16:00", | |
stop_sequence=1, shape_dist_traveled=1) | |
schedule.Validate(self.problems) | |
e = self.problems.PopException('DuplicateTrip') | |
self.assertTrue(e.FormatProblem().find('t1 of route') != -1) | |
self.assertTrue(e.FormatProblem().find('t2 of route') != -1) | |
self.problems.AssertNoMoreExceptions() | |
class StopBelongsToBothSubwayAndBusTestCase(ValidationTestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule(self.problems) | |
schedule.AddAgency("Demo Agency", "http://example.com", | |
"America/Los_Angeles") | |
route1 = schedule.AddRoute(short_name="route1", long_name="route_1", | |
route_type=3) | |
route2 = schedule.AddRoute(short_name="route2", long_name="route_2", | |
route_type=1) | |
service = schedule.GetDefaultServicePeriod() | |
service.SetDateHasService("20070101") | |
trip1 = route1.AddTrip(schedule, "trip1", service, "t1") | |
trip2 = route2.AddTrip(schedule, "trip2", service, "t2") | |
stop1 = schedule.AddStop(36.425288, -117.133162, "stop1") | |
stop2 = schedule.AddStop(36.424288, -117.133142, "stop2") | |
stop3 = schedule.AddStop(36.423288, -117.134142, "stop3") | |
trip1.AddStopTime(stop1, arrival_time="5:11:00", departure_time="5:12:00") | |
trip1.AddStopTime(stop2, arrival_time="5:21:00", departure_time="5:22:00") | |
trip2.AddStopTime(stop1, arrival_time="6:11:00", departure_time="6:12:00") | |
trip2.AddStopTime(stop3, arrival_time="6:21:00", departure_time="6:22:00") | |
schedule.Validate(self.problems) | |
e = self.problems.PopException("StopWithMultipleRouteTypes") | |
self.assertTrue(e.FormatProblem().find("Stop stop1") != -1) | |
self.assertTrue(e.FormatProblem().find("subway (ID=1)") != -1) | |
self.assertTrue(e.FormatProblem().find("bus line (ID=0)") != -1) | |
self.problems.AssertNoMoreExceptions() | |
class TripReplaceStopTimeObjectTestCase(unittest.TestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule() | |
schedule.AddAgency("\xc8\x8b Fly Agency", "http://iflyagency.com", | |
"America/Los_Angeles") | |
    service_period = schedule.GetDefaultServicePeriod()
    service_period.SetDateHasService('20070101')
stop1 = schedule.AddStop(lng=140, lat=48.2, name="Stop 1") | |
route = schedule.AddRoute("B", "Beta", "Bus") | |
trip = route.AddTrip(schedule, "bus trip") | |
stoptime = transitfeed.StopTime(transitfeed.default_problem_reporter, stop1, | |
arrival_secs=10, | |
departure_secs=10) | |
trip.AddStopTimeObject(stoptime, schedule=schedule) | |
stoptimes = trip.GetStopTimes() | |
stoptime.departure_secs = 20 | |
trip.ReplaceStopTimeObject(stoptime, schedule=schedule) | |
stoptimes = trip.GetStopTimes() | |
self.assertEqual(len(stoptimes), 1) | |
self.assertEqual(stoptimes[0].departure_secs, 20) | |
unknown_stop = schedule.AddStop(lng=140, lat=48.2, name="unknown") | |
unknown_stoptime = transitfeed.StopTime( | |
transitfeed.default_problem_reporter, unknown_stop, | |
arrival_secs=10, | |
departure_secs=10) | |
unknown_stoptime.stop_sequence = 5 | |
# Attempting to replace a non-existent StopTime raises an error | |
self.assertRaises(transitfeed.Error, trip.ReplaceStopTimeObject, | |
unknown_stoptime, schedule=schedule) | |
class TripStopTimeAccessorsTestCase(unittest.TestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule( | |
problem_reporter=ExceptionProblemReporterNoExpiration()) | |
schedule.NewDefaultAgency(agency_name="Test Agency", | |
agency_url="http://example.com", | |
agency_timezone="America/Los_Angeles") | |
    route = schedule.AddRoute(short_name="54C", long_name="Polish Hill",
                              route_type=3)
service_period = schedule.GetDefaultServicePeriod() | |
service_period.SetDateHasService("20070101") | |
trip = route.AddTrip(schedule, 'via Polish Hill') | |
stop1 = schedule.AddStop(36.425288, -117.133162, "Demo Stop 1") | |
stop2 = schedule.AddStop(36.424288, -117.133142, "Demo Stop 2") | |
trip.AddStopTime(stop1, arrival_time="5:11:00", departure_time="5:12:00") | |
trip.AddStopTime(stop2, arrival_time="5:15:00", departure_time="5:16:00") | |
# Add some more stop times and test GetEndTime does the correct thing | |
self.assertEqual(transitfeed.FormatSecondsSinceMidnight(trip.GetStartTime()), | |
"05:11:00") | |
self.assertEqual(transitfeed.FormatSecondsSinceMidnight(trip.GetEndTime()), | |
"05:16:00") | |
trip.AddStopTime(stop1, stop_time="05:20:00") | |
self.assertEqual(transitfeed.FormatSecondsSinceMidnight(trip.GetEndTime()), | |
"05:20:00") | |
trip.AddStopTime(stop2, stop_time="05:22:00") | |
self.assertEqual(transitfeed.FormatSecondsSinceMidnight(trip.GetEndTime()), | |
"05:22:00") | |
self.assertEqual(len(trip.GetStopTimesTuples()), 4) | |
self.assertEqual(trip.GetStopTimesTuples()[0], (trip.trip_id, "05:11:00", | |
"05:12:00", stop1.stop_id, | |
1, '', '', '', '')) | |
self.assertEqual(trip.GetStopTimesTuples()[3], (trip.trip_id, "05:22:00", | |
"05:22:00", stop2.stop_id, | |
4, '', '', '', '')) | |
class TripClearStopTimesTestCase(unittest.TestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule( | |
problem_reporter=ExceptionProblemReporterNoExpiration()) | |
schedule.NewDefaultAgency(agency_name="Test Agency", | |
agency_timezone="America/Los_Angeles") | |
route = schedule.AddRoute(short_name="54C", long_name="Hill", route_type=3) | |
schedule.GetDefaultServicePeriod().SetDateHasService("20070101") | |
stop1 = schedule.AddStop(36, -117.1, "Demo Stop 1") | |
stop2 = schedule.AddStop(36, -117.2, "Demo Stop 2") | |
stop3 = schedule.AddStop(36, -117.3, "Demo Stop 3") | |
trip = route.AddTrip(schedule, "via Polish Hill") | |
trip.ClearStopTimes() | |
self.assertFalse(trip.GetStopTimes()) | |
trip.AddStopTime(stop1, stop_time="5:11:00") | |
self.assertTrue(trip.GetStopTimes()) | |
trip.ClearStopTimes() | |
self.assertFalse(trip.GetStopTimes()) | |
trip.AddStopTime(stop3, stop_time="4:00:00") # Can insert earlier time | |
trip.AddStopTime(stop2, stop_time="4:15:00") | |
trip.AddStopTime(stop1, stop_time="4:21:00") | |
old_stop_times = trip.GetStopTimes() | |
self.assertTrue(old_stop_times) | |
trip.ClearStopTimes() | |
self.assertFalse(trip.GetStopTimes()) | |
for st in old_stop_times: | |
trip.AddStopTimeObject(st) | |
self.assertEqual(trip.GetStartTime(), 4 * 3600) | |
self.assertEqual(trip.GetEndTime(), 4 * 3600 + 21 * 60) | |
class BasicParsingTestCase(unittest.TestCase): | |
"""Checks that we're getting the number of child objects that we expect.""" | |
def assertLoadedCorrectly(self, schedule): | |
"""Check that the good_feed looks correct""" | |
self.assertEqual(1, len(schedule._agencies)) | |
self.assertEqual(5, len(schedule.routes)) | |
self.assertEqual(2, len(schedule.service_periods)) | |
self.assertEqual(10, len(schedule.stops)) | |
self.assertEqual(11, len(schedule.trips)) | |
self.assertEqual(0, len(schedule.fare_zones)) | |
def assertLoadedStopTimesCorrectly(self, schedule): | |
self.assertEqual(5, len(schedule.GetTrip('CITY1').GetStopTimes())) | |
    self.assertEqual('to airport',
                     schedule.GetTrip('STBA').GetStopTimes()[0].stop_headsign)
self.assertEqual(2, schedule.GetTrip('CITY1').GetStopTimes()[1].pickup_type) | |
self.assertEqual(3, schedule.GetTrip('CITY1').GetStopTimes()[1].drop_off_type) | |
def test_MemoryDb(self): | |
loader = transitfeed.Loader( | |
DataPath('good_feed.zip'), | |
problems=TestFailureProblemReporter(self), | |
extra_validation=True, | |
memory_db=True) | |
schedule = loader.Load() | |
self.assertLoadedCorrectly(schedule) | |
self.assertLoadedStopTimesCorrectly(schedule) | |
def test_TemporaryFile(self): | |
loader = transitfeed.Loader( | |
DataPath('good_feed.zip'), | |
problems=TestFailureProblemReporter(self), | |
extra_validation=True, | |
memory_db=False) | |
schedule = loader.Load() | |
self.assertLoadedCorrectly(schedule) | |
self.assertLoadedStopTimesCorrectly(schedule) | |
def test_NoLoadStopTimes(self): | |
problems = TestFailureProblemReporter( | |
self, ignore_types=("ExpirationDate", "UnusedStop", "OtherProblem")) | |
loader = transitfeed.Loader( | |
DataPath('good_feed.zip'), | |
problems=problems, | |
extra_validation=True, | |
load_stop_times=False) | |
schedule = loader.Load() | |
self.assertLoadedCorrectly(schedule) | |
self.assertEqual(0, len(schedule.GetTrip('CITY1').GetStopTimes())) | |
class RepeatedRouteNameTestCase(LoadTestCase): | |
def runTest(self): | |
self.ExpectInvalidValue('repeated_route_name', 'route_long_name') | |
class InvalidRouteAgencyTestCase(LoadTestCase): | |
def runTest(self): | |
self.Load('invalid_route_agency') | |
self.problems.PopInvalidValue("agency_id", "routes.txt") | |
self.problems.PopInvalidValue("route_id", "trips.txt") | |
self.problems.AssertNoMoreExceptions() | |
class UndefinedStopAgencyTestCase(LoadTestCase): | |
def runTest(self): | |
self.ExpectInvalidValue('undefined_stop', 'stop_id') | |
class SameShortLongNameTestCase(LoadTestCase): | |
def runTest(self): | |
self.ExpectInvalidValue('same_short_long_name', 'route_long_name') | |
class UnusedStopAgencyTestCase(LoadTestCase): | |
def runTest(self): | |
    self.Load('unused_stop')
e = self.problems.PopException("UnusedStop") | |
self.assertEqual("Bogus Stop (Demo)", e.stop_name) | |
self.assertEqual("BOGUS", e.stop_id) | |
self.problems.AssertNoMoreExceptions() | |
class OnlyCalendarDatesTestCase(LoadTestCase): | |
def runTest(self): | |
    self.Load('only_calendar_dates')
self.problems.AssertNoMoreExceptions() | |
class DuplicateServiceIdDateWarningTestCase(MemoryZipTestCase): | |
def runTest(self): | |
# Two lines with the same value of service_id and date. | |
# Test for the warning. | |
self.zip.writestr( | |
'calendar_dates.txt', | |
'service_id,date,exception_type\n' | |
'FULLW,20100604,1\n' | |
'FULLW,20100604,2\n') | |
schedule = self.loader.Load() | |
e = self.problems.PopException('DuplicateID') | |
    self.assertEqual('(service_id, date)', e.column_name)
    self.assertEqual('(FULLW, 20100604)', e.value)
class AddStopTimeParametersTestCase(unittest.TestCase): | |
def runTest(self): | |
problem_reporter = TestFailureProblemReporter(self) | |
schedule = transitfeed.Schedule(problem_reporter=problem_reporter) | |
route = schedule.AddRoute(short_name="10", long_name="", route_type="Bus") | |
stop = schedule.AddStop(40, -128, "My stop") | |
# Stop must be added to schedule so that the call | |
# AddStopTime -> AddStopTimeObject -> GetStopTimes -> GetStop can work | |
trip = transitfeed.Trip() | |
trip.route_id = route.route_id | |
trip.service_id = schedule.GetDefaultServicePeriod().service_id | |
trip.trip_id = "SAMPLE_TRIP" | |
schedule.AddTripObject(trip) | |
# First stop must have time | |
trip.AddStopTime(stop, arrival_secs=300, departure_secs=360) | |
trip.AddStopTime(stop) | |
trip.AddStopTime(stop, arrival_time="00:07:00", departure_time="00:07:30") | |
trip.Validate(problem_reporter) | |
class ExpirationDateTestCase(unittest.TestCase): | |
def runTest(self): | |
problems = RecordingProblemReporter(self) | |
schedule = transitfeed.Schedule(problem_reporter=problems) | |
now = time.mktime(time.localtime()) | |
seconds_per_day = 60 * 60 * 24 | |
two_weeks_ago = time.localtime(now - 14 * seconds_per_day) | |
two_weeks_from_now = time.localtime(now + 14 * seconds_per_day) | |
two_months_from_now = time.localtime(now + 60 * seconds_per_day) | |
date_format = "%Y%m%d" | |
service_period = schedule.GetDefaultServicePeriod() | |
service_period.SetWeekdayService(True) | |
service_period.SetStartDate("20070101") | |
service_period.SetEndDate(time.strftime(date_format, two_months_from_now)) | |
schedule.Validate() # should have no problems | |
problems.AssertNoMoreExceptions() | |
service_period.SetEndDate(time.strftime(date_format, two_weeks_from_now)) | |
schedule.Validate() | |
e = problems.PopException('ExpirationDate') | |
    self.assertTrue(e.FormatProblem().find('will soon expire') != -1)
problems.AssertNoMoreExceptions() | |
service_period.SetEndDate(time.strftime(date_format, two_weeks_ago)) | |
schedule.Validate() | |
e = problems.PopException('ExpirationDate') | |
    self.assertTrue(e.FormatProblem().find('expired') != -1)
problems.AssertNoMoreExceptions() | |
class FutureServiceStartDateTestCase(unittest.TestCase): | |
def runTest(self): | |
problems = RecordingProblemReporter(self) | |
schedule = transitfeed.Schedule(problem_reporter=problems) | |
today = datetime.date.today() | |
yesterday = today - datetime.timedelta(days=1) | |
tomorrow = today + datetime.timedelta(days=1) | |
two_months_from_today = today + datetime.timedelta(days=60) | |
service_period = schedule.GetDefaultServicePeriod() | |
service_period.SetWeekdayService(True) | |
service_period.SetWeekendService(True) | |
service_period.SetEndDate(two_months_from_today.strftime("%Y%m%d")) | |
service_period.SetStartDate(yesterday.strftime("%Y%m%d")) | |
schedule.Validate() | |
problems.AssertNoMoreExceptions() | |
service_period.SetStartDate(today.strftime("%Y%m%d")) | |
schedule.Validate() | |
problems.AssertNoMoreExceptions() | |
service_period.SetStartDate(tomorrow.strftime("%Y%m%d")) | |
schedule.Validate() | |
problems.PopException('FutureService') | |
problems.AssertNoMoreExceptions() | |
class CalendarTxtIntegrationTestCase(MemoryZipTestCase): | |
def testBadEndDateFormat(self): | |
# A badly formatted end_date used to generate an InvalidValue report from | |
# Schedule.Validate and ServicePeriod.Validate. Test for the bug. | |
self.zip.writestr( | |
"calendar.txt", | |
"service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday," | |
"start_date,end_date\n" | |
"FULLW,1,1,1,1,1,1,1,20070101,20101232\n" | |
"WE,0,0,0,0,0,1,1,20070101,20101231\n") | |
schedule = self.loader.Load() | |
e = self.problems.PopInvalidValue('end_date') | |
self.problems.AssertNoMoreExceptions() | |
def testBadStartDateFormat(self): | |
self.zip.writestr( | |
"calendar.txt", | |
"service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday," | |
"start_date,end_date\n" | |
"FULLW,1,1,1,1,1,1,1,200701xx,20101231\n" | |
"WE,0,0,0,0,0,1,1,20070101,20101231\n") | |
schedule = self.loader.Load() | |
e = self.problems.PopInvalidValue('start_date') | |
self.problems.AssertNoMoreExceptions() | |
def testNoStartDateAndEndDate(self): | |
"""Regression test for calendar.txt with empty start_date and end_date. | |
See http://code.google.com/p/googletransitdatafeed/issues/detail?id=41 | |
""" | |
self.zip.writestr( | |
"calendar.txt", | |
"service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday," | |
"start_date,end_date\n" | |
"FULLW,1,1,1,1,1,1,1, ,\t\n" | |
"WE,0,0,0,0,0,1,1,20070101,20101231\n") | |
schedule = self.loader.Load() | |
e = self.problems.PopException("MissingValue") | |
self.assertEquals(2, e.row_num) | |
self.assertEquals("start_date", e.column_name) | |
e = self.problems.PopException("MissingValue") | |
self.assertEquals(2, e.row_num) | |
self.assertEquals("end_date", e.column_name) | |
self.problems.AssertNoMoreExceptions() | |
def testNoStartDateAndBadEndDate(self): | |
self.zip.writestr( | |
"calendar.txt", | |
"service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday," | |
"start_date,end_date\n" | |
"FULLW,1,1,1,1,1,1,1,,abc\n" | |
"WE,0,0,0,0,0,1,1,20070101,20101231\n") | |
schedule = self.loader.Load() | |
e = self.problems.PopException("MissingValue") | |
self.assertEquals(2, e.row_num) | |
self.assertEquals("start_date", e.column_name) | |
e = self.problems.PopInvalidValue("end_date") | |
self.assertEquals(2, e.row_num) | |
self.problems.AssertNoMoreExceptions() | |
def testMissingEndDateColumn(self): | |
self.zip.writestr( | |
"calendar.txt", | |
"service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday," | |
"start_date\n" | |
"FULLW,1,1,1,1,1,1,1,20070101\n" | |
"WE,0,0,0,0,0,1,1,20070101\n") | |
schedule = self.loader.Load() | |
e = self.problems.PopException("MissingColumn") | |
self.assertEquals("end_date", e.column_name) | |
self.problems.AssertNoMoreExceptions() | |
class DuplicateTripIDValidationTestCase(unittest.TestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule( | |
problem_reporter=ExceptionProblemReporterNoExpiration()) | |
schedule.AddAgency("Sample Agency", "http://example.com", | |
"America/Los_Angeles") | |
route = transitfeed.Route() | |
route.route_id = "SAMPLE_ID" | |
route.route_type = 3 | |
route.route_long_name = "Sample Route" | |
schedule.AddRouteObject(route) | |
service_period = transitfeed.ServicePeriod("WEEK") | |
service_period.SetStartDate("20070101") | |
service_period.SetEndDate("20071231") | |
service_period.SetWeekdayService(True) | |
schedule.AddServicePeriodObject(service_period) | |
trip1 = transitfeed.Trip() | |
trip1.route_id = "SAMPLE_ID" | |
trip1.service_id = "WEEK" | |
trip1.trip_id = "SAMPLE_TRIP" | |
schedule.AddTripObject(trip1) | |
trip2 = transitfeed.Trip() | |
trip2.route_id = "SAMPLE_ID" | |
trip2.service_id = "WEEK" | |
trip2.trip_id = "SAMPLE_TRIP" | |
try: | |
schedule.AddTripObject(trip2) | |
self.fail("Expected Duplicate ID validation failure") | |
except transitfeed.DuplicateID, e: | |
self.assertEqual("trip_id", e.column_name) | |
self.assertEqual("SAMPLE_TRIP", e.value) | |
class DuplicateStopValidationTestCase(ValidationTestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule(problem_reporter=self.problems) | |
schedule.AddAgency("Sample Agency", "http://example.com", | |
"America/Los_Angeles") | |
route = transitfeed.Route() | |
route.route_id = "SAMPLE_ID" | |
route.route_type = 3 | |
route.route_long_name = "Sample Route" | |
schedule.AddRouteObject(route) | |
service_period = transitfeed.ServicePeriod("WEEK") | |
service_period.SetStartDate("20070101") | |
service_period.SetEndDate("20071231") | |
service_period.SetWeekdayService(True) | |
schedule.AddServicePeriodObject(service_period) | |
trip = transitfeed.Trip() | |
trip.route_id = "SAMPLE_ID" | |
trip.service_id = "WEEK" | |
trip.trip_id = "SAMPLE_TRIP" | |
schedule.AddTripObject(trip) | |
stop1 = transitfeed.Stop() | |
stop1.stop_id = "STOP1" | |
stop1.stop_name = "Stop 1" | |
stop1.stop_lat = 78.243587 | |
stop1.stop_lon = 32.258937 | |
schedule.AddStopObject(stop1) | |
trip.AddStopTime(stop1, arrival_time="12:00:00", departure_time="12:00:00") | |
stop2 = transitfeed.Stop() | |
stop2.stop_id = "STOP2" | |
stop2.stop_name = "Stop 2" | |
stop2.stop_lat = 78.253587 | |
stop2.stop_lon = 32.258937 | |
schedule.AddStopObject(stop2) | |
trip.AddStopTime(stop2, arrival_time="12:05:00", departure_time="12:05:00") | |
schedule.Validate() | |
stop3 = transitfeed.Stop() | |
stop3.stop_id = "STOP3" | |
stop3.stop_name = "Stop 3" | |
stop3.stop_lat = 78.243587 | |
stop3.stop_lon = 32.268937 | |
schedule.AddStopObject(stop3) | |
trip.AddStopTime(stop3, arrival_time="12:10:00", departure_time="12:10:00") | |
schedule.Validate() | |
self.problems.AssertNoMoreExceptions() | |
stop4 = transitfeed.Stop() | |
stop4.stop_id = "STOP4" | |
stop4.stop_name = "Stop 4" | |
stop4.stop_lat = 78.243588 | |
stop4.stop_lon = 32.268936 | |
schedule.AddStopObject(stop4) | |
trip.AddStopTime(stop4, arrival_time="12:15:00", departure_time="12:15:00") | |
schedule.Validate() | |
e = self.problems.PopException('StopsTooClose') | |
self.problems.AssertNoMoreExceptions() | |
class TempFileTestCaseBase(unittest.TestCase): | |
""" | |
Subclass of TestCase which sets self.tempfilepath to a valid temporary zip | |
file name and removes the file if it exists when the test is done. | |
""" | |
def setUp(self): | |
(fd, self.tempfilepath) = tempfile.mkstemp(".zip") | |
# An open file handle causes an exception during remove on Windows.
os.close(fd) | |
def tearDown(self): | |
if os.path.exists(self.tempfilepath): | |
os.remove(self.tempfilepath) | |
class MinimalWriteTestCase(TempFileTestCaseBase): | |
""" | |
This test case simply constructs an incomplete feed with very few | |
fields set and ensures that there are no exceptions when writing it out. | |
This is very similar to TransitFeedSampleCodeTestCase below, but that one | |
will no doubt change as the sample code is altered. | |
""" | |
def runTest(self): | |
schedule = transitfeed.Schedule() | |
schedule.AddAgency("Sample Agency", "http://example.com", | |
"America/Los_Angeles") | |
route = transitfeed.Route() | |
route.route_id = "SAMPLE_ID" | |
route.route_type = 3 | |
route.route_short_name = "66" | |
route.route_long_name = "Sample Route acute letter e\202" | |
schedule.AddRouteObject(route) | |
service_period = transitfeed.ServicePeriod("WEEK") | |
service_period.SetStartDate("20070101") | |
service_period.SetEndDate("20071231") | |
service_period.SetWeekdayService(True) | |
schedule.AddServicePeriodObject(service_period) | |
trip = transitfeed.Trip() | |
trip.route_id = "SAMPLE_ID" | |
trip.service_period = service_period | |
trip.trip_id = "SAMPLE_TRIP" | |
schedule.AddTripObject(trip) | |
stop1 = transitfeed.Stop() | |
stop1.stop_id = "STOP1" | |
stop1.stop_name = u'Stop 1 acute letter e\202' | |
stop1.stop_lat = 78.243587 | |
stop1.stop_lon = 32.258937 | |
schedule.AddStopObject(stop1) | |
trip.AddStopTime(stop1, arrival_time="12:00:00", departure_time="12:00:00") | |
stop2 = transitfeed.Stop() | |
stop2.stop_id = "STOP2" | |
stop2.stop_name = "Stop 2" | |
stop2.stop_lat = 78.253587 | |
stop2.stop_lon = 32.258937 | |
schedule.AddStopObject(stop2) | |
trip.AddStopTime(stop2, arrival_time="12:05:00", departure_time="12:05:00") | |
schedule.Validate() | |
schedule.WriteGoogleTransitFeed(self.tempfilepath) | |
class TransitFeedSampleCodeTestCase(unittest.TestCase): | |
""" | |
This test should simply contain the sample code printed on the page: | |
http://code.google.com/p/googletransitdatafeed/wiki/TransitFeed | |
to ensure that it doesn't cause any exceptions. | |
""" | |
def runTest(self): | |
import transitfeed | |
schedule = transitfeed.Schedule() | |
schedule.AddAgency("Sample Agency", "http://example.com", | |
"America/Los_Angeles") | |
route = transitfeed.Route() | |
route.route_id = "SAMPLE_ID" | |
route.route_type = 3 | |
route.route_short_name = "66" | |
route.route_long_name = "Sample Route" | |
schedule.AddRouteObject(route) | |
service_period = transitfeed.ServicePeriod("WEEK") | |
service_period.SetStartDate("20070101") | |
service_period.SetEndDate("20071231") | |
service_period.SetWeekdayService(True) | |
schedule.AddServicePeriodObject(service_period) | |
trip = transitfeed.Trip() | |
trip.route_id = "SAMPLE_ID" | |
trip.service_period = service_period | |
trip.trip_id = "SAMPLE_TRIP" | |
trip.direction_id = "0" | |
trip.block_id = None | |
schedule.AddTripObject(trip) | |
stop1 = transitfeed.Stop() | |
stop1.stop_id = "STOP1" | |
stop1.stop_name = "Stop 1" | |
stop1.stop_lat = 78.243587 | |
stop1.stop_lon = 32.258937 | |
schedule.AddStopObject(stop1) | |
trip.AddStopTime(stop1, arrival_time="12:00:00", departure_time="12:00:00") | |
stop2 = transitfeed.Stop() | |
stop2.stop_id = "STOP2" | |
stop2.stop_name = "Stop 2" | |
stop2.stop_lat = 78.253587 | |
stop2.stop_lon = 32.258937 | |
schedule.AddStopObject(stop2) | |
trip.AddStopTime(stop2, arrival_time="12:05:00", departure_time="12:05:00") | |
schedule.Validate() # not necessary, but helpful for finding problems | |
schedule.WriteGoogleTransitFeed("new_feed.zip") | |
class AgencyIDValidationTestCase(unittest.TestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule( | |
problem_reporter=ExceptionProblemReporterNoExpiration()) | |
route = transitfeed.Route() | |
route.route_id = "SAMPLE_ID" | |
route.route_type = 3 | |
route.route_long_name = "Sample Route" | |
# No agency defined yet; adding the route should fail.
try: | |
schedule.AddRouteObject(route) | |
self.fail("Expected validation error") | |
except transitfeed.InvalidValue, e: | |
self.assertEqual('agency_id', e.column_name) | |
self.assertEqual(None, e.value) | |
# one agency defined, assume that the route belongs to it | |
schedule.AddAgency("Test Agency", "http://example.com", | |
"America/Los_Angeles", "TEST_AGENCY") | |
schedule.AddRouteObject(route) | |
schedule.AddAgency("Test Agency 2", "http://example.com", | |
"America/Los_Angeles", "TEST_AGENCY_2") | |
route = transitfeed.Route() | |
route.route_id = "SAMPLE_ID_2" | |
route.route_type = 3 | |
route.route_long_name = "Sample Route 2" | |
# multiple agencies defined, don't know what omitted agency_id should be | |
try: | |
schedule.AddRouteObject(route) | |
self.fail("Expected validation error") | |
except transitfeed.InvalidValue, e: | |
self.assertEqual('agency_id', e.column_name) | |
self.assertEqual(None, e.value) | |
# An agency with no agency_id defined matches a route with no agency_id.
schedule.AddAgency("Test Agency 3", "http://example.com", | |
"America/Los_Angeles") | |
schedule.AddRouteObject(route) | |
class AddHeadwayPeriodValidationTestCase(ValidationTestCase): | |
def ExpectInvalidValue(self, start_time, end_time, headway, | |
column_name, value): | |
try: | |
trip = transitfeed.Trip() | |
trip.AddHeadwayPeriod(start_time, end_time, headway) | |
self.fail("Expected InvalidValue error on %s" % column_name) | |
except transitfeed.InvalidValue, e: | |
self.assertEqual(column_name, e.column_name) | |
self.assertEqual(value, e.value) | |
self.assertEqual(0, len(trip.GetHeadwayPeriodTuples())) | |
def ExpectMissingValue(self, start_time, end_time, headway, column_name): | |
try: | |
trip = transitfeed.Trip() | |
trip.AddHeadwayPeriod(start_time, end_time, headway) | |
self.fail("Expected MissingValue error on %s" % column_name) | |
except transitfeed.MissingValue, e: | |
self.assertEqual(column_name, e.column_name) | |
self.assertEqual(0, len(trip.GetHeadwayPeriodTuples())) | |
def runTest(self): | |
# these should work fine | |
trip = transitfeed.Trip() | |
trip.trip_id = "SAMPLE_ID" | |
trip.AddHeadwayPeriod(0, 50, 1200) | |
trip.AddHeadwayPeriod("01:00:00", "02:00:00", "600") | |
trip.AddHeadwayPeriod(u"02:00:00", u"03:00:00", u"1800") | |
headways = trip.GetHeadwayPeriodTuples() | |
self.assertEqual(3, len(headways)) | |
self.assertEqual((0, 50, 1200), headways[0]) | |
self.assertEqual((3600, 7200, 600), headways[1]) | |
self.assertEqual((7200, 10800, 1800), headways[2]) | |
self.assertEqual([("SAMPLE_ID", "00:00:00", "00:00:50", "1200"), | |
("SAMPLE_ID", "01:00:00", "02:00:00", "600"), | |
("SAMPLE_ID", "02:00:00", "03:00:00", "1800")], | |
trip.GetHeadwayPeriodOutputTuples()) | |
# now test invalid input | |
self.ExpectMissingValue(None, 50, 1200, "start_time") | |
self.ExpectMissingValue("", 50, 1200, "start_time") | |
self.ExpectInvalidValue("midnight", 50, 1200, "start_time", "midnight") | |
self.ExpectInvalidValue(-50, 50, 1200, "start_time", -50) | |
self.ExpectMissingValue(0, None, 1200, "end_time") | |
self.ExpectMissingValue(0, "", 1200, "end_time") | |
self.ExpectInvalidValue(0, "noon", 1200, "end_time", "noon") | |
self.ExpectInvalidValue(0, -50, 1200, "end_time", -50) | |
self.ExpectMissingValue(0, 600, 0, "headway_secs") | |
self.ExpectMissingValue(0, 600, None, "headway_secs") | |
self.ExpectMissingValue(0, 600, "", "headway_secs") | |
self.ExpectInvalidValue(0, 600, "test", "headway_secs", "test") | |
self.ExpectInvalidValue(0, 600, -60, "headway_secs", -60) | |
self.ExpectInvalidValue(0, 0, 1200, "end_time", 0) | |
self.ExpectInvalidValue("12:00:00", "06:00:00", 1200, "end_time", 21600) | |
class MinimalUtf8Builder(TempFileTestCaseBase): | |
def runTest(self): | |
problems = TestFailureProblemReporter(self) | |
schedule = transitfeed.Schedule(problem_reporter=problems) | |
schedule.AddAgency("\xc8\x8b Fly Agency", "http://iflyagency.com", | |
"America/Los_Angeles") | |
service_period = schedule.GetDefaultServicePeriod() | |
service_period.SetDateHasService('20070101') | |
# U+020B "latin small letter i with inverted breve" encoded in utf-8
stop1 = schedule.AddStop(lng=140, lat=48.2, name="\xc8\x8b hub") | |
# U+020B "latin small letter i with inverted breve" as a unicode string
stop2 = schedule.AddStop(lng=140.001, lat=48.201, name=u"remote \u020b station") | |
route = schedule.AddRoute(u"\u03b2", "Beta", "Bus") | |
trip = route.AddTrip(schedule, u"to remote \u020b station") | |
repr(stop1) | |
repr(stop2) | |
repr(route) | |
repr(trip) | |
trip.AddStopTime(stop1, schedule=schedule, stop_time='10:00:00') | |
trip.AddStopTime(stop2, stop_time='10:10:00') | |
schedule.Validate(problems) | |
schedule.WriteGoogleTransitFeed(self.tempfilepath) | |
read_schedule = \ | |
transitfeed.Loader(self.tempfilepath, problems=problems, | |
extra_validation=True).Load() | |
class ScheduleBuilderTestCase(unittest.TestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule() | |
schedule.AddAgency("Test Agency", "http://example.com", | |
"America/Los_Angeles") | |
service_period = schedule.GetDefaultServicePeriod() | |
self.assertTrue(service_period.service_id) | |
service_period.SetWeekdayService(has_service=True) | |
service_period.SetStartDate("20070320") | |
service_period.SetEndDate("20071231") | |
stop1 = schedule.AddStop(lng=-140.12, lat=48.921, | |
name="one forty at forty eight") | |
stop2 = schedule.AddStop(lng=-140.22, lat=48.421, name="west and south") | |
stop3 = schedule.AddStop(lng=-140.32, lat=48.121, name="more away") | |
stop4 = schedule.AddStop(lng=-140.42, lat=48.021, name="more more away") | |
route = schedule.AddRoute(short_name="R", long_name="My Route", | |
route_type="Bus") | |
self.assertTrue(route.route_id) | |
self.assertEqual(route.route_short_name, "R") | |
self.assertEqual(route.route_type, 3) | |
trip = route.AddTrip(schedule, headsign="To The End", | |
service_period=service_period) | |
trip_id = trip.trip_id | |
self.assertTrue(trip_id) | |
trip = schedule.GetTrip(trip_id) | |
self.assertEqual("To The End", trip.trip_headsign) | |
self.assertEqual(service_period, trip.service_period) | |
trip.AddStopTime(stop=stop1, arrival_secs=3600*8, departure_secs=3600*8) | |
trip.AddStopTime(stop=stop2) | |
trip.AddStopTime(stop=stop3, arrival_secs=3600*8 + 60*60, | |
departure_secs=3600*8 + 60*60) | |
trip.AddStopTime(stop=stop4, arrival_time="9:13:00", | |
departure_secs=3600*8 + 60*103, stop_headsign="Last stop", | |
pickup_type=1, drop_off_type=3) | |
schedule.Validate() | |
self.assertEqual(4, len(trip.GetTimeStops())) | |
self.assertEqual(1, len(schedule.GetRouteList())) | |
self.assertEqual(4, len(schedule.GetStopList())) | |
class WriteSampleFeedTestCase(TempFileTestCaseBase): | |
def assertEqualTimeString(self, a, b): | |
"""Assert that a and b are equal, even if they don't have the same zero
padding on the hour, e.g. 08:45:00 vs 8:45:00."""
if a[1] == ':': | |
a = '0' + a | |
if b[1] == ':': | |
b = '0' + b | |
self.assertEqual(a, b) | |
def assertEqualWithDefault(self, a, b, default): | |
"""Assert that a and b are equal. Treat None and default as equal.""" | |
if a == b: | |
return | |
if a in (None, default) and b in (None, default): | |
return | |
self.assertTrue(False, "a=%s b=%s" % (a, b)) | |
def runTest(self): | |
problems = RecordingProblemReporter(self, ignore_types=("ExpirationDate",)) | |
schedule = transitfeed.Schedule(problem_reporter=problems) | |
agency = transitfeed.Agency() | |
agency.agency_id = "DTA" | |
agency.agency_name = "Demo Transit Authority" | |
agency.agency_url = "http://google.com" | |
agency.agency_timezone = "America/Los_Angeles" | |
agency.agency_lang = 'en' | |
# Test that unknown columns, such as agency_mission, are preserved | |
agency.agency_mission = "Get You There" | |
schedule.AddAgencyObject(agency) | |
routes = [] | |
route_data = [ | |
("AB", "DTA", "10", "Airport - Bullfrog", 3), | |
("BFC", "DTA", "20", "Bullfrog - Furnace Creek Resort", 3), | |
("STBA", "DTA", "30", "Stagecoach - Airport Shuttle", 3), | |
("CITY", "DTA", "40", "City", 3), | |
("AAMV", "DTA", "50", "Airport - Amargosa Valley", 3) | |
] | |
for route_entry in route_data: | |
route = transitfeed.Route() | |
(route.route_id, route.agency_id, route.route_short_name, | |
route.route_long_name, route.route_type) = route_entry | |
routes.append(route) | |
schedule.AddRouteObject(route) | |
shape_data = [ | |
(36.915760, -116.751709), | |
(36.905018, -116.763206), | |
(36.902134, -116.777969), | |
(36.904091, -116.788185), | |
(36.883602, -116.814537), | |
(36.874523, -116.795593), | |
(36.873302, -116.786491), | |
(36.869202, -116.784241), | |
(36.868515, -116.784729), | |
] | |
shape = transitfeed.Shape("BFC1S") | |
for (lat, lon) in shape_data: | |
shape.AddPoint(lat, lon) | |
schedule.AddShapeObject(shape) | |
week_period = transitfeed.ServicePeriod() | |
week_period.service_id = "FULLW" | |
week_period.start_date = "20070101" | |
week_period.end_date = "20071231" | |
week_period.SetWeekdayService() | |
week_period.SetWeekendService() | |
week_period.SetDateHasService("20070604", False) | |
schedule.AddServicePeriodObject(week_period) | |
weekend_period = transitfeed.ServicePeriod() | |
weekend_period.service_id = "WE" | |
weekend_period.start_date = "20070101" | |
weekend_period.end_date = "20071231" | |
weekend_period.SetWeekendService() | |
schedule.AddServicePeriodObject(weekend_period) | |
stops = [] | |
stop_data = [ | |
("FUR_CREEK_RES", "Furnace Creek Resort (Demo)", | |
36.425288, -117.133162, "zone-a", "1234"), | |
("BEATTY_AIRPORT", "Nye County Airport (Demo)", | |
36.868446, -116.784682, "zone-a", "1235"), | |
("BULLFROG", "Bullfrog (Demo)", 36.88108, -116.81797, "zone-b", "1236"), | |
("STAGECOACH", "Stagecoach Hotel & Casino (Demo)", | |
36.915682, -116.751677, "zone-c", "1237"), | |
("NADAV", "North Ave / D Ave N (Demo)", 36.914893, -116.76821, "", ""), | |
("NANAA", "North Ave / N A Ave (Demo)", 36.914944, -116.761472, "", ""), | |
("DADAN", "Doing Ave / D Ave N (Demo)", 36.909489, -116.768242, "", ""),
("EMSI", "E Main St / S Irving St (Demo)", | |
36.905697, -116.76218, "", ""), | |
("AMV", "Amargosa Valley (Demo)", 36.641496, -116.40094, "", ""), | |
] | |
for stop_entry in stop_data: | |
stop = transitfeed.Stop() | |
(stop.stop_id, stop.stop_name, stop.stop_lat, stop.stop_lon, | |
stop.zone_id, stop.stop_code) = stop_entry | |
schedule.AddStopObject(stop) | |
stops.append(stop) | |
# Add a value to an unknown column and make sure it is preserved | |
schedule.GetStop("BULLFROG").stop_sound = "croak!" | |
trip_data = [ | |
("AB", "FULLW", "AB1", "to Bullfrog", "0", "1", None), | |
("AB", "FULLW", "AB2", "to Airport", "1", "2", None), | |
("STBA", "FULLW", "STBA", "Shuttle", None, None, None), | |
("CITY", "FULLW", "CITY1", None, "0", None, None), | |
("CITY", "FULLW", "CITY2", None, "1", None, None), | |
("BFC", "FULLW", "BFC1", "to Furnace Creek Resort", "0", "1", "BFC1S"), | |
("BFC", "FULLW", "BFC2", "to Bullfrog", "1", "2", None), | |
("AAMV", "WE", "AAMV1", "to Amargosa Valley", "0", None, None), | |
("AAMV", "WE", "AAMV2", "to Airport", "1", None, None), | |
("AAMV", "WE", "AAMV3", "to Amargosa Valley", "0", None, None), | |
("AAMV", "WE", "AAMV4", "to Airport", "1", None, None), | |
] | |
trips = [] | |
for trip_entry in trip_data: | |
trip = transitfeed.Trip() | |
(trip.route_id, trip.service_id, trip.trip_id, trip.trip_headsign, | |
trip.direction_id, trip.block_id, trip.shape_id) = trip_entry | |
trips.append(trip) | |
schedule.AddTripObject(trip) | |
stop_time_data = { | |
"STBA": [("6:00:00", "6:00:00", "STAGECOACH", None, None, None, None), | |
("6:20:00", "6:20:00", "BEATTY_AIRPORT", None, None, None, None)], | |
"CITY1": [("6:00:00", "6:00:00", "STAGECOACH", 1.34, 0, 0, "stop 1"), | |
("6:05:00", "6:07:00", "NANAA", 2.40, 1, 2, "stop 2"), | |
("6:12:00", "6:14:00", "NADAV", 3.0, 2, 2, "stop 3"), | |
("6:19:00", "6:21:00", "DADAN", 4, 2, 2, "stop 4"), | |
("6:26:00", "6:28:00", "EMSI", 5.78, 2, 3, "stop 5")], | |
"CITY2": [("6:28:00", "6:28:00", "EMSI", None, None, None, None), | |
("6:35:00", "6:37:00", "DADAN", None, None, None, None), | |
("6:42:00", "6:44:00", "NADAV", None, None, None, None), | |
("6:49:00", "6:51:00", "NANAA", None, None, None, None), | |
("6:56:00", "6:58:00", "STAGECOACH", None, None, None, None)], | |
"AB1": [("8:00:00", "8:00:00", "BEATTY_AIRPORT", None, None, None, None), | |
("8:10:00", "8:15:00", "BULLFROG", None, None, None, None)], | |
"AB2": [("12:05:00", "12:05:00", "BULLFROG", None, None, None, None), | |
("12:15:00", "12:15:00", "BEATTY_AIRPORT", None, None, None, None)], | |
"BFC1": [("8:20:00", "8:20:00", "BULLFROG", None, None, None, None), | |
("9:20:00", "9:20:00", "FUR_CREEK_RES", None, None, None, None)], | |
"BFC2": [("11:00:00", "11:00:00", "FUR_CREEK_RES", None, None, None, None), | |
("12:00:00", "12:00:00", "BULLFROG", None, None, None, None)], | |
"AAMV1": [("8:00:00", "8:00:00", "BEATTY_AIRPORT", None, None, None, None), | |
("9:00:00", "9:00:00", "AMV", None, None, None, None)], | |
"AAMV2": [("10:00:00", "10:00:00", "AMV", None, None, None, None), | |
("11:00:00", "11:00:00", "BEATTY_AIRPORT", None, None, None, None)], | |
"AAMV3": [("13:00:00", "13:00:00", "BEATTY_AIRPORT", None, None, None, None), | |
("14:00:00", "14:00:00", "AMV", None, None, None, None)], | |
"AAMV4": [("15:00:00", "15:00:00", "AMV", None, None, None, None), | |
("16:00:00", "16:00:00", "BEATTY_AIRPORT", None, None, None, None)], | |
} | |
for trip_id, stop_time_list in stop_time_data.items(): | |
for stop_time_entry in stop_time_list: | |
(arrival_time, departure_time, stop_id, shape_dist_traveled, | |
pickup_type, drop_off_type, stop_headsign) = stop_time_entry | |
trip = schedule.GetTrip(trip_id) | |
stop = schedule.GetStop(stop_id) | |
trip.AddStopTime(stop, arrival_time=arrival_time, | |
departure_time=departure_time, | |
shape_dist_traveled=shape_dist_traveled, | |
pickup_type=pickup_type, drop_off_type=drop_off_type, | |
stop_headsign=stop_headsign) | |
self.assertEqual(0, schedule.GetTrip("CITY1").GetStopTimes()[0].pickup_type) | |
self.assertEqual(1, schedule.GetTrip("CITY1").GetStopTimes()[1].pickup_type) | |
headway_data = [ | |
("STBA", "6:00:00", "22:00:00", 1800), | |
("CITY1", "6:00:00", "7:59:59", 1800), | |
("CITY2", "6:00:00", "7:59:59", 1800), | |
("CITY1", "8:00:00", "9:59:59", 600), | |
("CITY2", "8:00:00", "9:59:59", 600), | |
("CITY1", "10:00:00", "15:59:59", 1800), | |
("CITY2", "10:00:00", "15:59:59", 1800), | |
("CITY1", "16:00:00", "18:59:59", 600), | |
("CITY2", "16:00:00", "18:59:59", 600), | |
("CITY1", "19:00:00", "22:00:00", 1800), | |
("CITY2", "19:00:00", "22:00:00", 1800), | |
] | |
headway_trips = {} | |
for headway_entry in headway_data: | |
(trip_id, start_time, end_time, headway) = headway_entry | |
headway_trips[trip_id] = []  # record trip_id to check against later
trip = schedule.GetTrip(trip_id) | |
trip.AddHeadwayPeriod(start_time, end_time, headway, problems) | |
for trip_id in headway_trips: | |
headway_trips[trip_id] = \ | |
schedule.GetTrip(trip_id).GetHeadwayPeriodTuples() | |
fare_data = [ | |
("p", 1.25, "USD", 0, 0), | |
("a", 5.25, "USD", 0, 0), | |
] | |
fares = [] | |
for fare_entry in fare_data: | |
fare = transitfeed.Fare(fare_entry[0], fare_entry[1], fare_entry[2], | |
fare_entry[3], fare_entry[4]) | |
fares.append(fare) | |
schedule.AddFareObject(fare) | |
fare_rule_data = [ | |
("p", "AB", "zone-a", "zone-b", None), | |
("p", "STBA", "zone-a", None, "zone-c"), | |
("p", "BFC", None, "zone-b", "zone-a"), | |
("a", "AAMV", None, None, None), | |
] | |
for fare_id, route_id, orig_id, dest_id, contains_id in fare_rule_data: | |
rule = transitfeed.FareRule( | |
fare_id=fare_id, route_id=route_id, origin_id=orig_id, | |
destination_id=dest_id, contains_id=contains_id) | |
schedule.AddFareRuleObject(rule, problems) | |
schedule.Validate(problems) | |
problems.AssertNoMoreExceptions() | |
schedule.WriteGoogleTransitFeed(self.tempfilepath) | |
read_schedule = \ | |
transitfeed.Loader(self.tempfilepath, problems=problems, | |
extra_validation=True).Load() | |
e = problems.PopException("UnrecognizedColumn") | |
self.assertEqual(e.file_name, "agency.txt") | |
self.assertEqual(e.column_name, "agency_mission") | |
e = problems.PopException("UnrecognizedColumn") | |
self.assertEqual(e.file_name, "stops.txt") | |
self.assertEqual(e.column_name, "stop_sound") | |
problems.AssertNoMoreExceptions() | |
self.assertEqual(1, len(read_schedule.GetAgencyList())) | |
self.assertEqual(agency, read_schedule.GetAgency(agency.agency_id)) | |
self.assertEqual(len(routes), len(read_schedule.GetRouteList())) | |
for route in routes: | |
self.assertEqual(route, read_schedule.GetRoute(route.route_id)) | |
self.assertEqual(2, len(read_schedule.GetServicePeriodList())) | |
self.assertEqual(week_period, | |
read_schedule.GetServicePeriod(week_period.service_id)) | |
self.assertEqual(weekend_period, | |
read_schedule.GetServicePeriod(weekend_period.service_id)) | |
self.assertEqual(len(stops), len(read_schedule.GetStopList())) | |
for stop in stops: | |
self.assertEqual(stop, read_schedule.GetStop(stop.stop_id)) | |
self.assertEqual("croak!", read_schedule.GetStop("BULLFROG").stop_sound) | |
self.assertEqual(len(trips), len(read_schedule.GetTripList())) | |
for trip in trips: | |
self.assertEqual(trip, read_schedule.GetTrip(trip.trip_id)) | |
for trip_id in headway_trips: | |
self.assertEqual(headway_trips[trip_id], | |
read_schedule.GetTrip(trip_id).GetHeadwayPeriodTuples()) | |
for trip_id, stop_time_list in stop_time_data.items(): | |
trip = read_schedule.GetTrip(trip_id) | |
read_stoptimes = trip.GetStopTimes() | |
self.assertEqual(len(read_stoptimes), len(stop_time_list)) | |
for stop_time_entry, read_stoptime in zip(stop_time_list, read_stoptimes): | |
(arrival_time, departure_time, stop_id, shape_dist_traveled, | |
pickup_type, drop_off_type, stop_headsign) = stop_time_entry | |
self.assertEqual(stop_id, read_stoptime.stop_id) | |
self.assertEqual(read_schedule.GetStop(stop_id), read_stoptime.stop) | |
self.assertEqualTimeString(arrival_time, read_stoptime.arrival_time) | |
self.assertEqualTimeString(departure_time, read_stoptime.departure_time) | |
self.assertEqual(shape_dist_traveled, read_stoptime.shape_dist_traveled) | |
self.assertEqualWithDefault(pickup_type, read_stoptime.pickup_type, 0) | |
self.assertEqualWithDefault(drop_off_type, read_stoptime.drop_off_type, 0) | |
self.assertEqualWithDefault(stop_headsign, read_stoptime.stop_headsign, '') | |
self.assertEqual(len(fares), len(read_schedule.GetFareList())) | |
for fare in fares: | |
self.assertEqual(fare, read_schedule.GetFare(fare.fare_id)) | |
read_fare_rules_data = [] | |
for fare in read_schedule.GetFareList(): | |
for rule in fare.GetFareRuleList(): | |
self.assertEqual(fare.fare_id, rule.fare_id) | |
read_fare_rules_data.append((fare.fare_id, rule.route_id, | |
rule.origin_id, rule.destination_id, | |
rule.contains_id)) | |
fare_rule_data.sort() | |
read_fare_rules_data.sort() | |
self.assertEqual(len(read_fare_rules_data), len(fare_rule_data)) | |
for rf, f in zip(read_fare_rules_data, fare_rule_data): | |
self.assertEqual(rf, f) | |
self.assertEqual(1, len(read_schedule.GetShapeList())) | |
self.assertEqual(shape, read_schedule.GetShape(shape.shape_id)) | |
# TODO: test GetPattern | |
class DefaultAgencyTestCase(unittest.TestCase): | |
def freeAgency(self, ex=''): | |
agency = transitfeed.Agency() | |
agency.agency_id = 'agencytestid' + ex | |
agency.agency_name = 'Foo Bus Line' + ex | |
agency.agency_url = 'http://gofoo.com/' + ex | |
agency.agency_timezone='America/Los_Angeles' | |
return agency | |
def test_SetDefault(self): | |
schedule = transitfeed.Schedule() | |
agency = self.freeAgency() | |
schedule.SetDefaultAgency(agency) | |
self.assertEqual(agency, schedule.GetDefaultAgency()) | |
def test_NewDefaultAgency(self): | |
schedule = transitfeed.Schedule() | |
agency1 = schedule.NewDefaultAgency() | |
self.assertTrue(agency1.agency_id) | |
self.assertEqual(agency1.agency_id, schedule.GetDefaultAgency().agency_id) | |
self.assertEqual(1, len(schedule.GetAgencyList())) | |
agency2 = schedule.NewDefaultAgency() | |
self.assertTrue(agency2.agency_id) | |
self.assertEqual(agency2.agency_id, schedule.GetDefaultAgency().agency_id) | |
self.assertEqual(2, len(schedule.GetAgencyList())) | |
self.assertNotEqual(agency1, agency2) | |
self.assertNotEqual(agency1.agency_id, agency2.agency_id) | |
agency3 = schedule.NewDefaultAgency(agency_id='agency3', | |
agency_name='Agency 3', | |
agency_url='http://goagency') | |
self.assertEqual(agency3.agency_id, 'agency3') | |
self.assertEqual(agency3.agency_name, 'Agency 3') | |
self.assertEqual(agency3.agency_url, 'http://goagency') | |
self.assertEqual(agency3, schedule.GetDefaultAgency()) | |
self.assertEqual('agency3', schedule.GetDefaultAgency().agency_id) | |
self.assertEqual(3, len(schedule.GetAgencyList())) | |
def test_NoAgencyMakeNewDefault(self): | |
schedule = transitfeed.Schedule() | |
agency = schedule.GetDefaultAgency() | |
self.assertTrue(isinstance(agency, transitfeed.Agency)) | |
self.assertTrue(agency.agency_id) | |
self.assertEqual(1, len(schedule.GetAgencyList())) | |
self.assertEqual(agency, schedule.GetAgencyList()[0]) | |
self.assertEqual(agency.agency_id, schedule.GetAgencyList()[0].agency_id) | |
def test_AssumeSingleAgencyIsDefault(self): | |
schedule = transitfeed.Schedule() | |
agency1 = self.freeAgency() | |
schedule.AddAgencyObject(agency1) | |
agency2 = self.freeAgency('2') # don't add to schedule | |
# agency1 is default because it is the only Agency in schedule | |
self.assertEqual(agency1, schedule.GetDefaultAgency()) | |
def test_MultipleAgencyCausesNoDefault(self): | |
schedule = transitfeed.Schedule() | |
agency1 = self.freeAgency() | |
schedule.AddAgencyObject(agency1) | |
agency2 = self.freeAgency('2') | |
schedule.AddAgencyObject(agency2) | |
self.assertEqual(None, schedule.GetDefaultAgency()) | |
def test_OverwriteExistingAgency(self): | |
schedule = transitfeed.Schedule() | |
agency1 = self.freeAgency() | |
agency1.agency_id = '1' | |
schedule.AddAgencyObject(agency1) | |
agency2 = schedule.NewDefaultAgency() | |
# Make sure agency1 was not overwritten by the new default | |
self.assertEqual(agency1, schedule.GetAgency(agency1.agency_id)) | |
self.assertNotEqual('1', agency2.agency_id) | |
class FindUniqueIdTestCase(unittest.TestCase): | |
def test_simple(self): | |
d = {} | |
for i in range(0, 5): | |
d[transitfeed.FindUniqueId(d)] = 1 | |
k = d.keys() | |
k.sort() | |
self.assertEqual(('0', '1', '2', '3', '4'), tuple(k)) | |
def test_AvoidCollision(self): | |
d = {'1': 1} | |
d[transitfeed.FindUniqueId(d)] = 1 | |
self.assertEqual(2, len(d)) | |
self.assertFalse('2' in d, "Oops, next statement should add something to d")
d['2'] = None | |
d[transitfeed.FindUniqueId(d)] = 1 | |
self.assertEqual(4, len(d)) | |
class DefaultServicePeriodTestCase(unittest.TestCase): | |
def test_SetDefault(self): | |
schedule = transitfeed.Schedule() | |
service1 = transitfeed.ServicePeriod() | |
service1.SetDateHasService('20070101', True) | |
service1.service_id = 'SERVICE1' | |
schedule.SetDefaultServicePeriod(service1) | |
self.assertEqual(service1, schedule.GetDefaultServicePeriod()) | |
self.assertEqual(service1, schedule.GetServicePeriod(service1.service_id)) | |
def test_NewDefault(self): | |
schedule = transitfeed.Schedule() | |
service1 = schedule.NewDefaultServicePeriod() | |
self.assertTrue(service1.service_id) | |
schedule.GetServicePeriod(service1.service_id) | |
service1.SetDateHasService('20070101', True) # Make service1 different | |
service2 = schedule.NewDefaultServicePeriod() | |
schedule.GetServicePeriod(service2.service_id) | |
self.assertTrue(service1.service_id) | |
self.assertTrue(service2.service_id) | |
self.assertNotEqual(service1, service2) | |
self.assertNotEqual(service1.service_id, service2.service_id) | |
def test_NoServicesMakesNewDefault(self): | |
schedule = transitfeed.Schedule() | |
service1 = schedule.GetDefaultServicePeriod() | |
self.assertEqual(service1, schedule.GetServicePeriod(service1.service_id)) | |
def test_AssumeSingleServiceIsDefault(self): | |
schedule = transitfeed.Schedule() | |
service1 = transitfeed.ServicePeriod() | |
service1.SetDateHasService('20070101', True) | |
service1.service_id = 'SERVICE1' | |
schedule.AddServicePeriodObject(service1) | |
self.assertEqual(service1, schedule.GetDefaultServicePeriod()) | |
self.assertEqual(service1.service_id, schedule.GetDefaultServicePeriod().service_id) | |
def test_MultipleServicesCausesNoDefault(self): | |
schedule = transitfeed.Schedule() | |
service1 = transitfeed.ServicePeriod() | |
service1.service_id = 'SERVICE1' | |
service1.SetDateHasService('20070101', True) | |
schedule.AddServicePeriodObject(service1) | |
service2 = transitfeed.ServicePeriod() | |
service2.service_id = 'SERVICE2' | |
service2.SetDateHasService('20070201', True) | |
schedule.AddServicePeriodObject(service2) | |
service_d = schedule.GetDefaultServicePeriod() | |
self.assertEqual(service_d, None) | |
class GetTripTimeTestCase(unittest.TestCase): | |
"""Test for GetStopTimeTrips and GetTimeInterpolatedStops""" | |
def setUp(self): | |
problems = TestFailureProblemReporter(self) | |
schedule = transitfeed.Schedule(problem_reporter=problems) | |
self.schedule = schedule | |
schedule.AddAgency("Agency", "http://iflyagency.com", | |
"America/Los_Angeles") | |
service_period = schedule.GetDefaultServicePeriod() | |
service_period.SetDateHasService('20070101') | |
self.stop1 = schedule.AddStop(lng=140.01, lat=0, name="140.01,0") | |
self.stop2 = schedule.AddStop(lng=140.02, lat=0, name="140.02,0") | |
self.stop3 = schedule.AddStop(lng=140.03, lat=0, name="140.03,0") | |
self.stop4 = schedule.AddStop(lng=140.04, lat=0, name="140.04,0") | |
self.stop5 = schedule.AddStop(lng=140.05, lat=0, name="140.05,0") | |
self.route1 = schedule.AddRoute("1", "One", "Bus") | |
self.trip1 = self.route1.AddTrip(schedule, "trip 1", trip_id='trip1') | |
self.trip1.AddStopTime(self.stop1, schedule=schedule, departure_secs=100, arrival_secs=100) | |
self.trip1.AddStopTime(self.stop2, schedule=schedule) | |
self.trip1.AddStopTime(self.stop3, schedule=schedule) | |
# loop back to stop2 to test that interpolated stops work ok even when | |
# a stop between timepoints is further from the timepoint than the | |
# preceding one | |
self.trip1.AddStopTime(self.stop2, schedule=schedule) | |
self.trip1.AddStopTime(self.stop4, schedule=schedule, departure_secs=400, arrival_secs=400) | |
self.trip2 = self.route1.AddTrip(schedule, "trip 2", trip_id='trip2') | |
self.trip2.AddStopTime(self.stop2, schedule=schedule, departure_secs=500, arrival_secs=500) | |
self.trip2.AddStopTime(self.stop3, schedule=schedule, departure_secs=600, arrival_secs=600) | |
self.trip2.AddStopTime(self.stop4, schedule=schedule, departure_secs=700, arrival_secs=700) | |
self.trip2.AddStopTime(self.stop3, schedule=schedule, departure_secs=800, arrival_secs=800) | |
self.trip3 = self.route1.AddTrip(schedule, "trip 3", trip_id='trip3') | |
def testGetTimeInterpolatedStops(self): | |
rv = self.trip1.GetTimeInterpolatedStops() | |
self.assertEqual(5, len(rv)) | |
(secs, stoptimes, istimepoints) = tuple(zip(*rv)) | |
self.assertEqual((100, 160, 220, 280, 400), secs) | |
self.assertEqual(("140.01,0", "140.02,0", "140.03,0", "140.02,0", "140.04,0"), | |
tuple([st.stop.stop_name for st in stoptimes])) | |
self.assertEqual((True, False, False, False, True), istimepoints) | |
self.assertEqual([], self.trip3.GetTimeInterpolatedStops()) | |
def testGetTimeInterpolatedStopsUntimedEnd(self): | |
self.trip2.AddStopTime(self.stop3, schedule=self.schedule) | |
self.assertRaises(ValueError, self.trip2.GetTimeInterpolatedStops) | |
def testGetTimeInterpolatedStopsUntimedStart(self): | |
# Temporarily replace the problem reporter so that adding the first | |
# StopTime without a time doesn't throw an exception. | |
old_problems = self.schedule.problem_reporter | |
self.schedule.problem_reporter = TestFailureProblemReporter( | |
self, ("OtherProblem",)) | |
self.trip3.AddStopTime(self.stop3, schedule=self.schedule) | |
self.schedule.problem_reporter = old_problems | |
self.trip3.AddStopTime(self.stop2, schedule=self.schedule, | |
departure_secs=500, arrival_secs=500) | |
self.assertRaises(ValueError, self.trip3.GetTimeInterpolatedStops) | |
def testGetTimeInterpolatedStopsSingleStopTime(self): | |
self.trip3.AddStopTime(self.stop3, schedule=self.schedule, | |
departure_secs=500, arrival_secs=500) | |
rv = self.trip3.GetTimeInterpolatedStops() | |
self.assertEqual(1, len(rv)) | |
self.assertEqual(500, rv[0][0]) | |
self.assertEqual(True, rv[0][2]) | |
def testGetStopTimeTrips(self): | |
stopa = self.schedule.GetNearestStops(lon=140.03, lat=0)[0] | |
self.assertEqual("140.03,0", stopa.stop_name) # Got stop3? | |
rv = stopa.GetStopTimeTrips(self.schedule) | |
self.assertEqual(3, len(rv)) | |
(secs, trip_index, istimepoints) = tuple(zip(*rv)) | |
self.assertEqual((220, 600, 800), secs) | |
self.assertEqual(("trip1", "trip2", "trip2"), tuple([ti[0].trip_id for ti in trip_index])) | |
self.assertEqual((2, 1, 3), tuple([ti[1] for ti in trip_index])) | |
self.assertEqual((False, True, True), istimepoints) | |
def testStopTripIndex(self): | |
trip_index = self.stop3.trip_index | |
trip_ids = [t.trip_id for t, i in trip_index] | |
self.assertEqual(["trip1", "trip2", "trip2"], trip_ids) | |
self.assertEqual([2, 1, 3], [i for t, i in trip_index]) | |
def testGetTrips(self): | |
self.assertEqual(set([t.trip_id for t in self.stop1.GetTrips(self.schedule)]), | |
set([self.trip1.trip_id])) | |
self.assertEqual(set([t.trip_id for t in self.stop2.GetTrips(self.schedule)]), | |
set([self.trip1.trip_id, self.trip2.trip_id])) | |
self.assertEqual(set([t.trip_id for t in self.stop3.GetTrips(self.schedule)]), | |
set([self.trip1.trip_id, self.trip2.trip_id])) | |
self.assertEqual(set([t.trip_id for t in self.stop4.GetTrips(self.schedule)]), | |
set([self.trip1.trip_id, self.trip2.trip_id])) | |
self.assertEqual(set([t.trip_id for t in self.stop5.GetTrips(self.schedule)]), | |
set()) | |
class ApproximateDistanceBetweenStopsTestCase(unittest.TestCase): | |
def testEquator(self): | |
stop1 = transitfeed.Stop(lat=0, lng=100, | |
name='Stop one', stop_id='1') | |
stop2 = transitfeed.Stop(lat=0.01, lng=100.01, | |
name='Stop two', stop_id='2') | |
self.assertAlmostEqual( | |
transitfeed.ApproximateDistanceBetweenStops(stop1, stop2), | |
1570, -1) # places=-1 allows a difference of up to ~5 meters | |
def testWhati(self): | |
stop1 = transitfeed.Stop(lat=63.1, lng=-117.2, | |
name='Stop whati one', stop_id='1') | |
stop2 = transitfeed.Stop(lat=63.102, lng=-117.201, | |
name='Stop whati two', stop_id='2') | |
self.assertAlmostEqual( | |
transitfeed.ApproximateDistanceBetweenStops(stop1, stop2), | |
228, 0) | |
class TimeConversionHelpersTestCase(unittest.TestCase): | |
def testTimeToSecondsSinceMidnight(self): | |
self.assertEqual(transitfeed.TimeToSecondsSinceMidnight("01:02:03"), 3723) | |
self.assertEqual(transitfeed.TimeToSecondsSinceMidnight("00:00:00"), 0) | |
self.assertEqual(transitfeed.TimeToSecondsSinceMidnight("25:24:23"), 91463) | |
try: | |
transitfeed.TimeToSecondsSinceMidnight("10:15:00am") | |
except transitfeed.Error: | |
pass # expected | |
else: | |
self.fail("Should have thrown Error") | |
def testFormatSecondsSinceMidnight(self): | |
self.assertEqual(transitfeed.FormatSecondsSinceMidnight(3723), "01:02:03") | |
self.assertEqual(transitfeed.FormatSecondsSinceMidnight(0), "00:00:00") | |
self.assertEqual(transitfeed.FormatSecondsSinceMidnight(91463), "25:24:23") | |
def testDateStringToDateObject(self): | |
self.assertEqual(transitfeed.DateStringToDateObject("20080901"), | |
datetime.date(2008, 9, 1)) | |
try: | |
transitfeed.DateStringToDateObject("20080841") | |
except ValueError: | |
pass # expected | |
else: | |
self.fail("Should have thrown ValueError") | |
class NonNegIntStringToIntTestCase(unittest.TestCase): | |
def runTest(self): | |
self.assertEqual(0, transitfeed.NonNegIntStringToInt("0")) | |
self.assertEqual(0, transitfeed.NonNegIntStringToInt(u"0")) | |
self.assertEqual(1, transitfeed.NonNegIntStringToInt("1")) | |
self.assertEqual(2, transitfeed.NonNegIntStringToInt("2")) | |
self.assertEqual(10, transitfeed.NonNegIntStringToInt("10")) | |
self.assertEqual(1234567890123456789, | |
transitfeed.NonNegIntStringToInt("1234567890123456789")) | |
self.assertRaises(ValueError, transitfeed.NonNegIntStringToInt, "") | |
self.assertRaises(ValueError, transitfeed.NonNegIntStringToInt, "-1") | |
self.assertRaises(ValueError, transitfeed.NonNegIntStringToInt, "+1") | |
self.assertRaises(ValueError, transitfeed.NonNegIntStringToInt, "01") | |
self.assertRaises(ValueError, transitfeed.NonNegIntStringToInt, "00") | |
self.assertRaises(ValueError, transitfeed.NonNegIntStringToInt, "0x1") | |
self.assertRaises(ValueError, transitfeed.NonNegIntStringToInt, "1.0") | |
self.assertRaises(ValueError, transitfeed.NonNegIntStringToInt, "1e1") | |
self.assertRaises(TypeError, transitfeed.NonNegIntStringToInt, 1) | |
self.assertRaises(TypeError, transitfeed.NonNegIntStringToInt, None) | |
class GetHeadwayTimesTestCase(unittest.TestCase): | |
"""Test for GetHeadwayStartTimes and GetHeadwayStopTimes""" | |
def setUp(self): | |
problems = TestFailureProblemReporter(self) | |
schedule = transitfeed.Schedule(problem_reporter=problems) | |
self.schedule = schedule | |
schedule.AddAgency("Agency", "http://iflyagency.com", | |
"America/Los_Angeles") | |
service_period = schedule.GetDefaultServicePeriod() | |
service_period.SetStartDate("20080101") | |
service_period.SetEndDate("20090101") | |
service_period.SetWeekdayService(True) | |
self.stop1 = schedule.AddStop(lng=140.01, lat=0, name="140.01,0") | |
self.stop2 = schedule.AddStop(lng=140.02, lat=0, name="140.02,0") | |
self.stop3 = schedule.AddStop(lng=140.03, lat=0, name="140.03,0") | |
self.stop4 = schedule.AddStop(lng=140.04, lat=0, name="140.04,0") | |
self.stop5 = schedule.AddStop(lng=140.05, lat=0, name="140.05,0") | |
self.route1 = schedule.AddRoute("1", "One", "Bus") | |
self.trip1 = self.route1.AddTrip(schedule, "trip 1", trip_id="trip1") | |
# add different types of stop times | |
self.trip1.AddStopTime(self.stop1, arrival_time="17:00:00", departure_time="17:01:00") # both arrival and departure time | |
self.trip1.AddStopTime(self.stop2, schedule=schedule) # non timed | |
self.trip1.AddStopTime(self.stop3, stop_time="17:45:00") # only stop_time | |
# add headways starting before the trip | |
self.trip1.AddHeadwayPeriod("16:00:00","18:00:00",1800) # each 30 min | |
self.trip1.AddHeadwayPeriod("18:00:00","20:00:00",2700) # each 45 min | |
def testGetHeadwayStartTimes(self): | |
start_times = self.trip1.GetHeadwayStartTimes() | |
self.assertEqual( | |
["16:00:00", "16:30:00", "17:00:00", "17:30:00", | |
"18:00:00", "18:45:00", "19:30:00"], | |
[transitfeed.FormatSecondsSinceMidnight(secs) for secs in start_times]) | |
def testGetHeadwayStopTimes(self): | |
stoptimes_list = self.trip1.GetHeadwayStopTimes() | |
arrival_secs = [] | |
departure_secs = [] | |
for stoptimes in stoptimes_list: | |
arrival_secs.append([st.arrival_secs for st in stoptimes]) | |
departure_secs.append([st.departure_secs for st in stoptimes]) | |
self.assertEqual(([57600,None,60300],[59400,None,62100],[61200,None,63900], | |
[63000,None,65700],[64800,None,67500],[67500,None,70200], | |
[70200,None,72900]), | |
tuple(arrival_secs)) | |
self.assertEqual(([57660,None,60300],[59460,None,62100],[61260,None,63900], | |
[63060,None,65700],[64860,None,67500],[67560,None,70200], | |
[70260,None,72900]), | |
tuple(departure_secs)) | |
# test that stoptimes are created with the same parameters as the ones from the original trip | |
stoptimes = self.trip1.GetStopTimes() | |
for stoptimes_clone in stoptimes_list: | |
self.assertEqual(len(stoptimes_clone), len(stoptimes)) | |
for st_clone, st in zip(stoptimes_clone, stoptimes): | |
for name in st.__slots__: | |
if name not in ('arrival_secs', 'departure_secs'): | |
self.assertEqual(getattr(st, name), getattr(st_clone, name)) | |
class ServiceGapsTestCase(MemoryZipTestCase): | |
def setUp(self): | |
super(ServiceGapsTestCase, self).setUp() | |
self.zip.writestr("calendar.txt", | |
"service_id,monday,tuesday,wednesday,thursday,friday," | |
"saturday,sunday,start_date,end_date\n" | |
"FULLW,1,1,1,1,1,1,1,20090601,20090610\n" | |
"WE,0,0,0,0,0,1,1,20090718,20101231\n") | |
self.zip.writestr("calendar_dates.txt", | |
"service_id,date,exception_type\n" | |
"WE,20090815,2\n" | |
"WE,20090816,2\n" | |
"WE,20090822,2\n" | |
# The following two lines are a 12-day service gap. | |
# Shouldn't issue a warning | |
"WE,20090829,2\n" | |
"WE,20090830,2\n" | |
"WE,20100102,2\n" | |
"WE,20100103,2\n" | |
"WE,20100109,2\n" | |
"WE,20100110,2\n" | |
"WE,20100612,2\n" | |
"WE,20100613,2\n" | |
"WE,20100619,2\n" | |
"WE,20100620,2\n") | |
self.zip.writestr("trips.txt", | |
"route_id,service_id,trip_id\n" | |
"AB,WE,AB1\n" | |
"AB,FULLW,AB2\n") | |
self.zip.writestr( | |
"stop_times.txt", | |
"trip_id,arrival_time,departure_time,stop_id,stop_sequence\n" | |
"AB1,10:00:00,10:00:00,BEATTY_AIRPORT,1\n" | |
"AB1,10:20:00,10:20:00,BULLFROG,2\n" | |
"AB2,10:25:00,10:25:00,STAGECOACH,1\n" | |
"AB2,10:55:00,10:55:00,BULLFROG,2\n") | |
loader = transitfeed.Loader( | |
problems=self.problems, | |
extra_validation=False, | |
zip=self.zip) | |
self.schedule = loader.Load() | |
# If there is a service gap starting before today, and today has no service, | |
# it should be found - even if tomorrow there is service | |
def testServiceGapBeforeTodayIsDiscovered(self): | |
self.schedule.Validate(today=date(2009, 7, 17), | |
service_gap_interval=13) | |
exception = self.problems.PopException("TooManyDaysWithoutService") | |
self.assertEquals(date(2009, 7, 5), | |
exception.first_day_without_service) | |
self.assertEquals(date(2009, 7, 17), | |
exception.last_day_without_service) | |
self.AssertCommonExceptions(date(2010, 6, 25)) | |
# If today has service, past service gaps should not appear | |
def testNoServiceGapBeforeTodayIfTodayHasService(self): | |
self.schedule.Validate(today=date(2009, 7, 18), | |
service_gap_interval=13) | |
self.AssertCommonExceptions(date(2010, 6, 25)) | |
# If the feed starts today, NO previous service gap should be found, | |
# even if today does not have service | |
def testNoServiceGapBeforeTodayIfTheFeedStartsToday(self): | |
self.schedule.Validate(today=date(2009, 06, 01), | |
service_gap_interval=13) | |
# This service gap is the one between FULLW and WE | |
exception = self.problems.PopException("TooManyDaysWithoutService") | |
self.assertEquals(date(2009, 6, 11), | |
exception.first_day_without_service) | |
self.assertEquals(date(2009, 7, 17), | |
exception.last_day_without_service) | |
# The one-year period ends before the June 2010 gap, so that last | |
# service gap should _not_ be found | |
self.AssertCommonExceptions(None) | |
# If there is a gap at the end of the one-year period we should find it | |
def testGapAtTheEndOfTheOneYearPeriodIsDiscovered(self): | |
self.schedule.Validate(today=date(2009, 06, 22), | |
service_gap_interval=13) | |
# This service gap is the one between FULLW and WE | |
exception = self.problems.PopException("TooManyDaysWithoutService") | |
self.assertEquals(date(2009, 6, 11), | |
exception.first_day_without_service) | |
self.assertEquals(date(2009, 7, 17), | |
exception.last_day_without_service) | |
self.AssertCommonExceptions(date(2010, 6, 21)) | |
# If we are right in the middle of a big service gap it should be | |
# reported as starting on "today - 12 days" and lasting until | |
# service resumes | |
def testCurrentServiceGapIsDiscovered(self): | |
self.schedule.Validate(today=date(2009, 6, 30), | |
service_gap_interval=13) | |
exception = self.problems.PopException("TooManyDaysWithoutService") | |
self.assertEquals(date(2009, 6, 18), | |
exception.first_day_without_service) | |
self.assertEquals(date(2009, 7, 17), | |
exception.last_day_without_service) | |
self.AssertCommonExceptions(date(2010, 6, 25)) | |
# Asserts the service gaps that appear towards the end of the calendar | |
# and which are common to all the tests | |
def AssertCommonExceptions(self, last_exception_date): | |
exception = self.problems.PopException("TooManyDaysWithoutService") | |
self.assertEquals(date(2009, 8, 10), | |
exception.first_day_without_service) | |
self.assertEquals(date(2009, 8, 22), | |
exception.last_day_without_service) | |
exception = self.problems.PopException("TooManyDaysWithoutService") | |
self.assertEquals(date(2009, 12, 28), | |
exception.first_day_without_service) | |
self.assertEquals(date(2010, 1, 15), | |
exception.last_day_without_service) | |
if last_exception_date is not None: | |
exception = self.problems.PopException("TooManyDaysWithoutService") | |
self.assertEquals(date(2010, 6, 7), | |
exception.first_day_without_service) | |
self.assertEquals(last_exception_date, | |
exception.last_day_without_service) | |
self.problems.AssertNoMoreExceptions() | |
if __name__ == '__main__': | |
unittest.main() | |
Binary files a/origin-src/transitfeed-1.2.5/test/testtransitfeed.pyc and /dev/null differ
#!/usr/bin/python2.4 | |
# | |
# Copyright (C) 2009 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""Tests for unusual_trip_filter.py""" | |
__author__ = 'Jiri Semecky <jiri.semecky@gmail.com>' | |
import unusual_trip_filter | |
import transitfeed | |
import unittest | |
import util | |
class UnusualTripFilterTestCase(util.TempDirTestCaseBase): | |
"""Test of unusual trip filter functionality.""" | |
def testFilter(self): | |
"""Test if filtering works properly.""" | |
expected_values = { | |
'CITY1':0, 'CITY2':0, 'CITY3':0, 'CITY4' :0, 'CITY5' :0, 'CITY6' :0, | |
'CITY7':0, 'CITY8':0, 'CITY9':0, 'CITY10':0, 'CITY11':1, 'CITY12':1, | |
} | |
filter = unusual_trip_filter.UnusualTripFilter(0.1, quiet=True) | |
input = self.GetPath('test', 'data', 'filter_unusual_trips') | |
loader = transitfeed.Loader(input, extra_validation=True) | |
schedule = loader.Load() | |
filter.filter(schedule) | |
for trip_id, expected_trip_type in expected_values.items(): | |
actual_trip_type = schedule.trips[trip_id]['trip_type'] | |
try: | |
self.assertEquals(int(actual_trip_type), expected_trip_type) | |
except ValueError: | |
self.assertEquals(actual_trip_type, '') | |
def testFilterNoForceFilter(self): | |
"""Test that force==False doesn't set default values""" | |
filter = unusual_trip_filter.UnusualTripFilter(0.1, force=False, quiet=True) | |
input = self.GetPath('test', 'data', 'filter_unusual_trips') | |
loader = transitfeed.Loader(input, extra_validation=True) | |
schedule = loader.Load() | |
schedule.trips['CITY2'].trip_type = 'odd-trip' | |
filter.filter(schedule) | |
trip1 = schedule.trips['CITY1'] | |
self.assertEquals(trip1['trip_type'], '') | |
trip2 = schedule.trips['CITY2'] | |
self.assertEquals(trip2['trip_type'], 'odd-trip') | |
def testFilterForceFilter(self): | |
"""Test that force==True does set default values""" | |
filter = unusual_trip_filter.UnusualTripFilter(0.1, force=True, quiet=False) | |
input = self.GetPath('test', 'data', 'filter_unusual_trips') | |
loader = transitfeed.Loader(input, extra_validation=True) | |
schedule = loader.Load() | |
schedule.trips['CITY2'].trip_type = 'odd-trip' | |
filter.filter(schedule) | |
trip1 = schedule.trips['CITY1'] | |
self.assertEquals(trip1['trip_type'], '0') | |
trip2 = schedule.trips['CITY2'] | |
self.assertEquals(trip2['trip_type'], '0') | |
def testFilterAppliedForSpecifiedRouteType(self): | |
"""Setting integer route_type filters trips of this route type.""" | |
filter = unusual_trip_filter.UnusualTripFilter(0.1, quiet=True, | |
route_type=3) | |
input = self.GetPath('test', 'data', 'filter_unusual_trips') | |
loader = transitfeed.Loader(input, extra_validation=True) | |
schedule = loader.Load() | |
filter.filter(schedule) | |
actual_trip_type = schedule.trips['CITY11']['trip_type'] | |
self.assertEquals(actual_trip_type, '1') | |
def testFilterNotAppliedForUnspecifiedRouteType(self): | |
"""Setting integer route_type filters trips of this route type.""" | |
filter = unusual_trip_filter.UnusualTripFilter(0.1, quiet=True, | |
route_type=2) | |
input = self.GetPath('test', 'data', 'filter_unusual_trips') | |
loader = transitfeed.Loader(input, extra_validation=True) | |
schedule = loader.Load() | |
filter.filter(schedule) | |
actual_trip_type = schedule.trips['CITY11']['trip_type'] | |
self.assertEquals(actual_trip_type, '') | |
def testFilterAppliedForRouteTypeSpecifiedByName(self): | |
"""Setting integer route_type filters trips of this route type.""" | |
filter = unusual_trip_filter.UnusualTripFilter(0.1, quiet=True, | |
route_type='Bus') | |
input = self.GetPath('test', 'data', 'filter_unusual_trips') | |
loader = transitfeed.Loader(input, extra_validation=True) | |
schedule = loader.Load() | |
filter.filter(schedule) | |
actual_trip_type = schedule.trips['CITY11']['trip_type'] | |
self.assertEquals(actual_trip_type, '1') | |
def testFilterNotAppliedForDifferentRouteTypeSpecifiedByName(self): | |
"""Setting integer route_type filters trips of this route type.""" | |
filter = unusual_trip_filter.UnusualTripFilter(0.1, quiet=True, | |
route_type='Ferry') | |
input = self.GetPath('test', 'data', 'filter_unusual_trips') | |
loader = transitfeed.Loader(input, extra_validation=True) | |
schedule = loader.Load() | |
filter.filter(schedule) | |
actual_trip_type = schedule.trips['CITY11']['trip_type'] | |
self.assertEquals(actual_trip_type, '') | |
if __name__ == '__main__': | |
unittest.main() | |
Binary files a/origin-src/transitfeed-1.2.5/test/testunusual_trip_filter.pyc and /dev/null differ
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
# Code shared between tests. | |
import os | |
import os.path | |
import re | |
import cStringIO as StringIO | |
import shutil | |
import subprocess | |
import sys | |
import tempfile | |
import traceback | |
import transitfeed | |
import unittest | |
def check_call(cmd, expected_retcode=0, stdin_str="", **kwargs): | |
"""Convenience function that is in the docs for subprocess but not | |
installed on my system. Raises an Exception if the return code is not | |
expected_retcode. Returns a tuple of strings, (stdout, stderr).""" | |
try: | |
if 'stdout' in kwargs or 'stderr' in kwargs or 'stdin' in kwargs: | |
raise Exception("Don't pass stdout or stderr") | |
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, | |
stderr=subprocess.PIPE, stdin=subprocess.PIPE, | |
**kwargs) | |
(out, err) = p.communicate(stdin_str) | |
retcode = p.returncode | |
except Exception, e: | |
raise Exception("When running %s: %s" % (cmd, e)) | |
if retcode < 0: | |
raise Exception( | |
"Child '%s' was terminated by signal %d. Output:\n%s\n%s\n" % | |
(cmd, -retcode, out, err)) | |
elif retcode != expected_retcode: | |
raise Exception( | |
"Child '%s' returned %d. Output:\n%s\n%s\n" % | |
(cmd, retcode, out, err)) | |
return (out, err) | |
class TestCaseAsserts(unittest.TestCase): | |
def assertMatchesRegex(self, regex, string): | |
"""Assert that regex is found in string.""" | |
if not re.search(regex, string): | |
self.fail("string %r did not match regex %r" % (string, regex)) | |
class GetPathTestCase(TestCaseAsserts): | |
"""TestCase with method to get paths to files in the distribution.""" | |
def setUp(self): | |
TestCaseAsserts.setUp(self) | |
self._origcwd = os.getcwd() | |
def GetExamplePath(self, name): | |
"""Return the full path of a file in the examples directory""" | |
return self.GetPath('examples', name) | |
def GetTestDataPath(self, *path): | |
"""Return the full path of a file in the test/data directory""" | |
return self.GetPath('test', 'data', *path) | |
def GetPath(self, *path): | |
"""Return absolute path of path. path is relative main source directory.""" | |
here = os.path.dirname(__file__) # Relative to _origcwd | |
return os.path.join(self._origcwd, here, '..', *path) | |
class TempDirTestCaseBase(GetPathTestCase): | |
"""Make a temporary directory the current directory before running the test | |
and remove it after the test. | |
""" | |
def setUp(self): | |
GetPathTestCase.setUp(self) | |
self.tempdirpath = tempfile.mkdtemp() | |
os.chdir(self.tempdirpath) | |
def tearDown(self): | |
os.chdir(self._origcwd) | |
shutil.rmtree(self.tempdirpath) | |
GetPathTestCase.tearDown(self) | |
def CheckCallWithPath(self, cmd, expected_retcode=0, stdin_str=""): | |
"""Run python script cmd[0] with args cmd[1:], making sure 'import | |
transitfeed' will use the module in this source tree. Raises an Exception | |
if the return code is not expected_retcode. Returns a tuple of strings, | |
(stdout, stderr).""" | |
tf_path = transitfeed.__file__ | |
# Path of the directory containing transitfeed. When this is added to | |
# sys.path importing transitfeed should work independent of whether | |
# transitfeed.__file__ is <parent>/transitfeed.py or | |
# <parent>/transitfeed/__init__.py | |
transitfeed_parent = tf_path[:tf_path.rfind("transitfeed")] | |
transitfeed_parent = transitfeed_parent.replace("\\", "/").rstrip("/") | |
script_path = cmd[0].replace("\\", "/") | |
script_args = cmd[1:] | |
# Propagate sys.path of this process to the subprocess. This is done | |
# because I assume that if this process has a customized sys.path it is | |
# meant to be used for all processes involved in the tests. The downside | |
# of this is that the subprocess is no longer a clean version of what you | |
# get when running "python" after installing transitfeed. Hopefully if this | |
# process uses a customized sys.path you know what you are doing. | |
env = {"PYTHONPATH": ":".join(sys.path)} | |
# Instead of directly running the script make sure that the transitfeed | |
# module in this source directory is at the front of sys.path. Then | |
# adjust sys.argv so it looks like the script was run directly. This lets | |
# OptionParser use the correct value for %proj. | |
cmd = [sys.executable, "-c", | |
"import sys; " | |
"sys.path.insert(0,'%s'); " | |
"sys.argv = ['%s'] + sys.argv[1:]; " | |
"exec(open('%s'))" % | |
(transitfeed_parent, script_path, script_path)] + script_args | |
return check_call(cmd, expected_retcode=expected_retcode, shell=False, | |
env=env, stdin_str=stdin_str) | |
class RecordingProblemReporter(transitfeed.ProblemReporterBase): | |
"""Save all problems for later inspection. | |
Args: | |
test_case: a unittest.TestCase object on which to report problems | |
ignore_types: sequence of string type names that will be ignored by the | |
ProblemReporter""" | |
def __init__(self, test_case, ignore_types=None): | |
transitfeed.ProblemReporterBase.__init__(self) | |
self.exceptions = [] | |
self._test_case = test_case | |
self._ignore_types = ignore_types or set() | |
def _Report(self, e): | |
# Ensure that these don't crash | |
e.FormatProblem() | |
e.FormatContext() | |
if e.__class__.__name__ in self._ignore_types: | |
return | |
# Keep the 7 nearest stack frames. This should be enough to identify | |
# the code path that created the exception while trimming off most of the | |
# large test framework's stack. | |
traceback_list = traceback.format_list(traceback.extract_stack()[-7:-1]) | |
self.exceptions.append((e, ''.join(traceback_list))) | |
def PopException(self, type_name): | |
"""Return the first exception, which must be a type_name.""" | |
e = self.exceptions.pop(0) | |
e_name = e[0].__class__.__name__ | |
self._test_case.assertEqual(e_name, type_name, | |
"%s != %s\n%s" % | |
(e_name, type_name, self.FormatException(*e))) | |
return e[0] | |
def FormatException(self, exce, tb): | |
return ("%s\nwith gtfs file context %s\nand traceback\n%s" % | |
(exce.FormatProblem(), exce.FormatContext(), tb)) | |
def AssertNoMoreExceptions(self): | |
exceptions_as_text = [] | |
for e, tb in self.exceptions: | |
exceptions_as_text.append(self.FormatException(e, tb)) | |
self._test_case.assertFalse(self.exceptions, "\n".join(exceptions_as_text)) | |
def PopInvalidValue(self, column_name, file_name=None): | |
e = self.PopException("InvalidValue") | |
self._test_case.assertEquals(column_name, e.column_name) | |
if file_name: | |
self._test_case.assertEquals(file_name, e.file_name) | |
return e | |
def PopMissingValue(self, column_name, file_name=None): | |
e = self.PopException("MissingValue") | |
self._test_case.assertEquals(column_name, e.column_name) | |
if file_name: | |
self._test_case.assertEquals(file_name, e.file_name) | |
return e | |
def PopDuplicateColumn(self, file_name, header, count): | |
e = self.PopException("DuplicateColumn") | |
self._test_case.assertEquals(file_name, e.file_name) | |
self._test_case.assertEquals(header, e.header) | |
self._test_case.assertEquals(count, e.count) | |
return e | |
Binary files a/origin-src/transitfeed-1.2.5/test/util.pyc and /dev/null differ
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""Expose some modules in this package. | |
Before transitfeed version 1.2.4 all our library code was distributed in a | |
single-file module, transitfeed.py, and could be used as
import transitfeed | |
schedule = transitfeed.Schedule() | |
At that time the module (one file, transitfeed.py) was converted into a | |
package (a directory named transitfeed containing __init__.py and multiple .py | |
files). Classes and attributes exposed by the old module may still be imported | |
in the same way. Indeed, code that depends on the library *should*
continue to use import commands such as the above and ignore _transitfeed. | |
""" | |
from _transitfeed import * | |
__version__ = _transitfeed.__version__ | |
Binary files a/origin-src/transitfeed-1.2.5/transitfeed/__init__.pyc and /dev/null differ
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""Easy interface for handling a Google Transit Feed file. | |
Do not import this module directly. Thanks to __init__.py you should do | |
something like: | |
import transitfeed | |
schedule = transitfeed.Schedule() | |
... | |
This module is a library to help you create, read and write Google | |
Transit Feed files. Refer to the feed specification, available at | |
http://code.google.com/transit/spec/transit_feed_specification.htm, for a | |
complete description of how the transit feed represents a transit schedule. This
library supports all required parts of the specification but does not yet | |
support all optional parts. Patches welcome! | |
The specification describes several tables such as stops, routes and trips. | |
In a feed file these are stored as comma separated value files. This library
represents each row of these tables with a single Python object. This object has | |
attributes for each value on the row. For example, schedule.AddStop returns a | |
Stop object which has attributes such as stop_lat and stop_name. | |
Schedule: Central object of the parser | |
GenericGTFSObject: A base class for each of the objects below | |
Route: Represents a single route | |
Trip: Represents a single trip | |
Stop: Represents a single stop | |
ServicePeriod: Represents a single service, a set of dates | |
Agency: Represents the agency in this feed | |
Transfer: Represents a single transfer rule | |
TimeToSecondsSinceMidnight(): Convert HH:MM:SS into seconds since midnight. | |
FormatSecondsSinceMidnight(s): Formats number of seconds past midnight into a string | |
""" | |
# TODO: Preserve arbitrary columns? | |
import bisect | |
import cStringIO as StringIO | |
import codecs | |
from transitfeed.util import defaultdict | |
import csv | |
import datetime | |
import logging | |
import math | |
import os | |
import random | |
try: | |
import sqlite3 as sqlite | |
except ImportError: | |
from pysqlite2 import dbapi2 as sqlite | |
import re | |
import tempfile | |
import time | |
import warnings | |
# Objects in a schedule (Route, Trip, etc) should not keep a strong reference | |
# to the Schedule object to avoid a reference cycle. Schedule needs to use | |
# __del__ to cleanup its temporary file. The garbage collector can't handle | |
# reference cycles containing objects with custom cleanup code. | |
import weakref | |
import zipfile | |
OUTPUT_ENCODING = 'utf-8' | |
MAX_DISTANCE_FROM_STOP_TO_SHAPE = 1000 | |
MAX_DISTANCE_BETWEEN_STOP_AND_PARENT_STATION_WARNING = 100.0 | |
MAX_DISTANCE_BETWEEN_STOP_AND_PARENT_STATION_ERROR = 1000.0 | |
__version__ = '1.2.5' | |
def EncodeUnicode(text): | |
""" | |
Optionally encode text and return it. The result should be safe to print. | |
""" | |
if type(text) == type(u''): | |
return text.encode(OUTPUT_ENCODING) | |
else: | |
return text | |
# These are used to distinguish between errors (not allowed by the spec) | |
# and warnings (not recommended) when reporting issues. | |
TYPE_ERROR = 0 | |
TYPE_WARNING = 1 | |
class ProblemReporterBase: | |
"""Base class for problem reporters. Tracks the current context and creates | |
an exception object for each problem. Subclasses must implement | |
_Report(self, e)""" | |
def __init__(self): | |
self.ClearContext() | |
def ClearContext(self): | |
"""Clear any previous context.""" | |
self._context = None | |
def SetFileContext(self, file_name, row_num, row, headers): | |
"""Save the current context to be output with any errors. | |
Args: | |
file_name: string | |
row_num: int | |
row: list of strings | |
headers: list of column headers, its order corresponding to row's | |
""" | |
self._context = (file_name, row_num, row, headers) | |
def FeedNotFound(self, feed_name, context=None): | |
e = FeedNotFound(feed_name=feed_name, context=context, | |
context2=self._context) | |
self._Report(e) | |
def UnknownFormat(self, feed_name, context=None): | |
e = UnknownFormat(feed_name=feed_name, context=context, | |
context2=self._context) | |
self._Report(e) | |
def FileFormat(self, problem, context=None): | |
e = FileFormat(problem=problem, context=context, | |
context2=self._context) | |
self._Report(e) | |
def MissingFile(self, file_name, context=None): | |
e = MissingFile(file_name=file_name, context=context, | |
context2=self._context) | |
self._Report(e) | |
def UnknownFile(self, file_name, context=None): | |
e = UnknownFile(file_name=file_name, context=context, | |
context2=self._context, type=TYPE_WARNING) | |
self._Report(e) | |
def EmptyFile(self, file_name, context=None): | |
e = EmptyFile(file_name=file_name, context=context, | |
context2=self._context) | |
self._Report(e) | |
def MissingColumn(self, file_name, column_name, context=None): | |
e = MissingColumn(file_name=file_name, column_name=column_name, | |
context=context, context2=self._context) | |
self._Report(e) | |
def UnrecognizedColumn(self, file_name, column_name, context=None): | |
e = UnrecognizedColumn(file_name=file_name, column_name=column_name, | |
context=context, context2=self._context, | |
type=TYPE_WARNING) | |
self._Report(e) | |
def CsvSyntax(self, description=None, context=None, type=TYPE_ERROR): | |
e = CsvSyntax(description=description, context=context, | |
context2=self._context, type=type) | |
self._Report(e) | |
def DuplicateColumn(self, file_name, header, count, type=TYPE_ERROR, | |
context=None): | |
e = DuplicateColumn(file_name=file_name, | |
header=header, | |
count=count, | |
type=type, | |
context=context, | |
context2=self._context) | |
self._Report(e) | |
def MissingValue(self, column_name, reason=None, context=None): | |
e = MissingValue(column_name=column_name, reason=reason, context=context, | |
context2=self._context) | |
self._Report(e) | |
def InvalidValue(self, column_name, value, reason=None, context=None, | |
type=TYPE_ERROR): | |
e = InvalidValue(column_name=column_name, value=value, reason=reason, | |
context=context, context2=self._context, type=type) | |
self._Report(e) | |
def DuplicateID(self, column_names, values, context=None, type=TYPE_ERROR): | |
if isinstance(column_names, tuple): | |
column_names = '(' + ', '.join(column_names) + ')' | |
if isinstance(values, tuple): | |
values = '(' + ', '.join(values) + ')' | |
e = DuplicateID(column_name=column_names, value=values, | |
context=context, context2=self._context, type=type) | |
self._Report(e) | |
def UnusedStop(self, stop_id, stop_name, context=None): | |
e = UnusedStop(stop_id=stop_id, stop_name=stop_name, | |
context=context, context2=self._context, type=TYPE_WARNING) | |
self._Report(e) | |
def UsedStation(self, stop_id, stop_name, context=None): | |
e = UsedStation(stop_id=stop_id, stop_name=stop_name, | |
context=context, context2=self._context, type=TYPE_ERROR) | |
self._Report(e) | |
def StopTooFarFromParentStation(self, stop_id, stop_name, parent_stop_id, | |
parent_stop_name, distance, | |
type=TYPE_WARNING, context=None): | |
e = StopTooFarFromParentStation( | |
stop_id=stop_id, stop_name=stop_name, | |
parent_stop_id=parent_stop_id, | |
parent_stop_name=parent_stop_name, distance=distance, | |
context=context, context2=self._context, type=type) | |
self._Report(e) | |
def StopsTooClose(self, stop_name_a, stop_id_a, stop_name_b, stop_id_b, | |
distance, type=TYPE_WARNING, context=None): | |
e = StopsTooClose( | |
stop_name_a=stop_name_a, stop_id_a=stop_id_a, stop_name_b=stop_name_b, | |
stop_id_b=stop_id_b, distance=distance, context=context, | |
context2=self._context, type=type) | |
self._Report(e) | |
def StationsTooClose(self, stop_name_a, stop_id_a, stop_name_b, stop_id_b, | |
distance, type=TYPE_WARNING, context=None): | |
e = StationsTooClose( | |
stop_name_a=stop_name_a, stop_id_a=stop_id_a, stop_name_b=stop_name_b, | |
stop_id_b=stop_id_b, distance=distance, context=context, | |
context2=self._context, type=type) | |
self._Report(e) | |
def DifferentStationTooClose(self, stop_name, stop_id, | |
station_stop_name, station_stop_id, | |
distance, type=TYPE_WARNING, context=None): | |
e = DifferentStationTooClose( | |
stop_name=stop_name, stop_id=stop_id, | |
station_stop_name=station_stop_name, station_stop_id=station_stop_id, | |
distance=distance, context=context, context2=self._context, type=type) | |
self._Report(e) | |
def StopTooFarFromShapeWithDistTraveled(self, trip_id, stop_name, stop_id, | |
shape_dist_traveled, shape_id, | |
distance, max_distance, | |
type=TYPE_WARNING): | |
e = StopTooFarFromShapeWithDistTraveled( | |
trip_id=trip_id, stop_name=stop_name, stop_id=stop_id, | |
shape_dist_traveled=shape_dist_traveled, shape_id=shape_id, | |
distance=distance, max_distance=max_distance, type=type) | |
self._Report(e) | |
def ExpirationDate(self, expiration, context=None): | |
e = ExpirationDate(expiration=expiration, context=context, | |
context2=self._context, type=TYPE_WARNING) | |
self._Report(e) | |
def FutureService(self, start_date, context=None): | |
e = FutureService(start_date=start_date, context=context, | |
context2=self._context, type=TYPE_WARNING) | |
self._Report(e) | |
def InvalidLineEnd(self, bad_line_end, context=None): | |
"""bad_line_end is a human readable string.""" | |
e = InvalidLineEnd(bad_line_end=bad_line_end, context=context, | |
context2=self._context, type=TYPE_WARNING) | |
self._Report(e) | |
def TooFastTravel(self, trip_id, prev_stop, next_stop, dist, time, speed, | |
type=TYPE_ERROR): | |
e = TooFastTravel(trip_id=trip_id, prev_stop=prev_stop, | |
next_stop=next_stop, time=time, dist=dist, speed=speed, | |
context=None, context2=self._context, type=type) | |
self._Report(e) | |
def StopWithMultipleRouteTypes(self, stop_name, stop_id, route_id1, route_id2, | |
context=None): | |
e = StopWithMultipleRouteTypes(stop_name=stop_name, stop_id=stop_id, | |
route_id1=route_id1, route_id2=route_id2, | |
context=context, context2=self._context, | |
type=TYPE_WARNING) | |
self._Report(e) | |
def DuplicateTrip(self, trip_id1, route_id1, trip_id2, route_id2, | |
context=None): | |
e = DuplicateTrip(trip_id1=trip_id1, route_id1=route_id1, trip_id2=trip_id2, | |
route_id2=route_id2, context=context, | |
context2=self._context, type=TYPE_WARNING) | |
self._Report(e) | |
def OtherProblem(self, description, context=None, type=TYPE_ERROR): | |
e = OtherProblem(description=description, | |
context=context, context2=self._context, type=type) | |
self._Report(e) | |
def TooManyDaysWithoutService(self, | |
first_day_without_service, | |
last_day_without_service, | |
consecutive_days_without_service, | |
context=None, | |
type=TYPE_WARNING): | |
e = TooManyDaysWithoutService( | |
first_day_without_service=first_day_without_service, | |
last_day_without_service=last_day_without_service, | |
consecutive_days_without_service=consecutive_days_without_service, | |
context=context, | |
context2=self._context, | |
type=type) | |
self._Report(e) | |
class ProblemReporter(ProblemReporterBase): | |
"""This is a basic problem reporter that just prints to console.""" | |
def _Report(self, e): | |
context = e.FormatContext() | |
if context: | |
print context | |
print EncodeUnicode(self._LineWrap(e.FormatProblem(), 78)) | |
@staticmethod | |
def _LineWrap(text, width): | |
""" | |
A word-wrap function that preserves existing line breaks | |
and most spaces in the text. Expects that existing line | |
breaks are posix newlines (\n). | |
Taken from: | |
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/148061 | |
""" | |
return reduce(lambda line, word, width=width: '%s%s%s' % | |
(line, | |
' \n'[(len(line) - line.rfind('\n') - 1 + | |
len(word.split('\n', 1)[0]) >= width)], | |
word), | |
text.split(' ') | |
) | |
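The `reduce`-based wrapper above is compact but hard to read. A minimal standalone sketch of the same greedy algorithm (the name `line_wrap` is a hypothetical renaming; `functools.reduce` is used so it also runs on Python 3):

```python
from functools import reduce

def line_wrap(text, width):
    # Greedy word wrap: keep a word on the current line when it fits,
    # otherwise start a new line. Existing '\n' breaks are preserved
    # because the length check measures from the last newline.
    return reduce(
        lambda line, word: '%s%s%s' % (
            line,
            # Pick ' ' or '\n' by indexing the 2-char string with a bool:
            # True (1) when adding the word would exceed the width.
            ' \n'[(len(line) - line.rfind('\n') - 1 +
                   len(word.split('\n', 1)[0]) >= width)],
            word),
        text.split(' '))

print(line_wrap('aaa bbb ccc ddd', 8))  # prints: aaa bbb / ccc ddd on two lines
```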
class ExceptionWithContext(Exception): | |
def __init__(self, context=None, context2=None, **kwargs): | |
"""Initialize an exception object, saving all keyword arguments in self. | |
context and context2, if present, must be a tuple of (file_name, row_num, | |
row, headers). context2 comes from ProblemReporter.SetFileContext. context | |
was passed in with the keyword arguments. context2 is ignored if context | |
is present.""" | |
Exception.__init__(self) | |
if context: | |
self.__dict__.update(self.ContextTupleToDict(context)) | |
elif context2: | |
self.__dict__.update(self.ContextTupleToDict(context2)) | |
self.__dict__.update(kwargs) | |
if ('type' in kwargs) and (kwargs['type'] == TYPE_WARNING): | |
self._type = TYPE_WARNING | |
else: | |
self._type = TYPE_ERROR | |
def GetType(self): | |
return self._type | |
def IsError(self): | |
return self._type == TYPE_ERROR | |
def IsWarning(self): | |
return self._type == TYPE_WARNING | |
CONTEXT_PARTS = ['file_name', 'row_num', 'row', 'headers'] | |
@staticmethod | |
def ContextTupleToDict(context): | |
"""Convert a tuple representing a context into a dict of (key, value) pairs""" | |
d = {} | |
if not context: | |
return d | |
for k, v in zip(ExceptionWithContext.CONTEXT_PARTS, context): | |
if v != '' and v != None: # Don't ignore int(0), a valid row_num | |
d[k] = v | |
return d | |
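The context-tuple conversion can be sketched on its own; note the explicit comparison against `''` and `None` so that a valid `row_num` of `0` is kept (the function name below is a hypothetical renaming):

```python
CONTEXT_PARTS = ['file_name', 'row_num', 'row', 'headers']

def context_tuple_to_dict(context):
    # Pair each positional context element with its field name, skipping
    # '' and None but keeping falsy-yet-valid values such as row_num 0.
    d = {}
    if not context:
        return d
    for k, v in zip(CONTEXT_PARTS, context):
        if v != '' and v is not None:
            d[k] = v
    return d

print(context_tuple_to_dict(('stops.txt', 0, ['s1'], None)))
```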
def __str__(self): | |
return self.FormatProblem() | |
def GetDictToFormat(self): | |
"""Return a copy of self as a dict, suitable for passing to FormatProblem""" | |
d = {} | |
for k, v in self.__dict__.items(): | |
# TODO: Better handling of unicode/utf-8 within Schedule objects. | |
# Concatenating a unicode and utf-8 str object causes an exception such
# as "UnicodeDecodeError: 'ascii' codec can't decode byte ..." as Python
# tries to convert the str to a unicode. To avoid that happening within
# the problem reporter convert all unicode attributes to utf-8. | |
# Currently valid utf-8 fields are converted to unicode in _ReadCsvDict. | |
# Perhaps all fields should be left as utf-8. | |
d[k] = EncodeUnicode(v) | |
return d | |
def FormatProblem(self, d=None): | |
"""Return a text string describing the problem. | |
Args: | |
d: map returned by GetDictToFormat with formatting added
""" | |
if not d: | |
d = self.GetDictToFormat() | |
output_error_text = self.__class__.ERROR_TEXT % d | |
if ('reason' in d) and d['reason']: | |
return '%s\n%s' % (output_error_text, d['reason']) | |
else: | |
return output_error_text | |
def FormatContext(self): | |
"""Return a text string describing the context""" | |
text = '' | |
if hasattr(self, 'feed_name'): | |
text += "In feed '%s': " % self.feed_name | |
if hasattr(self, 'file_name'): | |
text += self.file_name | |
if hasattr(self, 'row_num'): | |
text += ":%i" % self.row_num | |
if hasattr(self, 'column_name'): | |
text += " column %s" % self.column_name | |
return text | |
def __cmp__(self, y): | |
"""Return an int <0/0/>0 when self is more/same/less significant than y. | |
Subclasses should define this if exceptions should be listed in something | |
other than the order they are reported. | |
Args: | |
y: object to compare to self | |
Returns: | |
An int which is negative if self is more significant than y, 0 if they | |
are similar significance and positive if self is less significant than | |
y. Returning a float won't work. | |
Raises: | |
TypeError by default, meaning objects of the type can not be compared. | |
""" | |
raise TypeError("__cmp__ not defined") | |
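The exception classes below all follow the same pattern: each subclass declares a class-level `ERROR_TEXT` template, and the base class formats it against the instance's attribute dict. A minimal sketch of that pattern (class names here are hypothetical, not part of the library):

```python
class ProblemBase(Exception):
    # Subclasses declare a class-level ERROR_TEXT template; the base class
    # stores keyword arguments and formats the template against them.
    def __init__(self, **kwargs):
        Exception.__init__(self)
        self.__dict__.update(kwargs)

    def format_problem(self):
        # %(name)s placeholders are filled from the saved kwargs.
        return self.__class__.ERROR_TEXT % self.__dict__

class MissingColumnSketch(ProblemBase):
    ERROR_TEXT = 'Missing column %(column_name)s in file %(file_name)s'

e = MissingColumnSketch(column_name='stop_id', file_name='stops.txt')
print(e.format_problem())  # prints: Missing column stop_id in file stops.txt
```

This keeps per-problem code down to a one-line class while still allowing subclasses such as `ExpirationDate` to override the formatting method entirely.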
class MissingFile(ExceptionWithContext): | |
ERROR_TEXT = "File %(file_name)s is not found" | |
class EmptyFile(ExceptionWithContext): | |
ERROR_TEXT = "File %(file_name)s is empty" | |
class UnknownFile(ExceptionWithContext): | |
ERROR_TEXT = 'The file named %(file_name)s was not expected.\n' \ | |
'This may be a misspelled file name or the file may be ' \ | |
'included in a subdirectory. Please check spellings and ' \ | |
'make sure that there are no subdirectories within the feed' | |
class FeedNotFound(ExceptionWithContext): | |
ERROR_TEXT = 'Couldn\'t find a feed named %(feed_name)s' | |
class UnknownFormat(ExceptionWithContext): | |
ERROR_TEXT = 'The feed named %(feed_name)s had an unknown format:\n' \ | |
'feeds should be either .zip files or directories.' | |
class FileFormat(ExceptionWithContext): | |
ERROR_TEXT = 'Files must be encoded in utf-8 and may not contain ' \ | |
'any null bytes (0x00). %(file_name)s %(problem)s.' | |
class MissingColumn(ExceptionWithContext): | |
ERROR_TEXT = 'Missing column %(column_name)s in file %(file_name)s' | |
class UnrecognizedColumn(ExceptionWithContext): | |
ERROR_TEXT = 'Unrecognized column %(column_name)s in file %(file_name)s. ' \ | |
'This might be a misspelled column name (capitalization ' \ | |
'matters!). Or it could be extra information (such as a ' \ | |
'proposed feed extension) that the validator doesn\'t know ' \ | |
'about yet. Extra information is fine; this warning is here ' \ | |
'to catch misspelled optional column names.' | |
class CsvSyntax(ExceptionWithContext): | |
ERROR_TEXT = '%(description)s' | |
class DuplicateColumn(ExceptionWithContext): | |
ERROR_TEXT = 'Column %(header)s appears %(count)i times in file %(file_name)s' | |
class MissingValue(ExceptionWithContext): | |
ERROR_TEXT = 'Missing value for column %(column_name)s' | |
class InvalidValue(ExceptionWithContext): | |
ERROR_TEXT = 'Invalid value %(value)s in field %(column_name)s' | |
class DuplicateID(ExceptionWithContext): | |
ERROR_TEXT = 'Duplicate ID %(value)s in column %(column_name)s' | |
class UnusedStop(ExceptionWithContext): | |
ERROR_TEXT = "%(stop_name)s (ID %(stop_id)s) isn't used in any trips" | |
class UsedStation(ExceptionWithContext): | |
ERROR_TEXT = "%(stop_name)s (ID %(stop_id)s) has location_type=1 " \ | |
"(station) so it should not appear in stop_times" | |
class StopTooFarFromParentStation(ExceptionWithContext): | |
ERROR_TEXT = ( | |
"%(stop_name)s (ID %(stop_id)s) is too far from its parent station " | |
"%(parent_stop_name)s (ID %(parent_stop_id)s) : %(distance).2f meters.") | |
def __cmp__(self, y): | |
# Sort in decreasing order because more distance is more significant. | |
return cmp(y.distance, self.distance) | |
class StopsTooClose(ExceptionWithContext): | |
ERROR_TEXT = ( | |
"The stops \"%(stop_name_a)s\" (ID %(stop_id_a)s) and \"%(stop_name_b)s\"" | |
" (ID %(stop_id_b)s) are %(distance)0.2fm apart and probably represent " | |
"the same location.") | |
def __cmp__(self, y): | |
# Sort in increasing order because less distance is more significant. | |
return cmp(self.distance, y.distance) | |
class StationsTooClose(ExceptionWithContext): | |
ERROR_TEXT = ( | |
"The stations \"%(stop_name_a)s\" (ID %(stop_id_a)s) and " | |
"\"%(stop_name_b)s\" (ID %(stop_id_b)s) are %(distance)0.2fm apart and " | |
"probably represent the same location.") | |
def __cmp__(self, y): | |
# Sort in increasing order because less distance is more significant. | |
return cmp(self.distance, y.distance) | |
class DifferentStationTooClose(ExceptionWithContext): | |
ERROR_TEXT = ( | |
"The parent_station of stop \"%(stop_name)s\" (ID %(stop_id)s) is not " | |
"station \"%(station_stop_name)s\" (ID %(station_stop_id)s) but they are " | |
"only %(distance)0.2fm apart.") | |
def __cmp__(self, y): | |
# Sort in increasing order because less distance is more significant. | |
return cmp(self.distance, y.distance) | |
class StopTooFarFromShapeWithDistTraveled(ExceptionWithContext): | |
ERROR_TEXT = ( | |
"For trip %(trip_id)s the stop \"%(stop_name)s\" (ID %(stop_id)s) is " | |
"%(distance).0f meters away from the corresponding point " | |
"(shape_dist_traveled: %(shape_dist_traveled)f) on shape %(shape_id)s. " | |
"It should be closer than %(max_distance).0f meters.") | |
def __cmp__(self, y): | |
# Sort in decreasing order because more distance is more significant. | |
return cmp(y.distance, self.distance) | |
class TooManyDaysWithoutService(ExceptionWithContext): | |
ERROR_TEXT = "There are %(consecutive_days_without_service)i consecutive"\ | |
" days, from %(first_day_without_service)s to" \ | |
" %(last_day_without_service)s, without any scheduled service." \ | |
" Please ensure this is intentional." | |
class ExpirationDate(ExceptionWithContext): | |
def FormatProblem(self, d=None): | |
if not d: | |
d = self.GetDictToFormat() | |
expiration = d['expiration'] | |
formatted_date = time.strftime("%B %d, %Y", | |
time.localtime(expiration)) | |
if (expiration < time.mktime(time.localtime())): | |
return "This feed expired on %s" % formatted_date | |
else: | |
return "This feed will soon expire, on %s" % formatted_date | |
class FutureService(ExceptionWithContext): | |
def FormatProblem(self, d=None): | |
if not d: | |
d = self.GetDictToFormat() | |
formatted_date = time.strftime("%B %d, %Y", time.localtime(d['start_date'])) | |
return ("The earliest service date in this feed is in the future, on %s. " | |
"Published feeds must always include the current date." % | |
formatted_date) | |
class InvalidLineEnd(ExceptionWithContext): | |
ERROR_TEXT = "Each line must end with CR LF or LF except for the last line " \ | |
"of the file. This line ends with \"%(bad_line_end)s\"." | |
class StopWithMultipleRouteTypes(ExceptionWithContext): | |
ERROR_TEXT = "Stop %(stop_name)s (ID=%(stop_id)s) belongs to both " \ | |
"subway (ID=%(route_id1)s) and bus line (ID=%(route_id2)s)." | |
class TooFastTravel(ExceptionWithContext): | |
def FormatProblem(self, d=None): | |
if not d: | |
d = self.GetDictToFormat() | |
if not d['speed']: | |
return "High speed travel detected in trip %(trip_id)s: %(prev_stop)s" \ | |
" to %(next_stop)s. %(dist).0f meters in %(time)d seconds." % d | |
else: | |
return "High speed travel detected in trip %(trip_id)s: %(prev_stop)s" \ | |
" to %(next_stop)s. %(dist).0f meters in %(time)d seconds." \ | |
" (%(speed).0f km/h)." % d | |
def __cmp__(self, y): | |
# Sort in decreasing order because more distance is more significant. We | |
# can't sort by speed because not all TooFastTravel objects have a speed. | |
return cmp(y.dist, self.dist) | |
class DuplicateTrip(ExceptionWithContext): | |
ERROR_TEXT = "Trip %(trip_id1)s of route %(route_id1)s might be duplicated " \ | |
"with trip %(trip_id2)s of route %(route_id2)s. They go " \ | |
"through the same stops with same service." | |
class OtherProblem(ExceptionWithContext): | |
ERROR_TEXT = '%(description)s' | |
class ExceptionProblemReporter(ProblemReporter): | |
def __init__(self, raise_warnings=False): | |
ProblemReporterBase.__init__(self) | |
self.raise_warnings = raise_warnings | |
def _Report(self, e): | |
if self.raise_warnings or e.IsError(): | |
raise e | |
else: | |
ProblemReporter._Report(self, e) | |
default_problem_reporter = ExceptionProblemReporter() | |
# Add a default handler to send log messages to console | |
console = logging.StreamHandler() | |
console.setLevel(logging.WARNING) | |
log = logging.getLogger("schedule_builder") | |
log.addHandler(console) | |
class Error(Exception): | |
pass | |
def IsValidURL(url): | |
"""Checks the validity of a URL value.""" | |
# TODO: Add more thorough checking of URL | |
return url.startswith(u'http://') or url.startswith(u'https://') | |
def IsValidColor(color): | |
"""Checks the validity of a hex color value.""" | |
return re.match('^[0-9a-fA-F]{6}$', color) is not None
def ColorLuminance(color): | |
"""Compute the brightness of an sRGB color using the formula from | |
http://www.w3.org/TR/2000/WD-AERT-20000426#color-contrast. | |
Args: | |
color: a string of six hex digits in the format verified by IsValidColor(). | |
Returns: | |
A floating-point number between 0.0 (black) and 255.0 (white). """ | |
r = int(color[0:2], 16) | |
g = int(color[2:4], 16) | |
b = int(color[4:6], 16) | |
return (299*r + 587*g + 114*b) / 1000.0 | |
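A standalone sketch of the same W3C luminance formula (the function name is a hypothetical renaming):

```python
def color_luminance(color):
    # Perceived brightness of an RRGGBB hex color, per the W3C formula:
    # green is weighted most heavily because the eye is most sensitive to it.
    r = int(color[0:2], 16)
    g = int(color[2:4], 16)
    b = int(color[4:6], 16)
    return (299 * r + 587 * g + 114 * b) / 1000.0

print(color_luminance('FFFFFF'))  # 255.0
print(color_luminance('000000'))  # 0.0
```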
def IsEmpty(value): | |
return value is None or (isinstance(value, basestring) and not value.strip()) | |
def FindUniqueId(dic): | |
"""Return a string not used as a key in the dictionary dic""" | |
name = str(len(dic)) | |
while name in dic: | |
name = str(random.randint(1, 999999999)) | |
return name | |
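A sketch of the unique-key strategy above: try the dict's size first (cheap and usually free), then fall back to random probing (standalone renaming, not the library API):

```python
import random

def find_unique_id(dic):
    # First candidate is the dict's size; on collision, probe random
    # integers until an unused key is found.
    name = str(len(dic))
    while name in dic:
        name = str(random.randint(1, 999999999))
    return name

d = {'0': 'existing'}
new_id = find_unique_id(d)
print(new_id not in d)  # True
```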
def TimeToSecondsSinceMidnight(time_string): | |
"""Convert HHH:MM:SS into seconds since midnight. | |
For example "01:02:03" returns 3723. The leading zero of the hours may be | |
omitted. HH may be more than 23 if the time is on the following day.""" | |
m = re.match(r'(\d{1,3}):([0-5]\d):([0-5]\d)$', time_string) | |
# ignored: matching for leap seconds | |
if not m: | |
raise Error, 'Bad HH:MM:SS "%s"' % time_string | |
return int(m.group(1)) * 3600 + int(m.group(2)) * 60 + int(m.group(3)) | |
def FormatSecondsSinceMidnight(s): | |
"""Formats an int number of seconds past midnight into a string | |
as "HH:MM:SS".""" | |
return "%02d:%02d:%02d" % (s / 3600, (s / 60) % 60, s % 60) | |
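The two time helpers above are inverses of each other. A standalone sketch of the round trip (names are hypothetical renamings; `//` replaces `/` so the sketch also behaves correctly under Python 3's true division):

```python
import re

def time_to_seconds(time_string):
    # Hours may exceed 23 (service past midnight) and run to three digits.
    m = re.match(r'(\d{1,3}):([0-5]\d):([0-5]\d)$', time_string)
    if not m:
        raise ValueError('Bad HH:MM:SS "%s"' % time_string)
    return int(m.group(1)) * 3600 + int(m.group(2)) * 60 + int(m.group(3))

def format_seconds(s):
    # Integer division keeps each component an int on both Python 2 and 3.
    return '%02d:%02d:%02d' % (s // 3600, (s // 60) % 60, s % 60)

print(time_to_seconds('01:02:03'))  # 3723
print(format_seconds(90000))        # 25:00:00
```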
def DateStringToDateObject(date_string): | |
"""Return a date object for a string "YYYYMMDD".""" | |
# If this becomes a bottleneck date objects could be cached | |
return datetime.date(int(date_string[0:4]), int(date_string[4:6]), | |
int(date_string[6:8])) | |
def FloatStringToFloat(float_string): | |
"""Convert a float as a string to a float or raise an exception""" | |
# Will raise TypeError unless a string | |
if not re.match(r"^[+-]?\d+(\.\d+)?$", float_string): | |
raise ValueError() | |
return float(float_string) | |
def NonNegIntStringToInt(int_string): | |
"""Convert an non-negative integer string to an int or raise an exception""" | |
# Will raise TypeError unless a string | |
if not re.match(r"^(?:0|[1-9]\d*)$", int_string): | |
raise ValueError() | |
return int(int_string) | |
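Both parsers above validate with a regex before converting, so they reject forms that Python's built-in conversions would accept but GTFS does not. A standalone sketch (renamed functions, same patterns):

```python
import re

def float_string_to_float(s):
    # Reject forms float() accepts but GTFS does not, e.g. '1e3' or '.5'.
    if not re.match(r'^[+-]?\d+(\.\d+)?$', s):
        raise ValueError('not a plain decimal: %r' % s)
    return float(s)

def non_neg_int_string_to_int(s):
    # Reject leading zeros ('007'), signs, and whitespace that int() allows.
    if not re.match(r'^(?:0|[1-9]\d*)$', s):
        raise ValueError('not a canonical non-negative int: %r' % s)
    return int(s)

print(float_string_to_float('-3.5'))    # -3.5
print(non_neg_int_string_to_int('42'))  # 42
```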
EARTH_RADIUS = 6378135 # in meters | |
def ApproximateDistance(degree_lat1, degree_lng1, degree_lat2, degree_lng2): | |
"""Compute approximate distance between two points in meters. Assumes the | |
Earth is a sphere.""" | |
# TODO: change to ellipsoid approximation, such as | |
# http://www.codeguru.com/Cpp/Cpp/algorithms/article.php/c5115/ | |
lat1 = math.radians(degree_lat1) | |
lng1 = math.radians(degree_lng1) | |
lat2 = math.radians(degree_lat2) | |
lng2 = math.radians(degree_lng2) | |
dlat = math.sin(0.5 * (lat2 - lat1)) | |
dlng = math.sin(0.5 * (lng2 - lng1)) | |
x = dlat * dlat + dlng * dlng * math.cos(lat1) * math.cos(lat2) | |
return EARTH_RADIUS * (2 * math.atan2(math.sqrt(x), | |
math.sqrt(max(0.0, 1.0 - x)))) | |
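`ApproximateDistance` is the haversine formula on a spherical Earth. A self-contained sketch (renamed, same math) with a sanity check that one degree of latitude spans roughly 111 km:

```python
import math

EARTH_RADIUS = 6378135  # meters

def approximate_distance(lat1_deg, lng1_deg, lat2_deg, lng2_deg):
    # Haversine great-circle distance on a sphere; the max(0.0, ...) guard
    # protects against tiny negative values from floating-point rounding.
    lat1, lng1, lat2, lng2 = map(math.radians,
                                 (lat1_deg, lng1_deg, lat2_deg, lng2_deg))
    dlat = math.sin(0.5 * (lat2 - lat1))
    dlng = math.sin(0.5 * (lng2 - lng1))
    x = dlat * dlat + dlng * dlng * math.cos(lat1) * math.cos(lat2)
    return EARTH_RADIUS * 2 * math.atan2(math.sqrt(x),
                                         math.sqrt(max(0.0, 1.0 - x)))

# One degree of latitude is about 111 km on a spherical Earth.
print(approximate_distance(0, 0, 1, 0))
```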
def ApproximateDistanceBetweenStops(stop1, stop2): | |
"""Compute approximate distance between two stops in meters. Assumes the | |
Earth is a sphere.""" | |
return ApproximateDistance(stop1.stop_lat, stop1.stop_lon, | |
stop2.stop_lat, stop2.stop_lon) | |
class GenericGTFSObject(object): | |
"""Object with arbitrary attributes which may be added to a schedule. | |
This class should be used as the base class for GTFS objects which may | |
be stored in a Schedule. It defines some methods for reading and writing | |
attributes. If self._schedule is None then the object is not in a Schedule.
Subclasses must: | |
* define an __init__ method which sets the _schedule member to None or a | |
weakref to a Schedule | |
* Set the _TABLE_NAME class variable to a name such as 'stops', 'agency', ... | |
* define methods to validate objects of that type | |
""" | |
def __getitem__(self, name): | |
"""Return a unicode or str representation of name or "" if not set.""" | |
if name in self.__dict__ and self.__dict__[name] is not None: | |
return "%s" % self.__dict__[name] | |
else: | |
return "" | |
def __getattr__(self, name): | |
"""Return None or the default value if name is a known attribute. | |
This method is only called when name is not found in __dict__. | |
""" | |
if name in self.__class__._FIELD_NAMES: | |
return None | |
else: | |
raise AttributeError(name) | |
def iteritems(self): | |
"""Return a iterable for (name, value) pairs of public attributes.""" | |
for name, value in self.__dict__.iteritems(): | |
if (not name) or name[0] == "_": | |
continue | |
yield name, value | |
def __setattr__(self, name, value): | |
"""Set an attribute, adding name to the list of columns as needed.""" | |
object.__setattr__(self, name, value) | |
if name[0] != '_' and self._schedule: | |
self._schedule.AddTableColumn(self.__class__._TABLE_NAME, name) | |
def __eq__(self, other): | |
"""Return true iff self and other are equivalent""" | |
if not other: | |
return False | |
if id(self) == id(other): | |
return True | |
for k in self.keys().union(other.keys()): | |
# use __getitem__ which returns "" for missing column values
if self[k] != other[k]: | |
return False | |
return True | |
def __ne__(self, other): | |
return not self.__eq__(other) | |
def __repr__(self): | |
return "<%s %s>" % (self.__class__.__name__, sorted(self.iteritems())) | |
def keys(self): | |
"""Return iterable of columns used by this object.""" | |
columns = set() | |
for name in vars(self): | |
if (not name) or name[0] == "_": | |
continue | |
columns.add(name) | |
return columns | |
def _ColumnNames(self): | |
return self.keys() | |
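The dynamic-attribute behavior of `GenericGTFSObject` can be illustrated with a minimal sketch (the class name `DynamicRow` is hypothetical): unknown columns live in `__dict__`, known-but-unset columns read as `None` via `__getattr__`, and `[]` stringifies values (with `''` for unset) so rows compare uniformly.

```python
class DynamicRow(object):
    _FIELD_NAMES = ['stop_id', 'stop_name', 'stop_lat', 'stop_lon']

    def __getattr__(self, name):
        # Called only when name is not found in __dict__: known columns
        # default to None, anything else is a genuine AttributeError.
        if name in self.__class__._FIELD_NAMES:
            return None
        raise AttributeError(name)

    def __getitem__(self, name):
        # Indexing always yields a string; unset columns read as ''.
        value = self.__dict__.get(name)
        return '' if value is None else '%s' % value

row = DynamicRow()
row.stop_id = 'S1'
print(row.stop_id)      # prints: S1
print(row.stop_name)    # prints: None
print(row['stop_lat'])  # prints an empty line
```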
class Stop(GenericGTFSObject): | |
"""Represents a single stop. A stop must have a latitude, longitude and name. | |
Callers may assign arbitrary values to instance attributes. | |
Stop.ParseAttributes validates attributes according to GTFS and converts some | |
into native types. ParseAttributes may delete invalid attributes. | |
Accessing an attribute that is a column in GTFS will return None if this | |
object does not have a value or it is ''. | |
A Stop object acts like a dict with string values. | |
Attributes: | |
stop_lat: a float representing the latitude of the stop | |
stop_lon: a float representing the longitude of the stop | |
All other attributes are strings. | |
""" | |
_REQUIRED_FIELD_NAMES = ['stop_id', 'stop_name', 'stop_lat', 'stop_lon'] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + \ | |
['stop_desc', 'zone_id', 'stop_url', 'stop_code', | |
'location_type', 'parent_station'] | |
_TABLE_NAME = 'stops' | |
def __init__(self, lat=None, lng=None, name=None, stop_id=None, | |
field_dict=None, stop_code=None): | |
"""Initialize a new Stop object. | |
Args: | |
field_dict: A dictionary mapping attribute name to unicode string | |
lat: a float, ignored when field_dict is present | |
lng: a float, ignored when field_dict is present | |
name: a string, ignored when field_dict is present | |
stop_id: a string, ignored when field_dict is present | |
stop_code: a string, ignored when field_dict is present | |
""" | |
self._schedule = None | |
if field_dict: | |
if isinstance(field_dict, Stop): | |
# Special case so that we don't need to re-parse the attributes to
# native types. iteritems returns all attributes that don't start with _.
for k, v in field_dict.iteritems(): | |
self.__dict__[k] = v | |
else: | |
self.__dict__.update(field_dict) | |
else: | |
if lat is not None: | |
self.stop_lat = lat | |
if lng is not None: | |
self.stop_lon = lng | |
if name is not None: | |
self.stop_name = name | |
if stop_id is not None: | |
self.stop_id = stop_id | |
if stop_code is not None: | |
self.stop_code = stop_code | |
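# Example usage (hypothetical values; in normal use stops are created via
# Schedule.AddStop or loaded from a feed with field_dict):
#   stop = Stop(lat=36.425288, lng=-117.133162, name="Demo Stop",
#               stop_id="DEMO001")
#   stop.stop_desc = "An optional description"  # arbitrary attributes OK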
def GetTrips(self, schedule=None): | |
"""Return iterable containing trips that visit this stop.""" | |
return [trip for trip, ss in self._GetTripSequence(schedule)] | |
def _GetTripSequence(self, schedule=None): | |
"""Return a list of (trip, stop_sequence) for all trips visiting this stop. | |
A trip may appear in the list multiple times, once per visit, each with a
different stop_sequence.
stop_sequence is an integer. | |
Args: | |
schedule: Deprecated, do not use. | |
""" | |
if schedule is None: | |
schedule = getattr(self, "_schedule", None) | |
if schedule is None: | |
warnings.warn("No longer supported. _schedule attribute is used to get " | |
"stop_times table", DeprecationWarning) | |
cursor = schedule._connection.cursor() | |
cursor.execute("SELECT trip_id,stop_sequence FROM stop_times " | |
"WHERE stop_id=?", | |
(self.stop_id, )) | |
return [(schedule.GetTrip(row[0]), row[1]) for row in cursor] | |
def _GetTripIndex(self, schedule=None): | |
"""Return a list of (trip, index). | |
trip: a Trip object | |
index: an offset in trip.GetStopTimes() | |
""" | |
trip_index = [] | |
for trip, sequence in self._GetTripSequence(schedule): | |
for index, st in enumerate(trip.GetStopTimes()): | |
if st.stop_sequence == sequence: | |
trip_index.append((trip, index)) | |
break | |
else: | |
raise RuntimeError("stop_sequence %d not found in trip_id %s" %
(sequence, trip.trip_id))
return trip_index | |
def GetStopTimeTrips(self, schedule=None): | |
"""Return a list of (time, (trip, index), is_timepoint). | |
time: an integer. It might be interpolated. | |
trip: a Trip object. | |
index: the offset of this stop in trip.GetStopTimes(), which may be | |
different from the stop_sequence. | |
is_timepoint: a bool | |
""" | |
time_trips = [] | |
for trip, index in self._GetTripIndex(schedule): | |
secs, stoptime, is_timepoint = trip.GetTimeInterpolatedStops()[index] | |
time_trips.append((secs, (trip, index), is_timepoint)) | |
return time_trips | |
def ParseAttributes(self, problems): | |
"""Parse all attributes, calling problems as needed.""" | |
# Need to use items() instead of iteritems() because _CheckAndSetAttr may | |
# modify self.__dict__ | |
for name, value in vars(self).items(): | |
if name[0] == "_": | |
continue | |
self._CheckAndSetAttr(name, value, problems) | |
def _CheckAndSetAttr(self, name, value, problems): | |
"""If value is valid for attribute name store it. | |
If value is not valid call problems. Return a new value of the correct type | |
or None if value couldn't be converted. | |
""" | |
if name == 'stop_lat': | |
try: | |
if isinstance(value, (float, int)): | |
self.stop_lat = value | |
else: | |
self.stop_lat = FloatStringToFloat(value) | |
except (ValueError, TypeError): | |
problems.InvalidValue('stop_lat', value) | |
del self.stop_lat | |
else: | |
if self.stop_lat > 90 or self.stop_lat < -90: | |
problems.InvalidValue('stop_lat', value) | |
elif name == 'stop_lon': | |
try: | |
if isinstance(value, (float, int)): | |
self.stop_lon = value | |
else: | |
self.stop_lon = FloatStringToFloat(value) | |
except (ValueError, TypeError): | |
problems.InvalidValue('stop_lon', value) | |
del self.stop_lon | |
else: | |
if self.stop_lon > 180 or self.stop_lon < -180: | |
problems.InvalidValue('stop_lon', value) | |
elif name == 'stop_url': | |
if value and not IsValidURL(value): | |
problems.InvalidValue('stop_url', value) | |
del self.stop_url | |
elif name == 'location_type': | |
if value == '': | |
self.location_type = 0 | |
else: | |
try: | |
self.location_type = int(value) | |
except (ValueError, TypeError): | |
problems.InvalidValue('location_type', value) | |
del self.location_type | |
else: | |
if self.location_type not in (0, 1): | |
problems.InvalidValue('location_type', value, type=TYPE_WARNING) | |
def __getattr__(self, name): | |
"""Return None or the default value if name is a known attribute. | |
This method is only called when name is not found in __dict__. | |
""" | |
if name == "location_type": | |
return 0 | |
elif name == "trip_index": | |
return self._GetTripIndex() | |
elif name in Stop._FIELD_NAMES: | |
return None | |
else: | |
raise AttributeError(name) | |
def Validate(self, problems=default_problem_reporter): | |
# First check that all required fields are present because ParseAttributes | |
# may remove invalid attributes. | |
for required in Stop._REQUIRED_FIELD_NAMES: | |
if IsEmpty(getattr(self, required, None)): | |
# TODO: For now I'm keeping the API stable but it would be cleaner to | |
# treat whitespace stop_id as invalid, instead of missing | |
problems.MissingValue(required) | |
# Check individual values and convert to native types | |
self.ParseAttributes(problems) | |
# Check that this object is consistent with itself | |
if (self.stop_lat is not None and self.stop_lon is not None and
abs(self.stop_lat) < 1.0 and abs(self.stop_lon) < 1.0):
problems.InvalidValue('stop_lat', self.stop_lat, | |
'Stop location too close to 0, 0', | |
type=TYPE_WARNING) | |
if (self.stop_desc is not None and self.stop_name is not None and | |
self.stop_desc and self.stop_name and | |
not IsEmpty(self.stop_desc) and | |
self.stop_name.strip().lower() == self.stop_desc.strip().lower()): | |
problems.InvalidValue('stop_desc', self.stop_desc, | |
'stop_desc should not be the same as stop_name') | |
if self.parent_station and self.location_type == 1: | |
problems.InvalidValue('parent_station', self.parent_station, | |
'Stop row with location_type=1 (a station) must ' | |
'not have a parent_station') | |
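# Validation sketch (hypothetical values): ParseAttributes converts string
# coordinates to floats and range-checks them, e.g.:
#   stop = Stop(lat=100.0, lng=0.5, name="Bad stop", stop_id="BAD1")
#   stop.Validate()  # reports InvalidValue('stop_lat', ...) because
#                    # stop_lat must be within [-90, 90]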
class Route(GenericGTFSObject): | |
"""Represents a single route.""" | |
_REQUIRED_FIELD_NAMES = [ | |
'route_id', 'route_short_name', 'route_long_name', 'route_type' | |
] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + [ | |
'agency_id', 'route_desc', 'route_url', 'route_color', 'route_text_color' | |
] | |
_ROUTE_TYPES = { | |
0: {'name':'Tram', 'max_speed':100}, | |
1: {'name':'Subway', 'max_speed':150}, | |
2: {'name':'Rail', 'max_speed':300}, | |
3: {'name':'Bus', 'max_speed':100}, | |
4: {'name':'Ferry', 'max_speed':80}, | |
5: {'name':'Cable Car', 'max_speed':50}, | |
6: {'name':'Gondola', 'max_speed':50}, | |
7: {'name':'Funicular', 'max_speed':50}, | |
} | |
_ROUTE_TYPE_IDS = set(_ROUTE_TYPES.keys())
# Create a reverse lookup dict of route type names to route type ids.
_ROUTE_TYPE_NAMES = dict((v['name'], k) for k, v in _ROUTE_TYPES.items())
_TABLE_NAME = 'routes' | |
def __init__(self, short_name=None, long_name=None, route_type=None, | |
route_id=None, agency_id=None, field_dict=None): | |
self._schedule = None | |
self._trips = [] | |
if not field_dict: | |
field_dict = {} | |
if short_name is not None: | |
field_dict['route_short_name'] = short_name | |
if long_name is not None: | |
field_dict['route_long_name'] = long_name | |
if route_type is not None: | |
if route_type in Route._ROUTE_TYPE_NAMES: | |
self.route_type = Route._ROUTE_TYPE_NAMES[route_type] | |
else: | |
field_dict['route_type'] = route_type | |
if route_id is not None: | |
field_dict['route_id'] = route_id | |
if agency_id is not None: | |
field_dict['agency_id'] = agency_id | |
self.__dict__.update(field_dict) | |
def AddTrip(self, schedule, headsign, service_period=None, trip_id=None): | |
""" Adds a trip to this route. | |
Args: | |
headsign: headsign of the trip as a string | |
Returns: | |
a new Trip object | |
""" | |
if trip_id is None: | |
trip_id = unicode(len(schedule.trips)) | |
if service_period is None: | |
service_period = schedule.GetDefaultServicePeriod() | |
trip = Trip(route=self, headsign=headsign, service_period=service_period, | |
trip_id=trip_id) | |
schedule.AddTripObject(trip) | |
return trip | |
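# Example usage (hypothetical identifiers; assumes a Schedule with a
# default ServicePeriod already set):
#   route = Route(short_name="10", long_name="Downtown Express",
#                 route_type="Bus", route_id="R10")
#   trip = route.AddTrip(schedule, headsign="To Downtown")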
def _AddTripObject(self, trip): | |
# Only class Schedule may call this. Users of the API should call | |
# Route.AddTrip or schedule.AddTripObject. | |
self._trips.append(trip) | |
def __getattr__(self, name): | |
"""Return None or the default value if name is a known attribute. | |
This method overrides GenericGTFSObject.__getattr__ to provide backwards | |
compatible access to trips. | |
""" | |
if name == 'trips': | |
return self._trips | |
else: | |
return GenericGTFSObject.__getattr__(self, name) | |
def GetPatternIdTripDict(self): | |
"""Return a dictionary that maps pattern_id to a list of Trip objects.""" | |
d = {} | |
for t in self._trips: | |
d.setdefault(t.pattern_id, []).append(t) | |
return d | |
def Validate(self, problems=default_problem_reporter): | |
if IsEmpty(self.route_id): | |
problems.MissingValue('route_id') | |
if IsEmpty(self.route_type): | |
problems.MissingValue('route_type') | |
if IsEmpty(self.route_short_name) and IsEmpty(self.route_long_name): | |
problems.InvalidValue('route_short_name', | |
self.route_short_name, | |
'Both route_short_name and ' | |
'route_long_name are blank.')
if self.route_short_name and len(self.route_short_name) > 6: | |
problems.InvalidValue('route_short_name', | |
self.route_short_name, | |
'This route_short_name is relatively long, which ' | |
'probably means that it contains a place name. ' | |
'You should only use this field to hold a short ' | |
'code that riders use to identify a route. ' | |
'If this route doesn\'t have such a code, it\'s ' | |
'OK to leave this field empty.', type=TYPE_WARNING) | |
if self.route_short_name and self.route_long_name: | |
short_name = self.route_short_name.strip().lower() | |
long_name = self.route_long_name.strip().lower() | |
if (long_name.startswith(short_name + ' ') or | |
long_name.startswith(short_name + '(') or | |
long_name.startswith(short_name + '-')): | |
problems.InvalidValue('route_long_name', | |
self.route_long_name, | |
'route_long_name shouldn\'t contain ' | |
'the route_short_name value, as both ' | |
'fields are often displayed ' | |
'side-by-side.', type=TYPE_WARNING) | |
if long_name == short_name: | |
problems.InvalidValue('route_long_name', | |
self.route_long_name, | |
'route_long_name shouldn\'t be the same as '
'the route_short_name value, as both ' | |
'fields are often displayed ' | |
'side-by-side. It\'s OK to omit either the ' | |
'short or long name (but not both).', | |
type=TYPE_WARNING) | |
if (self.route_desc and | |
((self.route_desc == self.route_short_name) or | |
(self.route_desc == self.route_long_name))): | |
problems.InvalidValue('route_desc', | |
self.route_desc, | |
'route_desc shouldn\'t be the same as ' | |
'route_short_name or route_long_name') | |
if self.route_type is not None: | |
try: | |
if not isinstance(self.route_type, int): | |
self.route_type = NonNegIntStringToInt(self.route_type) | |
except (TypeError, ValueError): | |
problems.InvalidValue('route_type', self.route_type) | |
else: | |
if self.route_type not in Route._ROUTE_TYPE_IDS: | |
problems.InvalidValue('route_type', | |
self.route_type, | |
type=TYPE_WARNING) | |
if self.route_url and not IsValidURL(self.route_url): | |
problems.InvalidValue('route_url', self.route_url) | |
txt_lum = ColorLuminance('000000') # black (default) | |
bg_lum = ColorLuminance('ffffff') # white (default) | |
if self.route_color: | |
if IsValidColor(self.route_color): | |
bg_lum = ColorLuminance(self.route_color) | |
else: | |
problems.InvalidValue('route_color', self.route_color, | |
'route_color should be a valid color description ' | |
'which consists of 6 hexadecimal characters ' | |
'representing the RGB values. Example: 44AA06') | |
if self.route_text_color: | |
if IsValidColor(self.route_text_color): | |
txt_lum = ColorLuminance(self.route_text_color) | |
else: | |
problems.InvalidValue('route_text_color', self.route_text_color, | |
'route_text_color should be a valid color ' | |
'description, which consists of 6 hexadecimal ' | |
'characters representing the RGB values. ' | |
'Example: 44AA06') | |
if abs(txt_lum - bg_lum) < 510/7.: | |
# http://www.w3.org/TR/2000/WD-AERT-20000426#color-contrast recommends | |
# a threshold of 125, but that is for normal text and too harsh for | |
# big colored logos like line names, so we keep the original threshold | |
# from r541 (but note that weight has shifted between RGB components). | |
problems.InvalidValue('route_color', self.route_color, | |
'The route_text_color and route_color should ' | |
'be set to contrasting colors, as they are used ' | |
'as the text and background color (respectively) ' | |
'for displaying route names. When left blank, ' | |
'route_text_color defaults to 000000 (black) and ' | |
'route_color defaults to FFFFFF (white). A common ' | |
'source of issues here is setting route_color to ' | |
'a dark color, while leaving route_text_color set ' | |
'to black. In this case, route_text_color should ' | |
'be set to a lighter color like FFFFFF to ensure ' | |
'a legible contrast between the two.', | |
type=TYPE_WARNING) | |
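# Contrast sketch: the check warns when |ColorLuminance(text) -
# ColorLuminance(background)| < 510/7 (about 72.9). The defaults, 000000
# text on FFFFFF, pass easily, while leaving route_text_color black over a
# dark route_color such as 202020 would trigger the warning.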
def SortListOfTripByTime(trips): | |
trips.sort(key=Trip.GetStartTime) | |
class StopTime(object): | |
""" | |
Represents a single stop of a trip. StopTime contains most of the columns | |
from the stop_times.txt file. It does not contain trip_id, which is implied | |
by the Trip used to access it. | |
See the Google Transit Feed Specification for the semantic details. | |
stop: A Stop object | |
arrival_time: str in the form HH:MM:SS; readonly after __init__ | |
departure_time: str in the form HH:MM:SS; readonly after __init__ | |
arrival_secs: int number of seconds since midnight | |
departure_secs: int number of seconds since midnight | |
stop_headsign: str | |
pickup_type: int | |
drop_off_type: int | |
shape_dist_traveled: float | |
stop_id: str; readonly | |
stop_time: The only time given for this stop. If present, it is used | |
for both arrival and departure time. | |
stop_sequence: int | |
""" | |
_REQUIRED_FIELD_NAMES = ['trip_id', 'arrival_time', 'departure_time', | |
'stop_id', 'stop_sequence'] | |
_OPTIONAL_FIELD_NAMES = ['stop_headsign', 'pickup_type', | |
'drop_off_type', 'shape_dist_traveled'] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + _OPTIONAL_FIELD_NAMES | |
_SQL_FIELD_NAMES = ['trip_id', 'arrival_secs', 'departure_secs', | |
'stop_id', 'stop_sequence', 'stop_headsign', | |
'pickup_type', 'drop_off_type', 'shape_dist_traveled'] | |
__slots__ = ('arrival_secs', 'departure_secs', 'stop_headsign', 'stop',
'pickup_type', 'drop_off_type', 'shape_dist_traveled',
'stop_sequence')
def __init__(self, problems, stop, | |
arrival_time=None, departure_time=None, | |
stop_headsign=None, pickup_type=None, drop_off_type=None, | |
shape_dist_traveled=None, arrival_secs=None, | |
departure_secs=None, stop_time=None, stop_sequence=None): | |
if stop_time != None: | |
arrival_time = departure_time = stop_time | |
if arrival_secs != None: | |
self.arrival_secs = arrival_secs | |
elif arrival_time in (None, ""): | |
self.arrival_secs = None # Untimed | |
arrival_time = None | |
else: | |
try: | |
self.arrival_secs = TimeToSecondsSinceMidnight(arrival_time) | |
except Error: | |
problems.InvalidValue('arrival_time', arrival_time) | |
self.arrival_secs = None | |
if departure_secs != None: | |
self.departure_secs = departure_secs | |
elif departure_time in (None, ""): | |
self.departure_secs = None | |
departure_time = None | |
else: | |
try: | |
self.departure_secs = TimeToSecondsSinceMidnight(departure_time) | |
except Error: | |
problems.InvalidValue('departure_time', departure_time) | |
self.departure_secs = None | |
if not isinstance(stop, Stop): | |
# Not quite correct, but better than letting the problem propagate | |
problems.InvalidValue('stop', stop) | |
self.stop = stop | |
self.stop_headsign = stop_headsign | |
if pickup_type in (None, ""): | |
self.pickup_type = None | |
else: | |
try: | |
pickup_type = int(pickup_type) | |
except ValueError: | |
problems.InvalidValue('pickup_type', pickup_type) | |
else: | |
if pickup_type < 0 or pickup_type > 3: | |
problems.InvalidValue('pickup_type', pickup_type) | |
self.pickup_type = pickup_type | |
if drop_off_type in (None, ""): | |
self.drop_off_type = None | |
else: | |
try: | |
drop_off_type = int(drop_off_type) | |
except ValueError: | |
problems.InvalidValue('drop_off_type', drop_off_type) | |
else: | |
if drop_off_type < 0 or drop_off_type > 3: | |
problems.InvalidValue('drop_off_type', drop_off_type) | |
self.drop_off_type = drop_off_type | |
if (self.pickup_type == 1 and self.drop_off_type == 1 and | |
self.arrival_secs == None and self.departure_secs == None): | |
problems.OtherProblem('This stop time has a pickup_type and ' | |
'drop_off_type of 1, indicating that riders ' | |
'can\'t get on or off here. Since it doesn\'t ' | |
'define a timepoint either, this entry serves no ' | |
'purpose and should be excluded from the trip.', | |
type=TYPE_WARNING) | |
if ((self.arrival_secs != None) and (self.departure_secs != None) and | |
(self.departure_secs < self.arrival_secs)): | |
problems.InvalidValue('departure_time', departure_time, | |
'The departure time at this stop (%s) is before ' | |
'the arrival time (%s). This is often caused by ' | |
'problems in the feed exporter\'s time conversion.')
# If the caller passed a valid arrival time but didn't attempt to pass a | |
# departure time complain | |
if (self.arrival_secs != None and | |
self.departure_secs == None and departure_time == None): | |
# self.departure_secs might be None because departure_time was invalid, | |
# so we need to check both | |
problems.MissingValue('departure_time', | |
'arrival_time and departure_time should either ' | |
'both be provided or both be left blank. ' | |
'It\'s OK to set them both to the same value.') | |
# If the caller passed a valid departure time but didn't attempt to pass a | |
# arrival time complain | |
if (self.departure_secs != None and | |
self.arrival_secs == None and arrival_time == None): | |
problems.MissingValue('arrival_time', | |
'arrival_time and departure_time should either ' | |
'both be provided or both be left blank. ' | |
'It\'s OK to set them both to the same value.') | |
if shape_dist_traveled in (None, ""): | |
self.shape_dist_traveled = None | |
else: | |
try: | |
self.shape_dist_traveled = float(shape_dist_traveled) | |
except ValueError: | |
problems.InvalidValue('shape_dist_traveled', shape_dist_traveled) | |
if stop_sequence is not None: | |
self.stop_sequence = stop_sequence | |
def GetFieldValuesTuple(self, trip_id): | |
"""Return a tuple that outputs a row of _FIELD_NAMES. | |
trip must be provided because it is not stored in StopTime. | |
""" | |
result = [] | |
for fn in StopTime._FIELD_NAMES: | |
if fn == 'trip_id': | |
result.append(trip_id) | |
else: | |
result.append(getattr(self, fn) or '')
return tuple(result) | |
def GetSqlValuesTuple(self, trip_id): | |
result = [] | |
for fn in StopTime._SQL_FIELD_NAMES: | |
if fn == 'trip_id': | |
result.append(trip_id) | |
else: | |
# This might append None, which will be inserted into SQLite as NULL | |
result.append(getattr(self, fn)) | |
return tuple(result) | |
def GetTimeSecs(self): | |
"""Return the first of arrival_secs and departure_secs that is not None. | |
If both are None return None.""" | |
if self.arrival_secs != None: | |
return self.arrival_secs | |
elif self.departure_secs != None: | |
return self.departure_secs | |
else: | |
return None | |
def __getattr__(self, name): | |
if name == 'stop_id': | |
return self.stop.stop_id | |
elif name == 'arrival_time': | |
return (self.arrival_secs != None and | |
FormatSecondsSinceMidnight(self.arrival_secs) or '') | |
elif name == 'departure_time': | |
return (self.departure_secs != None and | |
FormatSecondsSinceMidnight(self.departure_secs) or '') | |
elif name == 'shape_dist_traveled': | |
return '' | |
raise AttributeError(name) | |
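# Example: with arrival_secs=29700 and departure_secs=None, reading
# st.arrival_time returns '08:15:00' while st.departure_time returns ''.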
class Trip(GenericGTFSObject): | |
_REQUIRED_FIELD_NAMES = ['route_id', 'service_id', 'trip_id'] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + [ | |
'trip_headsign', 'direction_id', 'block_id', 'shape_id' | |
] | |
_FIELD_NAMES_HEADWAY = ['trip_id', 'start_time', 'end_time', 'headway_secs'] | |
_TABLE_NAME= "trips" | |
def __init__(self, headsign=None, service_period=None, | |
route=None, trip_id=None, field_dict=None): | |
self._schedule = None | |
self._headways = [] # [(start_time, end_time, headway_secs)] | |
if not field_dict: | |
field_dict = {} | |
if headsign is not None: | |
field_dict['trip_headsign'] = headsign | |
if route: | |
field_dict['route_id'] = route.route_id | |
if trip_id is not None: | |
field_dict['trip_id'] = trip_id | |
if service_period is not None: | |
field_dict['service_id'] = service_period.service_id | |
# Earlier versions of transitfeed.py assigned self.service_period here | |
# and allowed the caller to set self.service_id. Schedule.Validate | |
# checked the service_id attribute if it was assigned and changed it to a | |
# service_period attribute. Now only the service_id attribute is used and | |
# it is validated by Trip.Validate. | |
if service_period is not None: | |
# For backwards compatibility | |
self.service_id = service_period.service_id | |
self.__dict__.update(field_dict) | |
def GetFieldValuesTuple(self): | |
return [getattr(self, fn) or '' for fn in Trip._FIELD_NAMES] | |
def AddStopTime(self, stop, problems=None, schedule=None, **kwargs): | |
"""Add a stop to this trip. Stops must be added in the order visited. | |
Args: | |
stop: A Stop object | |
kwargs: remaining keyword args passed to StopTime.__init__ | |
Returns: | |
None | |
""" | |
if problems is None: | |
# TODO: delete this branch when StopTime.__init__ doesn't need a | |
# ProblemReporter | |
problems = default_problem_reporter | |
stoptime = StopTime(problems=problems, stop=stop, **kwargs) | |
self.AddStopTimeObject(stoptime, schedule) | |
def _AddStopTimeObjectUnordered(self, stoptime, schedule): | |
"""Add StopTime object to this trip. | |
The trip isn't checked for duplicate sequence numbers so it must be | |
validated later.""" | |
insert_query = "INSERT INTO stop_times (%s) VALUES (%s);" % (
','.join(StopTime._SQL_FIELD_NAMES),
','.join(['?'] * len(StopTime._SQL_FIELD_NAMES)))
cursor = schedule._connection.cursor()
cursor.execute( | |
insert_query, stoptime.GetSqlValuesTuple(self.trip_id)) | |
def ReplaceStopTimeObject(self, stoptime, schedule=None): | |
"""Replace a StopTime object from this trip with the given one. | |
Keys the StopTime object to be replaced by trip_id, stop_sequence | |
and stop_id as 'stoptime', with the object 'stoptime'. | |
""" | |
if schedule is None: | |
schedule = self._schedule | |
new_secs = stoptime.GetTimeSecs() | |
cursor = schedule._connection.cursor() | |
cursor.execute("DELETE FROM stop_times WHERE trip_id=? and " | |
"stop_sequence=? and stop_id=?", | |
(self.trip_id, stoptime.stop_sequence, stoptime.stop_id)) | |
if cursor.rowcount == 0: | |
raise Error(
'Attempted replacement of StopTime object which does not exist')
self._AddStopTimeObjectUnordered(stoptime, schedule) | |
def AddStopTimeObject(self, stoptime, schedule=None, problems=None): | |
"""Add a StopTime object to the end of this trip. | |
Args: | |
stoptime: A StopTime object. Should not be reused in multiple trips. | |
schedule: Schedule object containing this trip which must be | |
passed to Trip.__init__ or here | |
problems: ProblemReporter object for validating the StopTime in its new | |
home | |
Returns: | |
None | |
""" | |
if schedule is None: | |
schedule = self._schedule | |
if schedule is None: | |
warnings.warn("No longer supported. _schedule attribute is used to get " | |
"stop_times table", DeprecationWarning) | |
if problems is None: | |
problems = schedule.problem_reporter | |
new_secs = stoptime.GetTimeSecs() | |
cursor = schedule._connection.cursor() | |
cursor.execute("SELECT max(stop_sequence), max(arrival_secs), " | |
"max(departure_secs) FROM stop_times WHERE trip_id=?", | |
(self.trip_id,)) | |
row = cursor.fetchone() | |
if row[0] is None: | |
# This is the first stop_time of the trip | |
stoptime.stop_sequence = 1 | |
if new_secs == None: | |
problems.OtherProblem( | |
'No time for first StopTime of trip_id "%s"' % (self.trip_id,)) | |
else: | |
stoptime.stop_sequence = row[0] + 1 | |
prev_secs = max(row[1], row[2]) | |
if new_secs != None and new_secs < prev_secs: | |
problems.OtherProblem( | |
'out of order stop time for stop_id=%s trip_id=%s %s < %s' % | |
(EncodeUnicode(stoptime.stop_id), EncodeUnicode(self.trip_id), | |
FormatSecondsSinceMidnight(new_secs), | |
FormatSecondsSinceMidnight(prev_secs))) | |
self._AddStopTimeObjectUnordered(stoptime, schedule) | |
def GetTimeStops(self): | |
"""Return a list of (arrival_secs, departure_secs, stop) tuples. | |
Caution: arrival_secs and departure_secs may be 0, a false value meaning a
stop at midnight, or None, a false value meaning the stop is untimed."""
return [(st.arrival_secs, st.departure_secs, st.stop) for st in | |
self.GetStopTimes()] | |
def GetCountStopTimes(self): | |
"""Return the number of stops made by this trip.""" | |
cursor = self._schedule._connection.cursor() | |
cursor.execute( | |
'SELECT count(*) FROM stop_times WHERE trip_id=?', (self.trip_id,)) | |
return cursor.fetchone()[0] | |
def GetTimeInterpolatedStops(self): | |
"""Return a list of (secs, stoptime, is_timepoint) tuples. | |
secs will always be an int. If the StopTime object does not have explicit
times this method guesses using distance. stoptime is a StopTime object and | |
is_timepoint is a bool. | |
Raises: | |
ValueError if this trip does not have the times needed to interpolate | |
""" | |
rv = [] | |
stoptimes = self.GetStopTimes() | |
# If there are no stoptimes [] is the correct return value but if the start | |
# or end are missing times there is no correct return value. | |
if not stoptimes: | |
return [] | |
if (stoptimes[0].GetTimeSecs() is None or | |
stoptimes[-1].GetTimeSecs() is None): | |
raise ValueError("%s must have time at first and last stop" % (self)) | |
cur_timepoint = None | |
next_timepoint = None | |
distance_between_timepoints = 0 | |
distance_traveled_between_timepoints = 0 | |
for i, st in enumerate(stoptimes): | |
if st.GetTimeSecs() != None: | |
cur_timepoint = st | |
distance_between_timepoints = 0 | |
distance_traveled_between_timepoints = 0 | |
if i + 1 < len(stoptimes): | |
k = i + 1 | |
distance_between_timepoints += ApproximateDistanceBetweenStops(
stoptimes[k-1].stop, stoptimes[k].stop)
while stoptimes[k].GetTimeSecs() == None:
k += 1
distance_between_timepoints += ApproximateDistanceBetweenStops(
stoptimes[k-1].stop, stoptimes[k].stop)
next_timepoint = stoptimes[k]
rv.append((st.GetTimeSecs(), st, True))
else:
distance_traveled_between_timepoints += ApproximateDistanceBetweenStops(
stoptimes[i-1].stop, st.stop)
distance_percent = (distance_traveled_between_timepoints /
distance_between_timepoints)
total_time = next_timepoint.GetTimeSecs() - cur_timepoint.GetTimeSecs()
time_estimate = (distance_percent * total_time +
cur_timepoint.GetTimeSecs())
rv.append((int(round(time_estimate)), st, False))
return rv | |
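# Interpolation sketch: an untimed stop between timepoints A and B gets
# A.time + (dist(A..stop) / dist(A..B)) * (B.time - A.time), with distances
# from ApproximateDistanceBetweenStops. E.g. a stop 3 km along a 4 km gap
# spanning 08:00:00 to 08:20:00 is estimated at 08:15:00.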
def ClearStopTimes(self): | |
"""Remove all stop times from this trip. | |
StopTime objects previously returned by GetStopTimes are unchanged but are | |
no longer associated with this trip. | |
""" | |
cursor = self._schedule._connection.cursor() | |
cursor.execute('DELETE FROM stop_times WHERE trip_id=?', (self.trip_id,)) | |
def GetStopTimes(self, problems=None): | |
"""Return a sorted list of StopTime objects for this trip.""" | |
# In theory problems=None should be safe because data from database has been | |
# validated. See comment in _LoadStopTimes for why this isn't always true. | |
cursor = self._schedule._connection.cursor() | |
cursor.execute( | |
'SELECT arrival_secs,departure_secs,stop_headsign,pickup_type,' | |
'drop_off_type,shape_dist_traveled,stop_id,stop_sequence FROM ' | |
'stop_times WHERE ' | |
'trip_id=? ORDER BY stop_sequence', (self.trip_id,)) | |
stop_times = [] | |
for row in cursor.fetchall(): | |
stop = self._schedule.GetStop(row[6]) | |
stop_times.append(StopTime(problems=problems, stop=stop, arrival_secs=row[0], | |
departure_secs=row[1], | |
stop_headsign=row[2], | |
pickup_type=row[3], | |
drop_off_type=row[4], | |
shape_dist_traveled=row[5], | |
stop_sequence=row[7])) | |
return stop_times | |
def GetHeadwayStopTimes(self, problems=None): | |
"""Return a list of StopTime objects for each headway-based run. | |
Returns: | |
a list of lists of StopTime objects. Each list of StopTime objects
represents one run. If this trip doesn't have headways returns an empty | |
list. | |
""" | |
stoptimes_list = [] # list of stoptime lists to be returned | |
stoptime_pattern = self.GetStopTimes()
if not stoptime_pattern:
return []
first_secs = stoptime_pattern[0].arrival_secs # first time of the trip
# for each start time of a headway run | |
for run_secs in self.GetHeadwayStartTimes(): | |
# stop time list for a headway run | |
stoptimes = [] | |
# go through the pattern and generate stoptimes | |
for st in stoptime_pattern: | |
# Default to untimed if this stoptime is not a timepoint.
arrival_secs, departure_secs = None, None
if st.arrival_secs != None: | |
arrival_secs = st.arrival_secs - first_secs + run_secs | |
if st.departure_secs != None: | |
departure_secs = st.departure_secs - first_secs + run_secs | |
# append stoptime | |
stoptimes.append(StopTime(problems=problems, stop=st.stop, | |
arrival_secs=arrival_secs, | |
departure_secs=departure_secs, | |
stop_headsign=st.stop_headsign, | |
pickup_type=st.pickup_type, | |
drop_off_type=st.drop_off_type, | |
shape_dist_traveled=st.shape_dist_traveled, | |
stop_sequence=st.stop_sequence)) | |
# add stoptimes to the stoptimes_list | |
stoptimes_list.append(stoptimes)
return stoptimes_list | |
def GetStartTime(self, problems=default_problem_reporter): | |
"""Return the first time of the trip. TODO: For trips defined by frequency | |
return the first time of the first trip.""" | |
cursor = self._schedule._connection.cursor() | |
cursor.execute( | |
'SELECT arrival_secs,departure_secs FROM stop_times WHERE ' | |
'trip_id=? ORDER BY stop_sequence LIMIT 1', (self.trip_id,)) | |
(arrival_secs, departure_secs) = cursor.fetchone() | |
if arrival_secs != None: | |
return arrival_secs | |
elif departure_secs != None: | |
return departure_secs | |
else: | |
problems.InvalidValue('departure_time', '', | |
'The first stop_time in trip %s is missing ' | |
'times.' % self.trip_id) | |
def GetHeadwayStartTimes(self): | |
"""Return a list of start time for each headway-based run. | |
Returns: | |
a sorted list of seconds since midnight, the start time of each run. If | |
this trip doesn't have headways returns an empty list.""" | |
start_times = [] | |
# for each headway period of the trip | |
for start_secs, end_secs, headway_secs in self.GetHeadwayPeriodTuples(): | |
# reset run secs to the start of the timeframe | |
run_secs = start_secs | |
while run_secs < end_secs: | |
start_times.append(run_secs) | |
# increment current run secs by headway secs | |
run_secs += headway_secs | |
return start_times | |
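# Example: a headway period running 07:00:00-08:00:00 with headway_secs of
# 1200 yields start times [25200, 26400, 27600] (07:00, 07:20 and 07:40);
# 08:00:00 itself is excluded because the loop stops once run_secs reaches
# end_secs.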
def GetEndTime(self, problems=default_problem_reporter): | |
"""Return the last time of the trip. TODO: For trips defined by frequency | |
return the last time of the last trip.""" | |
cursor = self._schedule._connection.cursor() | |
cursor.execute( | |
'SELECT arrival_secs,departure_secs FROM stop_times WHERE ' | |
'trip_id=? ORDER BY stop_sequence DESC LIMIT 1', (self.trip_id,)) | |
(arrival_secs, departure_secs) = cursor.fetchone() | |
if departure_secs != None: | |
return departure_secs | |
elif arrival_secs != None: | |
return arrival_secs | |
else: | |
problems.InvalidValue('arrival_time', '', | |
'The last stop_time in trip %s is missing ' | |
'times.' % self.trip_id) | |
def _GenerateStopTimesTuples(self): | |
"""Generator for rows of the stop_times file""" | |
stoptimes = self.GetStopTimes()
for st in stoptimes:
yield st.GetFieldValuesTuple(self.trip_id)
def GetStopTimesTuples(self): | |
results = [] | |
for time_tuple in self._GenerateStopTimesTuples(): | |
results.append(time_tuple) | |
return results | |
def GetPattern(self): | |
"""Return a tuple of Stop objects, in the order visited""" | |
stoptimes = self.GetStopTimes() | |
return tuple(st.stop for st in stoptimes) | |
def AddHeadwayPeriod(self, start_time, end_time, headway_secs, | |
problem_reporter=default_problem_reporter): | |
"""Adds a period to this trip during which the vehicle travels | |
at regular intervals (rather than specifying exact times for each stop). | |
Args: | |
start_time: The time at which this headway period starts, either in | |
numerical seconds since midnight or as "HH:MM:SS" since midnight. | |
end_time: The time at which this headway period ends, either in | |
numerical seconds since midnight or as "HH:MM:SS" since midnight. | |
This value should be larger than start_time. | |
headway_secs: The amount of time, in seconds, between occurrences of | |
this trip. | |
problem_reporter: Optional parameter that can be used to select | |
how any errors in the other input parameters will be reported. | |
Returns: | |
None | |
""" | |
if start_time is None or start_time == '': # 0 is OK | |
problem_reporter.MissingValue('start_time') | |
return | |
if isinstance(start_time, basestring): | |
try: | |
start_time = TimeToSecondsSinceMidnight(start_time) | |
except Error: | |
problem_reporter.InvalidValue('start_time', start_time) | |
return | |
elif start_time < 0: | |
problem_reporter.InvalidValue('start_time', start_time) | |
if end_time is None or end_time == '': | |
problem_reporter.MissingValue('end_time') | |
return | |
if isinstance(end_time, basestring): | |
try: | |
end_time = TimeToSecondsSinceMidnight(end_time) | |
except Error: | |
problem_reporter.InvalidValue('end_time', end_time) | |
return | |
elif end_time < 0: | |
problem_reporter.InvalidValue('end_time', end_time) | |
return | |
if not headway_secs: | |
problem_reporter.MissingValue('headway_secs') | |
return | |
try: | |
headway_secs = int(headway_secs) | |
except ValueError: | |
problem_reporter.InvalidValue('headway_secs', headway_secs) | |
return | |
if headway_secs <= 0: | |
problem_reporter.InvalidValue('headway_secs', headway_secs) | |
return | |
if end_time <= start_time: | |
problem_reporter.InvalidValue('end_time', end_time, | |
'should be greater than start_time') | |
self._headways.append((start_time, end_time, headway_secs)) | |
def ClearHeadwayPeriods(self): | |
self._headways = [] | |
def _HeadwayOutputTuple(self, headway): | |
return (self.trip_id, | |
FormatSecondsSinceMidnight(headway[0]), | |
FormatSecondsSinceMidnight(headway[1]), | |
unicode(headway[2])) | |
def GetHeadwayPeriodOutputTuples(self): | |
tuples = [] | |
for headway in self._headways: | |
tuples.append(self._HeadwayOutputTuple(headway)) | |
return tuples | |
def GetHeadwayPeriodTuples(self): | |
return self._headways | |
def __getattr__(self, name): | |
if name == 'service_period': | |
assert self._schedule, "Must be in a schedule to get service_period" | |
return self._schedule.GetServicePeriod(self.service_id) | |
elif name == 'pattern_id': | |
if '_pattern_id' not in self.__dict__: | |
self.__dict__['_pattern_id'] = hash(self.GetPattern()) | |
return self.__dict__['_pattern_id'] | |
else: | |
return GenericGTFSObject.__getattr__(self, name) | |
def Validate(self, problems, validate_children=True): | |
"""Validate attributes of this object. | |
Check that this object has all required values set to a valid value without | |
reference to the rest of the schedule. If the _schedule attribute is set | |
then check that references such as route_id and service_id are correct. | |
Args: | |
problems: A ProblemReporter object | |
validate_children: if True and the _schedule attribute is set then call | |
ValidateChildren | |
""" | |
if IsEmpty(self.route_id): | |
problems.MissingValue('route_id') | |
if 'service_period' in self.__dict__: | |
# Some tests assign to the service_period attribute. Patch up self before | |
# proceeding with validation. See also comment in Trip.__init__. | |
self.service_id = self.__dict__['service_period'].service_id | |
del self.service_period | |
if IsEmpty(self.service_id): | |
problems.MissingValue('service_id') | |
if IsEmpty(self.trip_id): | |
problems.MissingValue('trip_id') | |
if hasattr(self, 'direction_id') and (not IsEmpty(self.direction_id)) and \ | |
(self.direction_id != '0') and (self.direction_id != '1'): | |
problems.InvalidValue('direction_id', self.direction_id, | |
'direction_id must be "0" or "1"') | |
if self._schedule: | |
if self.shape_id and self.shape_id not in self._schedule._shapes: | |
problems.InvalidValue('shape_id', self.shape_id) | |
if self.route_id and self.route_id not in self._schedule.routes: | |
problems.InvalidValue('route_id', self.route_id) | |
if (self.service_id and | |
self.service_id not in self._schedule.service_periods): | |
problems.InvalidValue('service_id', self.service_id) | |
if validate_children: | |
self.ValidateChildren(problems) | |
def ValidateChildren(self, problems): | |
"""Validate StopTimes and headways of this trip.""" | |
assert self._schedule, "Trip must be in a schedule to ValidateChildren" | |
# TODO: validate distance values in stop times (if applicable) | |
cursor = self._schedule._connection.cursor() | |
cursor.execute("SELECT COUNT(stop_sequence) AS a FROM stop_times " | |
"WHERE trip_id=? GROUP BY stop_sequence HAVING a > 1", | |
(self.trip_id,)) | |
for row in cursor: | |
problems.InvalidValue('stop_sequence', row[0], | |
'Duplicate stop_sequence in trip_id %s' % | |
self.trip_id) | |
stoptimes = self.GetStopTimes(problems) | |
if stoptimes: | |
if stoptimes[0].arrival_time is None and stoptimes[0].departure_time is None: | |
problems.OtherProblem( | |
'No time for start of trip_id "%s"' % (self.trip_id)) | |
if stoptimes[-1].arrival_time is None and stoptimes[-1].departure_time is None: | |
problems.OtherProblem( | |
'No time for end of trip_id "%s"' % (self.trip_id)) | |
# Sorts the stoptimes by sequence and then checks that the arrival time | |
# for each time point is after the departure time of the previous. | |
stoptimes.sort(key=lambda x: x.stop_sequence) | |
prev_departure = 0 | |
prev_stop = None | |
prev_distance = None | |
try: | |
route_type = self._schedule.GetRoute(self.route_id).route_type | |
max_speed = Route._ROUTE_TYPES[route_type]['max_speed'] | |
except KeyError: | |
# If route_type cannot be found, assume it is 0 (Tram) for checking | |
# speeds between stops. | |
max_speed = Route._ROUTE_TYPES[0]['max_speed'] | |
for timepoint in stoptimes: | |
# shape_dist_traveled should be a nonnegative float; in Python 2 any | |
# number compares greater than None, so the first valid distance passes | |
# the comparison against a prev_distance of None below. | |
distance = timepoint.shape_dist_traveled | |
if distance is not None: | |
if distance > prev_distance and distance >= 0: | |
prev_distance = distance | |
else: | |
if distance == prev_distance: | |
type = TYPE_WARNING | |
else: | |
type = TYPE_ERROR | |
problems.InvalidValue('stoptimes.shape_dist_traveled', distance, | |
'For the trip %s the stop %s has shape_dist_traveled=%s, ' | |
'which should be larger than the previous ones. In this ' | |
'case, the previous distance was %s.' % | |
(self.trip_id, timepoint.stop_id, distance, prev_distance), | |
type=type) | |
if timepoint.arrival_secs is not None: | |
self._CheckSpeed(prev_stop, timepoint.stop, prev_departure, | |
timepoint.arrival_secs, max_speed, problems) | |
if timepoint.arrival_secs >= prev_departure: | |
prev_departure = timepoint.departure_secs | |
prev_stop = timepoint.stop | |
else: | |
problems.OtherProblem('Timetravel detected! Arrival time ' | |
'is before previous departure ' | |
'at sequence number %s in trip %s' % | |
(timepoint.stop_sequence, self.trip_id)) | |
if self.shape_id and self.shape_id in self._schedule._shapes: | |
shape = self._schedule.GetShape(self.shape_id) | |
max_shape_dist = shape.max_distance | |
st = stoptimes[-1] | |
if (st.shape_dist_traveled and | |
st.shape_dist_traveled > max_shape_dist): | |
problems.OtherProblem( | |
'In stop_times.txt, the stop with trip_id=%s and ' | |
'stop_sequence=%d has shape_dist_traveled=%f, which is larger ' | |
'than the max shape_dist_traveled=%f of the corresponding ' | |
'shape (shape_id=%s)' % | |
(self.trip_id, st.stop_sequence, st.shape_dist_traveled, | |
max_shape_dist, self.shape_id), type=TYPE_WARNING) | |
# shape_dist_traveled is valid in shape if max_shape_dist larger than | |
# 0. | |
if max_shape_dist > 0: | |
for st in stoptimes: | |
if st.shape_dist_traveled is None: | |
continue | |
pt = shape.GetPointWithDistanceTraveled(st.shape_dist_traveled) | |
if pt: | |
stop = self._schedule.GetStop(st.stop_id) | |
distance = ApproximateDistance(stop.stop_lat, stop.stop_lon, | |
pt[0], pt[1]) | |
if distance > MAX_DISTANCE_FROM_STOP_TO_SHAPE: | |
problems.StopTooFarFromShapeWithDistTraveled( | |
self.trip_id, stop.stop_name, stop.stop_id, pt[2], | |
self.shape_id, distance, MAX_DISTANCE_FROM_STOP_TO_SHAPE) | |
# O(n^2), but we don't anticipate many headway periods per trip | |
for headway_index, headway in enumerate(self._headways[0:-1]): | |
for other in self._headways[headway_index + 1:]: | |
if (other[0] < headway[1]) and (other[1] > headway[0]): | |
problems.OtherProblem('Trip contains overlapping headway periods ' | |
'%s and %s' % | |
(self._HeadwayOutputTuple(headway), | |
self._HeadwayOutputTuple(other))) | |
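The test in the nested loop above is the standard overlap check for half-open intervals. As a standalone sketch (the helper name is hypothetical):

```python
def periods_overlap(a, b):
    # Two (start, end) intervals overlap exactly when each one starts
    # before the other ends; intervals that merely touch (one ends where
    # the other starts) do not count as overlapping.
    return a[0] < b[1] and b[0] < a[1]
```

This is why the validation compares `other[0] < headway[1]` and `other[1] > headway[0]` rather than checking containment of endpoints.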
def _CheckSpeed(self, prev_stop, next_stop, depart_time, | |
arrive_time, max_speed, problems): | |
# Checks that the speed between two stops is not faster than max_speed | |
if prev_stop is not None: | |
try: | |
time_between_stops = arrive_time - depart_time | |
except TypeError: | |
return | |
try: | |
dist_between_stops = \ | |
ApproximateDistanceBetweenStops(next_stop, prev_stop) | |
except TypeError: | |
return | |
if time_between_stops == 0: | |
# HASTUS makes it hard to output GTFS with times to the nearest second; | |
# it rounds times to the nearest minute. Therefore stop_times at the | |
# same time ending in :00 are fairly common. Such times, off by no more | |
# than 30 seconds, have not caused a problem. See | |
# http://code.google.com/p/googletransitdatafeed/issues/detail?id=193 | |
# Show a warning if times are not rounded to the nearest minute or the | |
# distance implies more than max_speed over one minute. | |
if depart_time % 60 != 0 or dist_between_stops / 1000 * 60 > max_speed: | |
problems.TooFastTravel(self.trip_id, | |
prev_stop.stop_name, | |
next_stop.stop_name, | |
dist_between_stops, | |
time_between_stops, | |
speed=None, | |
type=TYPE_WARNING) | |
return | |
# This needs floating point division for precision. | |
speed_between_stops = ((float(dist_between_stops) / 1000) / | |
(float(time_between_stops) / 3600)) | |
if speed_between_stops > max_speed: | |
problems.TooFastTravel(self.trip_id, | |
prev_stop.stop_name, | |
next_stop.stop_name, | |
dist_between_stops, | |
time_between_stops, | |
speed_between_stops, | |
type=TYPE_WARNING) | |
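The speed computation above is just meters and seconds converted to km/h; a minimal sketch with a hypothetical helper name:

```python
def speed_kph(dist_meters, time_secs):
    # Convert meters to kilometers and seconds to hours before dividing.
    # The float() casts mirror the Python 2 code above, where integer
    # operands would otherwise truncate the division.
    return (float(dist_meters) / 1000) / (float(time_secs) / 3600)
```

For example, 5000 m covered in 600 s works out to 30 km/h, which would trip the warning for route types whose max_speed is lower.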
# TODO: move these into a separate file | |
class ISO4217(object): | |
"""Represents the set of currencies recognized by the ISO-4217 spec.""" | |
codes = { # map of alpha code to numerical code | |
'AED': 784, 'AFN': 971, 'ALL': 8, 'AMD': 51, 'ANG': 532, 'AOA': 973, | |
'ARS': 32, 'AUD': 36, 'AWG': 533, 'AZN': 944, 'BAM': 977, 'BBD': 52, | |
'BDT': 50, 'BGN': 975, 'BHD': 48, 'BIF': 108, 'BMD': 60, 'BND': 96, | |
'BOB': 68, 'BOV': 984, 'BRL': 986, 'BSD': 44, 'BTN': 64, 'BWP': 72, | |
'BYR': 974, 'BZD': 84, 'CAD': 124, 'CDF': 976, 'CHE': 947, 'CHF': 756, | |
'CHW': 948, 'CLF': 990, 'CLP': 152, 'CNY': 156, 'COP': 170, 'COU': 970, | |
'CRC': 188, 'CUP': 192, 'CVE': 132, 'CYP': 196, 'CZK': 203, 'DJF': 262, | |
'DKK': 208, 'DOP': 214, 'DZD': 12, 'EEK': 233, 'EGP': 818, 'ERN': 232, | |
'ETB': 230, 'EUR': 978, 'FJD': 242, 'FKP': 238, 'GBP': 826, 'GEL': 981, | |
'GHC': 288, 'GIP': 292, 'GMD': 270, 'GNF': 324, 'GTQ': 320, 'GYD': 328, | |
'HKD': 344, 'HNL': 340, 'HRK': 191, 'HTG': 332, 'HUF': 348, 'IDR': 360, | |
'ILS': 376, 'INR': 356, 'IQD': 368, 'IRR': 364, 'ISK': 352, 'JMD': 388, | |
'JOD': 400, 'JPY': 392, 'KES': 404, 'KGS': 417, 'KHR': 116, 'KMF': 174, | |
'KPW': 408, 'KRW': 410, 'KWD': 414, 'KYD': 136, 'KZT': 398, 'LAK': 418, | |
'LBP': 422, 'LKR': 144, 'LRD': 430, 'LSL': 426, 'LTL': 440, 'LVL': 428, | |
'LYD': 434, 'MAD': 504, 'MDL': 498, 'MGA': 969, 'MKD': 807, 'MMK': 104, | |
'MNT': 496, 'MOP': 446, 'MRO': 478, 'MTL': 470, 'MUR': 480, 'MVR': 462, | |
'MWK': 454, 'MXN': 484, 'MXV': 979, 'MYR': 458, 'MZN': 943, 'NAD': 516, | |
'NGN': 566, 'NIO': 558, 'NOK': 578, 'NPR': 524, 'NZD': 554, 'OMR': 512, | |
'PAB': 590, 'PEN': 604, 'PGK': 598, 'PHP': 608, 'PKR': 586, 'PLN': 985, | |
'PYG': 600, 'QAR': 634, 'ROL': 642, 'RON': 946, 'RSD': 941, 'RUB': 643, | |
'RWF': 646, 'SAR': 682, 'SBD': 90, 'SCR': 690, 'SDD': 736, 'SDG': 938, | |
'SEK': 752, 'SGD': 702, 'SHP': 654, 'SKK': 703, 'SLL': 694, 'SOS': 706, | |
'SRD': 968, 'STD': 678, 'SYP': 760, 'SZL': 748, 'THB': 764, 'TJS': 972, | |
'TMM': 795, 'TND': 788, 'TOP': 776, 'TRY': 949, 'TTD': 780, 'TWD': 901, | |
'TZS': 834, 'UAH': 980, 'UGX': 800, 'USD': 840, 'USN': 997, 'USS': 998, | |
'UYU': 858, 'UZS': 860, 'VEB': 862, 'VND': 704, 'VUV': 548, 'WST': 882, | |
'XAF': 950, 'XAG': 961, 'XAU': 959, 'XBA': 955, 'XBB': 956, 'XBC': 957, | |
'XBD': 958, 'XCD': 951, 'XDR': 960, 'XFO': None, 'XFU': None, 'XOF': 952, | |
'XPD': 964, 'XPF': 953, 'XPT': 962, 'XTS': 963, 'XXX': 999, 'YER': 886, | |
'ZAR': 710, 'ZMK': 894, 'ZWD': 716, | |
} | |
class Fare(object): | |
"""Represents a fare type.""" | |
_REQUIRED_FIELD_NAMES = ['fare_id', 'price', 'currency_type', | |
'payment_method', 'transfers'] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + ['transfer_duration'] | |
def __init__(self, | |
fare_id=None, price=None, currency_type=None, | |
payment_method=None, transfers=None, transfer_duration=None, | |
field_list=None): | |
self.rules = [] | |
(self.fare_id, self.price, self.currency_type, self.payment_method, | |
self.transfers, self.transfer_duration) = \ | |
(fare_id, price, currency_type, payment_method, | |
transfers, transfer_duration) | |
if field_list: | |
(self.fare_id, self.price, self.currency_type, self.payment_method, | |
self.transfers, self.transfer_duration) = field_list | |
try: | |
self.price = float(self.price) | |
except (TypeError, ValueError): | |
pass | |
try: | |
self.payment_method = int(self.payment_method) | |
except (TypeError, ValueError): | |
pass | |
if self.transfers is None or self.transfers == "": | |
self.transfers = None | |
else: | |
try: | |
self.transfers = int(self.transfers) | |
except (TypeError, ValueError): | |
pass | |
if self.transfer_duration is None or self.transfer_duration == "": | |
self.transfer_duration = None | |
else: | |
try: | |
self.transfer_duration = int(self.transfer_duration) | |
except (TypeError, ValueError): | |
pass | |
def GetFareRuleList(self): | |
return self.rules | |
def ClearFareRules(self): | |
self.rules = [] | |
def GetFieldValuesTuple(self): | |
return [getattr(self, fn) for fn in Fare._FIELD_NAMES] | |
def __getitem__(self, name): | |
return getattr(self, name) | |
def __eq__(self, other): | |
if not other: | |
return False | |
if id(self) == id(other): | |
return True | |
if self.GetFieldValuesTuple() != other.GetFieldValuesTuple(): | |
return False | |
self_rules = [r.GetFieldValuesTuple() for r in self.GetFareRuleList()] | |
self_rules.sort() | |
other_rules = [r.GetFieldValuesTuple() for r in other.GetFareRuleList()] | |
other_rules.sort() | |
return self_rules == other_rules | |
def __ne__(self, other): | |
return not self.__eq__(other) | |
def Validate(self, problems=default_problem_reporter): | |
if IsEmpty(self.fare_id): | |
problems.MissingValue("fare_id") | |
if self.price is None: | |
problems.MissingValue("price") | |
elif not isinstance(self.price, float) and not isinstance(self.price, int): | |
problems.InvalidValue("price", self.price) | |
elif self.price < 0: | |
problems.InvalidValue("price", self.price) | |
if IsEmpty(self.currency_type): | |
problems.MissingValue("currency_type") | |
elif self.currency_type not in ISO4217.codes: | |
problems.InvalidValue("currency_type", self.currency_type) | |
if self.payment_method == "" or self.payment_method is None: | |
problems.MissingValue("payment_method") | |
elif (not isinstance(self.payment_method, int) or | |
self.payment_method not in range(0, 2)): | |
problems.InvalidValue("payment_method", self.payment_method) | |
if not ((self.transfers is None) or | |
(isinstance(self.transfers, int) and | |
self.transfers in range(0, 3))): | |
problems.InvalidValue("transfers", self.transfers) | |
if ((self.transfer_duration is not None) and | |
not isinstance(self.transfer_duration, int)): | |
problems.InvalidValue("transfer_duration", self.transfer_duration) | |
if self.transfer_duration and (self.transfer_duration < 0): | |
problems.InvalidValue("transfer_duration", self.transfer_duration) | |
if (self.transfer_duration and (self.transfer_duration > 0) and | |
self.transfers == 0): | |
problems.InvalidValue("transfer_duration", self.transfer_duration, | |
"can't have a nonzero transfer_duration for " | |
"a fare that doesn't allow transfers!") | |
class FareRule(object): | |
"""This class represents a rule that determines which itineraries a | |
fare rule applies to.""" | |
_REQUIRED_FIELD_NAMES = ['fare_id'] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + ['route_id', | |
'origin_id', 'destination_id', | |
'contains_id'] | |
def __init__(self, fare_id=None, route_id=None, | |
origin_id=None, destination_id=None, contains_id=None, | |
field_list=None): | |
(self.fare_id, self.route_id, self.origin_id, self.destination_id, | |
self.contains_id) = \ | |
(fare_id, route_id, origin_id, destination_id, contains_id) | |
if field_list: | |
(self.fare_id, self.route_id, self.origin_id, self.destination_id, | |
self.contains_id) = field_list | |
# canonicalize non-content values as None | |
if not self.route_id: | |
self.route_id = None | |
if not self.origin_id: | |
self.origin_id = None | |
if not self.destination_id: | |
self.destination_id = None | |
if not self.contains_id: | |
self.contains_id = None | |
def GetFieldValuesTuple(self): | |
return [getattr(self, fn) for fn in FareRule._FIELD_NAMES] | |
def __getitem__(self, name): | |
return getattr(self, name) | |
def __eq__(self, other): | |
if not other: | |
return False | |
if id(self) == id(other): | |
return True | |
return self.GetFieldValuesTuple() == other.GetFieldValuesTuple() | |
def __ne__(self, other): | |
return not self.__eq__(other) | |
class Shape(object): | |
"""This class represents a geographic shape that corresponds to the route | |
taken by one or more Trips.""" | |
_REQUIRED_FIELD_NAMES = ['shape_id', 'shape_pt_lat', 'shape_pt_lon', | |
'shape_pt_sequence'] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + ['shape_dist_traveled'] | |
def __init__(self, shape_id): | |
# List of shape point tuples (lat, lng, shape_dist_traveled), where lat | |
# and lng are the location of the shape point, and shape_dist_traveled is | |
# an increasing metric representing the distance traveled along the shape. | |
self.points = [] | |
# An ID that uniquely identifies a shape in the dataset. | |
self.shape_id = shape_id | |
# The max shape_dist_traveled of shape points in this shape. | |
self.max_distance = 0 | |
# List of shape_dist_traveled of each shape point. | |
self.distance = [] | |
def AddPoint(self, lat, lon, distance=None, | |
problems=default_problem_reporter): | |
try: | |
lat = float(lat) | |
if abs(lat) > 90.0: | |
problems.InvalidValue('shape_pt_lat', lat) | |
return | |
except (TypeError, ValueError): | |
problems.InvalidValue('shape_pt_lat', lat) | |
return | |
try: | |
lon = float(lon) | |
if abs(lon) > 180.0: | |
problems.InvalidValue('shape_pt_lon', lon) | |
return | |
except (TypeError, ValueError): | |
problems.InvalidValue('shape_pt_lon', lon) | |
return | |
if (abs(lat) < 1.0) and (abs(lon) < 1.0): | |
problems.InvalidValue('shape_pt_lat', lat, | |
'Point location too close to 0, 0, which means ' | |
'that it\'s probably an incorrect location.', | |
type=TYPE_WARNING) | |
return | |
if distance == '': # canonicalizing empty string to None for comparison | |
distance = None | |
if distance is not None: | |
try: | |
distance = float(distance) | |
if (distance < self.max_distance and not | |
(len(self.points) == 0 and distance == 0)): # first one can be 0 | |
problems.InvalidValue('shape_dist_traveled', distance, | |
'Each subsequent point in a shape should ' | |
'have a distance value that\'s at least as ' | |
'large as the previous ones. In this case, ' | |
'the previous distance was %f.' % | |
self.max_distance) | |
return | |
else: | |
self.max_distance = distance | |
self.distance.append(distance) | |
except (TypeError, ValueError): | |
problems.InvalidValue('shape_dist_traveled', distance, | |
'This value should be a positive number.') | |
return | |
self.points.append((lat, lon, distance)) | |
def ClearPoints(self): | |
self.points = [] | |
def __eq__(self, other): | |
if not other: | |
return False | |
if id(self) == id(other): | |
return True | |
return self.points == other.points | |
def __ne__(self, other): | |
return not self.__eq__(other) | |
def __repr__(self): | |
return "<Shape %s>" % self.__dict__ | |
def Validate(self, problems=default_problem_reporter): | |
if IsEmpty(self.shape_id): | |
problems.MissingValue('shape_id') | |
if not self.points: | |
problems.OtherProblem('The shape with shape_id "%s" contains no points.' % | |
self.shape_id, type=TYPE_WARNING) | |
def GetPointWithDistanceTraveled(self, shape_dist_traveled): | |
"""Returns a point on the shape polyline with the input shape_dist_traveled. | |
Args: | |
shape_dist_traveled: The input shape_dist_traveled. | |
Returns: | |
The shape point as a tuple (lat, lng, shape_dist_traveled), where lat and | |
lng are the location of the shape point, and shape_dist_traveled is an | |
increasing metric representing the distance traveled along the shape. | |
Returns None if there is a data error in the shape. | |
""" | |
if not self.distance: | |
return None | |
if shape_dist_traveled <= self.distance[0]: | |
return self.points[0] | |
if shape_dist_traveled >= self.distance[-1]: | |
return self.points[-1] | |
index = bisect.bisect(self.distance, shape_dist_traveled) | |
(lat0, lng0, dist0) = self.points[index - 1] | |
(lat1, lng1, dist1) = self.points[index] | |
# Interpolate if shape_dist_traveled does not equal the distance of any | |
# point in the shape segment. | |
# (lat0, lng0) (lat, lng) (lat1, lng1) | |
# -----|--------------------|---------------------|------ | |
# dist0 shape_dist_traveled dist1 | |
# \------- ca --------/ \-------- bc -------/ | |
# \----------------- ba ------------------/ | |
ca = shape_dist_traveled - dist0 | |
bc = dist1 - shape_dist_traveled | |
ba = bc + ca | |
if ba == 0: | |
# This only happens when there's a data error in the shape, which should | |
# have been caught earlier. Check to avoid a crash. | |
return None | |
# This won't work crossing longitude 180 and is only an approximation, | |
# which works well for short distances. | |
lat = (lat1 * ca + lat0 * bc) / ba | |
lng = (lng1 * ca + lng0 * bc) / ba | |
return (lat, lng, shape_dist_traveled) | |
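The interpolation can be isolated from the Shape class. This sketch assumes parallel lists of cumulative distances and (lat, lng) points, as AddPoint maintains them; the function name is hypothetical:

```python
import bisect

def interpolate_point(distances, points, target):
    # Clamp to the ends of the polyline, as GetPointWithDistanceTraveled does.
    if target <= distances[0]:
        return points[0]
    if target >= distances[-1]:
        return points[-1]
    index = bisect.bisect(distances, target)
    (lat0, lng0), (lat1, lng1) = points[index - 1], points[index]
    dist0, dist1 = distances[index - 1], distances[index]
    ca = target - dist0   # distance past the earlier point
    bc = dist1 - target   # distance short of the later point
    ba = ca + bc          # length of the whole segment
    # Weight each endpoint by its distance from the opposite end, so the
    # nearer point contributes more to the result.
    lat = (lat1 * ca + lat0 * bc) / ba
    lng = (lng1 * ca + lng0 * bc) / ba
    return (lat, lng)
```

A target halfway between two points yields the midpoint, which is what the stop-to-shape distance check relies on when comparing against MAX_DISTANCE_FROM_STOP_TO_SHAPE.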
class ISO639(object): | |
# Set of all the 2-letter ISO 639-1 language codes. | |
codes_2letter = set([ | |
'aa', 'ab', 'ae', 'af', 'ak', 'am', 'an', 'ar', 'as', 'av', 'ay', 'az', | |
'ba', 'be', 'bg', 'bh', 'bi', 'bm', 'bn', 'bo', 'br', 'bs', 'ca', 'ce', | |
'ch', 'co', 'cr', 'cs', 'cu', 'cv', 'cy', 'da', 'de', 'dv', 'dz', 'ee', | |
'el', 'en', 'eo', 'es', 'et', 'eu', 'fa', 'ff', 'fi', 'fj', 'fo', 'fr', | |
'fy', 'ga', 'gd', 'gl', 'gn', 'gu', 'gv', 'ha', 'he', 'hi', 'ho', 'hr', | |
'ht', 'hu', 'hy', 'hz', 'ia', 'id', 'ie', 'ig', 'ii', 'ik', 'io', 'is', | |
'it', 'iu', 'ja', 'jv', 'ka', 'kg', 'ki', 'kj', 'kk', 'kl', 'km', 'kn', | |
'ko', 'kr', 'ks', 'ku', 'kv', 'kw', 'ky', 'la', 'lb', 'lg', 'li', 'ln', | |
'lo', 'lt', 'lu', 'lv', 'mg', 'mh', 'mi', 'mk', 'ml', 'mn', 'mo', 'mr', | |
'ms', 'mt', 'my', 'na', 'nb', 'nd', 'ne', 'ng', 'nl', 'nn', 'no', 'nr', | |
'nv', 'ny', 'oc', 'oj', 'om', 'or', 'os', 'pa', 'pi', 'pl', 'ps', 'pt', | |
'qu', 'rm', 'rn', 'ro', 'ru', 'rw', 'sa', 'sc', 'sd', 'se', 'sg', 'si', | |
'sk', 'sl', 'sm', 'sn', 'so', 'sq', 'sr', 'ss', 'st', 'su', 'sv', 'sw', | |
'ta', 'te', 'tg', 'th', 'ti', 'tk', 'tl', 'tn', 'to', 'tr', 'ts', 'tt', | |
'tw', 'ty', 'ug', 'uk', 'ur', 'uz', 've', 'vi', 'vo', 'wa', 'wo', 'xh', | |
'yi', 'yo', 'za', 'zh', 'zu', | |
]) | |
class Agency(GenericGTFSObject): | |
"""Represents an agency in a schedule. | |
Callers may assign arbitrary values to instance attributes. __init__ makes no | |
attempt at validating the attributes. Call Validate() to check that | |
attributes are valid and the agency object is consistent with itself. | |
Attributes: | |
All attributes are strings. | |
""" | |
_REQUIRED_FIELD_NAMES = ['agency_name', 'agency_url', 'agency_timezone'] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + ['agency_id', 'agency_lang', | |
'agency_phone'] | |
_TABLE_NAME = 'agency' | |
def __init__(self, name=None, url=None, timezone=None, id=None, | |
field_dict=None, lang=None, **kwargs): | |
"""Initialize a new Agency object. | |
Args: | |
field_dict: A dictionary mapping attribute name to unicode string | |
name: a string, ignored when field_dict is present | |
url: a string, ignored when field_dict is present | |
timezone: a string, ignored when field_dict is present | |
id: a string, ignored when field_dict is present | |
kwargs: arbitrary keyword arguments may be used to add attributes to the | |
new object, ignored when field_dict is present | |
""" | |
self._schedule = None | |
if not field_dict: | |
if name: | |
kwargs['agency_name'] = name | |
if url: | |
kwargs['agency_url'] = url | |
if timezone: | |
kwargs['agency_timezone'] = timezone | |
if id: | |
kwargs['agency_id'] = id | |
if lang: | |
kwargs['agency_lang'] = lang | |
field_dict = kwargs | |
self.__dict__.update(field_dict) | |
def Validate(self, problems=default_problem_reporter): | |
"""Validate attribute values and this object's internal consistency. | |
Returns: | |
True iff all validation checks passed. | |
""" | |
found_problem = False | |
for required in Agency._REQUIRED_FIELD_NAMES: | |
if IsEmpty(getattr(self, required, None)): | |
problems.MissingValue(required) | |
found_problem = True | |
if self.agency_url and not IsValidURL(self.agency_url): | |
problems.InvalidValue('agency_url', self.agency_url) | |
found_problem = True | |
if (not IsEmpty(self.agency_lang) and | |
self.agency_lang.lower() not in ISO639.codes_2letter): | |
problems.InvalidValue('agency_lang', self.agency_lang) | |
found_problem = True | |
try: | |
import pytz | |
if self.agency_timezone not in pytz.common_timezones: | |
problems.InvalidValue( | |
'agency_timezone', | |
self.agency_timezone, | |
'"%s" is not a common timezone name according to pytz version %s' % | |
(self.agency_timezone, pytz.VERSION)) | |
found_problem = True | |
except ImportError: # no pytz | |
print ("Timezone not checked " | |
"(install pytz package for timezone validation)") | |
return not found_problem | |
class Transfer(object): | |
"""Represents a transfer in a schedule""" | |
_REQUIRED_FIELD_NAMES = ['from_stop_id', 'to_stop_id', 'transfer_type'] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + ['min_transfer_time'] | |
def __init__(self, schedule=None, from_stop_id=None, to_stop_id=None, transfer_type=None, | |
min_transfer_time=None, field_dict=None): | |
if schedule is not None: | |
self._schedule = weakref.proxy(schedule) # See weakref comment at top | |
else: | |
self._schedule = None | |
if field_dict: | |
self.__dict__.update(field_dict) | |
else: | |
self.from_stop_id = from_stop_id | |
self.to_stop_id = to_stop_id | |
self.transfer_type = transfer_type | |
self.min_transfer_time = min_transfer_time | |
if getattr(self, 'transfer_type', None) in ("", None): | |
# Use the default, recommended transfer type if the attribute is unset or blank | |
self.transfer_type = 0 | |
else: | |
try: | |
self.transfer_type = NonNegIntStringToInt(self.transfer_type) | |
except (TypeError, ValueError): | |
pass | |
if hasattr(self, 'min_transfer_time'): | |
try: | |
self.min_transfer_time = NonNegIntStringToInt(self.min_transfer_time) | |
except (TypeError, ValueError): | |
pass | |
else: | |
self.min_transfer_time = None | |
def GetFieldValuesTuple(self): | |
return [getattr(self, fn) for fn in Transfer._FIELD_NAMES] | |
def __getitem__(self, name): | |
return getattr(self, name) | |
def __eq__(self, other): | |
if not other: | |
return False | |
if id(self) == id(other): | |
return True | |
return self.GetFieldValuesTuple() == other.GetFieldValuesTuple() | |
def __ne__(self, other): | |
return not self.__eq__(other) | |
def __repr__(self): | |
return "<Transfer %s>" % self.__dict__ | |
def Validate(self, problems=default_problem_reporter): | |
if IsEmpty(self.from_stop_id): | |
problems.MissingValue('from_stop_id') | |
elif self._schedule: | |
if self.from_stop_id not in self._schedule.stops.keys(): | |
problems.InvalidValue('from_stop_id', self.from_stop_id) | |
if IsEmpty(self.to_stop_id): | |
problems.MissingValue('to_stop_id') | |
elif self._schedule: | |
if self.to_stop_id not in self._schedule.stops.keys(): | |
problems.InvalidValue('to_stop_id', self.to_stop_id) | |
if not IsEmpty(self.transfer_type): | |
if (not isinstance(self.transfer_type, int)) or \ | |
(self.transfer_type not in range(0, 4)): | |
problems.InvalidValue('transfer_type', self.transfer_type) | |
if not IsEmpty(self.min_transfer_time): | |
if (not isinstance(self.min_transfer_time, int)) or \ | |
self.min_transfer_time < 0: | |
problems.InvalidValue('min_transfer_time', self.min_transfer_time) | |
class ServicePeriod(object): | |
"""Represents a service, which identifies a set of dates when one or more | |
trips operate.""" | |
_DAYS_OF_WEEK = [ | |
'monday', 'tuesday', 'wednesday', 'thursday', 'friday', | |
'saturday', 'sunday' | |
] | |
_FIELD_NAMES_REQUIRED = [ | |
'service_id', 'start_date', 'end_date' | |
] + _DAYS_OF_WEEK | |
_FIELD_NAMES = _FIELD_NAMES_REQUIRED # no optional fields in this one | |
_FIELD_NAMES_CALENDAR_DATES = ['service_id', 'date', 'exception_type'] | |
def __init__(self, id=None, field_list=None): | |
self.original_day_values = [] | |
if field_list: | |
self.service_id = field_list[self._FIELD_NAMES.index('service_id')] | |
self.day_of_week = [False] * len(self._DAYS_OF_WEEK) | |
for day in self._DAYS_OF_WEEK: | |
value = field_list[self._FIELD_NAMES.index(day)] or '' # can be None | |
self.original_day_values += [value.strip()] | |
self.day_of_week[self._DAYS_OF_WEEK.index(day)] = (value == u'1') | |
self.start_date = field_list[self._FIELD_NAMES.index('start_date')] | |
self.end_date = field_list[self._FIELD_NAMES.index('end_date')] | |
else: | |
self.service_id = id | |
self.day_of_week = [False] * 7 | |
self.start_date = None | |
self.end_date = None | |
self.date_exceptions = {} # Map from 'YYYYMMDD' to 1 (add) or 2 (remove) | |
def _IsValidDate(self, date): | |
if re.match(r'^\d{8}$', date) is None: | |
return False | |
try: | |
time.strptime(date, "%Y%m%d") | |
return True | |
except ValueError: | |
return False | |
def GetDateRange(self): | |
"""Return the range over which this ServicePeriod is valid. | |
The range includes exception dates that add service outside of | |
(start_date, end_date), but doesn't shrink the range if exception | |
dates take away service at the edges of the range. | |
Returns: | |
A tuple of "YYYYMMDD" strings, (start date, end date) or (None, None) if | |
no dates have been given. | |
""" | |
start = self.start_date | |
end = self.end_date | |
for date in self.date_exceptions: | |
if self.date_exceptions[date] == 2: | |
continue | |
if not start or (date < start): | |
start = date | |
if not end or (date > end): | |
end = date | |
if start is None: | |
start = end | |
elif end is None: | |
end = start | |
# If start and end are None we did a little harmless shuffling | |
return (start, end) | |
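GetDateRange's widening rule (only type-1 "service added" exceptions can grow the range; type-2 removals never shrink it) in a standalone sketch; `widen_date_range` is a hypothetical name:

```python
def widen_date_range(start, end, date_exceptions):
    # date_exceptions maps "YYYYMMDD" -> 1 (service added) or 2 (removed).
    for date, exception_type in date_exceptions.items():
        if exception_type == 2:
            continue  # removed service never changes the range
        if not start or date < start:
            start = date
        if not end or date > end:
            end = date
    # If only one endpoint is known, collapse the range to a single day.
    if start is None:
        start = end
    elif end is None:
        end = start
    return (start, end)
```

Because the dates are zero-padded YYYYMMDD strings, plain string comparison orders them chronologically.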
def GetCalendarFieldValuesTuple(self): | |
"""Return the tuple of calendar.txt values or None if this ServicePeriod | |
should not be in calendar.txt .""" | |
if self.start_date and self.end_date: | |
return [getattr(self, fn) for fn in ServicePeriod._FIELD_NAMES] | |
def GenerateCalendarDatesFieldValuesTuples(self): | |
"""Generates tuples of calendar_dates.txt values. Yield zero tuples if | |
this ServicePeriod should not be in calendar_dates.txt .""" | |
for date, exception_type in self.date_exceptions.items(): | |
yield (self.service_id, date, unicode(exception_type)) | |
def GetCalendarDatesFieldValuesTuples(self): | |
"""Return a list of date execeptions""" | |
result = [] | |
for date_tuple in self.GenerateCalendarDatesFieldValuesTuples(): | |
result.append(date_tuple) | |
result.sort() # helps with __eq__ | |
return result | |
def SetDateHasService(self, date, has_service=True, problems=None): | |
if date in self.date_exceptions and problems: | |
problems.DuplicateID(('service_id', 'date'), | |
(self.service_id, date), | |
type=TYPE_WARNING) | |
self.date_exceptions[date] = has_service and 1 or 2 | |
def ResetDateToNormalService(self, date): | |
if date in self.date_exceptions: | |
del self.date_exceptions[date] | |
def SetStartDate(self, start_date): | |
"""Set the first day of service as a string in YYYYMMDD format""" | |
self.start_date = start_date | |
def SetEndDate(self, end_date): | |
"""Set the last day of service as a string in YYYYMMDD format""" | |
self.end_date = end_date | |
def SetDayOfWeekHasService(self, dow, has_service=True): | |
"""Set service as running (or not) on a day of the week. By default the | |
service does not run on any days. | |
Args: | |
dow: 0 for Monday through 6 for Sunday | |
has_service: True if this service operates on dow, False if it does not. | |
Returns: | |
None | |
""" | |
assert(dow >= 0 and dow < 7) | |
self.day_of_week[dow] = has_service | |
def SetWeekdayService(self, has_service=True): | |
"""Set service as running (or not) on all of Monday through Friday.""" | |
for i in range(0, 5): | |
self.SetDayOfWeekHasService(i, has_service) | |
def SetWeekendService(self, has_service=True): | |
"""Set service as running (or not) on Saturday and Sunday.""" | |
self.SetDayOfWeekHasService(5, has_service) | |
self.SetDayOfWeekHasService(6, has_service) | |
def SetServiceId(self, service_id): | |
"""Set the service_id for this schedule. Generally the default will | |
suffice so you won't need to call this method.""" | |
self.service_id = service_id | |
def IsActiveOn(self, date, date_object=None): | |
"""Test if this service period is active on a date. | |
Args: | |
date: a string of form "YYYYMMDD" | |
date_object: a date object representing the same date as date. | |
This parameter is optional, and present only for performance | |
reasons. | |
If the caller constructs the date string from a date object | |
that date object can be passed directly, thus avoiding the | |
costly conversion from string to date object. | |
Returns: | |
True iff this service is active on date. | |
""" | |
if date in self.date_exceptions: | |
if self.date_exceptions[date] == 1: | |
return True | |
else: | |
return False | |
if (self.start_date and self.end_date and self.start_date <= date and | |
date <= self.end_date): | |
if date_object is None: | |
date_object = DateStringToDateObject(date) | |
return self.day_of_week[date_object.weekday()] | |
return False | |
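A version-neutral sketch of the IsActiveOn decision order (an explicit exception wins outright, then the date must fall in the range and the weekday flag must be set); the names here are hypothetical:

```python
import datetime

def is_active_on(date, start_date, end_date, day_of_week, date_exceptions):
    # An explicit exception always wins: 1 adds service, 2 removes it.
    if date in date_exceptions:
        return date_exceptions[date] == 1
    # Otherwise the date must fall in [start_date, end_date] and the
    # weekday flag (0 = Monday ... 6 = Sunday) must be set.
    if start_date and end_date and start_date <= date <= end_date:
        weekday = datetime.datetime.strptime(date, "%Y%m%d").weekday()
        return day_of_week[weekday]
    return False
```

Note that a type-2 exception suppresses service even when the weekday flag is set, and a type-1 exception grants service even outside the date range.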
def ActiveDates(self): | |
"""Return dates this service period is active as a list of "YYYYMMDD".""" | |
(earliest, latest) = self.GetDateRange() | |
if earliest is None: | |
return [] | |
dates = [] | |
date_it = DateStringToDateObject(earliest) | |
date_end = DateStringToDateObject(latest) | |
delta = datetime.timedelta(days=1) | |
while date_it <= date_end: | |
date_it_string = date_it.strftime("%Y%m%d") | |
if self.IsActiveOn(date_it_string, date_it): | |
dates.append(date_it_string) | |
date_it = date_it + delta | |
return dates | |
def __getattr__(self, name): | |
try: | |
# Return 1 if value in day_of_week is True, 0 otherwise | |
return (self.day_of_week[ServicePeriod._DAYS_OF_WEEK.index(name)] | |
and 1 or 0) | |
except KeyError: | |
pass | |
except ValueError: # not a day of the week | |
pass | |
raise AttributeError(name) | |
def __getitem__(self, name): | |
return getattr(self, name) | |
def __eq__(self, other): | |
if not other: | |
return False | |
if id(self) == id(other): | |
return True | |
if (self.GetCalendarFieldValuesTuple() != | |
other.GetCalendarFieldValuesTuple()): | |
return False | |
if (self.GetCalendarDatesFieldValuesTuples() != | |
other.GetCalendarDatesFieldValuesTuples()): | |
return False | |
return True | |
def __ne__(self, other): | |
return not self.__eq__(other) | |
def Validate(self, problems=default_problem_reporter): | |
if IsEmpty(self.service_id): | |
problems.MissingValue('service_id') | |
# self.start_date/self.end_date is None in 3 cases: | |
# ServicePeriod created by loader and | |
# 1a) self.service_id wasn't in calendar.txt | |
# 1b) calendar.txt didn't have a start_date/end_date column | |
# ServicePeriod created directly and | |
# 2) start_date/end_date wasn't set | |
# In case 1a no problem is reported. In case 1b the missing required column | |
# generates an error in _ReadCSV so this method should not report another | |
# problem. There is no way to tell the difference between cases 1b and 2 | |
# so case 2 is ignored because making the feedvalidator pretty is more | |
    # important than perfect validation when an API user makes a mistake.
start_date = None | |
if self.start_date is not None: | |
if IsEmpty(self.start_date): | |
problems.MissingValue('start_date') | |
elif self._IsValidDate(self.start_date): | |
start_date = self.start_date | |
else: | |
problems.InvalidValue('start_date', self.start_date) | |
end_date = None | |
if self.end_date is not None: | |
if IsEmpty(self.end_date): | |
problems.MissingValue('end_date') | |
elif self._IsValidDate(self.end_date): | |
end_date = self.end_date | |
else: | |
problems.InvalidValue('end_date', self.end_date) | |
if start_date and end_date and end_date < start_date: | |
problems.InvalidValue('end_date', end_date, | |
'end_date of %s is earlier than ' | |
'start_date of "%s"' % | |
(end_date, start_date)) | |
    if self.original_day_values:
      for index, value in enumerate(self.original_day_values):
        column_name = self._DAYS_OF_WEEK[index]
        if IsEmpty(value):
          problems.MissingValue(column_name)
        elif (value != u'0') and (value != u'1'):
          problems.InvalidValue(column_name, value)
if (True not in self.day_of_week and | |
1 not in self.date_exceptions.values()): | |
problems.OtherProblem('Service period with service_id "%s" ' | |
'doesn\'t have service on any days ' | |
'of the week.' % self.service_id, | |
type=TYPE_WARNING) | |
for date in self.date_exceptions: | |
if not self._IsValidDate(date): | |
problems.InvalidValue('date', date) | |
class CsvUnicodeWriter: | |
""" | |
Create a wrapper around a csv writer object which can safely write unicode | |
values. Passes all arguments to csv.writer. | |
""" | |
def __init__(self, *args, **kwargs): | |
self.writer = csv.writer(*args, **kwargs) | |
def writerow(self, row): | |
"""Write row to the csv file. Any unicode strings in row are encoded as | |
utf-8.""" | |
encoded_row = [] | |
for s in row: | |
if isinstance(s, unicode): | |
encoded_row.append(s.encode("utf-8")) | |
else: | |
encoded_row.append(s) | |
try: | |
self.writer.writerow(encoded_row) | |
    except Exception:
      print 'error writing %s as %s' % (row, encoded_row)
      raise
def writerows(self, rows): | |
"""Write rows to the csv file. Any unicode strings in rows are encoded as | |
utf-8.""" | |
for row in rows: | |
self.writerow(row) | |
def __getattr__(self, name): | |
return getattr(self.writer, name) | |
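The writerow encoding step in a standalone, Python 3-spelled sketch (this module's Python 2 code tests `isinstance(s, unicode)` instead); `encode_row` is a hypothetical helper:

```python
def encode_row(row, encoding="utf-8"):
    # Encode text cells to bytes; pass non-text cells (numbers, None) through
    # unchanged, just as the wrapper above does.
    encoded = []
    for cell in row:
        if isinstance(cell, str):
            encoded.append(cell.encode(encoding))
        else:
            encoded.append(cell)
    return encoded
```

Encoding per-row keeps the underlying csv writer working with plain byte strings, which is what the Python 2 csv module requires.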
class Schedule: | |
"""Represents a Schedule, a collection of stops, routes, trips and | |
an agency. This is the main class for this module.""" | |
def __init__(self, problem_reporter=default_problem_reporter, | |
memory_db=True, check_duplicate_trips=False): | |
# Map from table name to list of columns present in this schedule | |
self._table_columns = {} | |
self._agencies = {} | |
self.stops = {} | |
self.routes = {} | |
self.trips = {} | |
self.service_periods = {} | |
self.fares = {} | |
self.fare_zones = {} # represents the set of all known fare zones | |
self._shapes = {} # shape_id to Shape | |
self._transfers = [] # list of transfers | |
self._default_service_period = None | |
self._default_agency = None | |
self.problem_reporter = problem_reporter | |
self._check_duplicate_trips = check_duplicate_trips | |
self.ConnectDb(memory_db) | |
def AddTableColumn(self, table, column): | |
"""Add column to table if it is not already there.""" | |
if column not in self._table_columns[table]: | |
self._table_columns[table].append(column) | |
def AddTableColumns(self, table, columns): | |
"""Add columns to table if they are not already there. | |
Args: | |
table: table name as a string | |
columns: an iterable of column names""" | |
table_columns = self._table_columns.setdefault(table, []) | |
for attr in columns: | |
if attr not in table_columns: | |
table_columns.append(attr) | |
def GetTableColumns(self, table): | |
"""Return list of columns in a table.""" | |
return self._table_columns[table] | |
def __del__(self): | |
if hasattr(self, '_temp_db_filename'): | |
os.remove(self._temp_db_filename) | |
def ConnectDb(self, memory_db): | |
if memory_db: | |
self._connection = sqlite.connect(":memory:") | |
else: | |
try: | |
self._temp_db_file = tempfile.NamedTemporaryFile() | |
self._connection = sqlite.connect(self._temp_db_file.name) | |
except sqlite.OperationalError: | |
        # Windows won't allow the same file to be opened twice, so fall back
        # to mkstemp, which does not remove the file when all handles to it
        # are closed; __del__ removes it explicitly.
self._temp_db_file = None | |
(fd, self._temp_db_filename) = tempfile.mkstemp(".db") | |
os.close(fd) | |
self._connection = sqlite.connect(self._temp_db_filename) | |
cursor = self._connection.cursor() | |
cursor.execute("""CREATE TABLE stop_times ( | |
trip_id CHAR(50), | |
arrival_secs INTEGER, | |
departure_secs INTEGER, | |
stop_id CHAR(50), | |
stop_sequence INTEGER, | |
          stop_headsign VARCHAR(100),
pickup_type INTEGER, | |
drop_off_type INTEGER, | |
shape_dist_traveled FLOAT);""") | |
cursor.execute("""CREATE INDEX trip_index ON stop_times (trip_id);""") | |
cursor.execute("""CREATE INDEX stop_index ON stop_times (stop_id);""") | |
def GetStopBoundingBox(self): | |
return (min(s.stop_lat for s in self.stops.values()), | |
min(s.stop_lon for s in self.stops.values()), | |
max(s.stop_lat for s in self.stops.values()), | |
max(s.stop_lon for s in self.stops.values()), | |
) | |
def AddAgency(self, name, url, timezone, agency_id=None): | |
"""Adds an agency to this schedule.""" | |
agency = Agency(name, url, timezone, agency_id) | |
self.AddAgencyObject(agency) | |
return agency | |
def AddAgencyObject(self, agency, problem_reporter=None, validate=True): | |
assert agency._schedule is None | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
if agency.agency_id in self._agencies: | |
problem_reporter.DuplicateID('agency_id', agency.agency_id) | |
return | |
self.AddTableColumns('agency', agency._ColumnNames()) | |
agency._schedule = weakref.proxy(self) | |
if validate: | |
agency.Validate(problem_reporter) | |
self._agencies[agency.agency_id] = agency | |
def GetAgency(self, agency_id): | |
"""Return Agency with agency_id or throw a KeyError""" | |
return self._agencies[agency_id] | |
def GetDefaultAgency(self): | |
"""Return the default Agency. If no default Agency has been set select the | |
default depending on how many Agency objects are in the Schedule. If there | |
are 0 make a new Agency the default, if there is 1 it becomes the default, | |
if there is more than 1 then return None. | |
""" | |
if not self._default_agency: | |
if len(self._agencies) == 0: | |
self.NewDefaultAgency() | |
elif len(self._agencies) == 1: | |
self._default_agency = self._agencies.values()[0] | |
return self._default_agency | |
def NewDefaultAgency(self, **kwargs): | |
"""Create a new Agency object and make it the default agency for this Schedule""" | |
agency = Agency(**kwargs) | |
if not agency.agency_id: | |
agency.agency_id = FindUniqueId(self._agencies) | |
self._default_agency = agency | |
self.SetDefaultAgency(agency, validate=False) # Blank agency won't validate | |
return agency | |
def SetDefaultAgency(self, agency, validate=True): | |
"""Make agency the default and add it to the schedule if not already added""" | |
assert isinstance(agency, Agency) | |
self._default_agency = agency | |
if agency.agency_id not in self._agencies: | |
self.AddAgencyObject(agency, validate=validate) | |
def GetAgencyList(self): | |
"""Returns the list of Agency objects known to this Schedule.""" | |
return self._agencies.values() | |
def GetServicePeriod(self, service_id): | |
"""Returns the ServicePeriod object with the given ID.""" | |
return self.service_periods[service_id] | |
def GetDefaultServicePeriod(self): | |
"""Return the default ServicePeriod. If no default ServicePeriod has been | |
set select the default depending on how many ServicePeriod objects are in | |
the Schedule. If there are 0 make a new ServicePeriod the default, if there | |
is 1 it becomes the default, if there is more than 1 then return None. | |
""" | |
if not self._default_service_period: | |
if len(self.service_periods) == 0: | |
self.NewDefaultServicePeriod() | |
elif len(self.service_periods) == 1: | |
self._default_service_period = self.service_periods.values()[0] | |
return self._default_service_period | |
def NewDefaultServicePeriod(self): | |
"""Create a new ServicePeriod object, make it the default service period and | |
return it. The default service period is used when you create a trip without | |
    providing an explicit service period."""
service_period = ServicePeriod() | |
service_period.service_id = FindUniqueId(self.service_periods) | |
# blank service won't validate in AddServicePeriodObject | |
self.SetDefaultServicePeriod(service_period, validate=False) | |
return service_period | |
def SetDefaultServicePeriod(self, service_period, validate=True): | |
assert isinstance(service_period, ServicePeriod) | |
self._default_service_period = service_period | |
if service_period.service_id not in self.service_periods: | |
self.AddServicePeriodObject(service_period, validate=validate) | |
def AddServicePeriodObject(self, service_period, problem_reporter=None, | |
validate=True): | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
if service_period.service_id in self.service_periods: | |
problem_reporter.DuplicateID('service_id', service_period.service_id) | |
return | |
if validate: | |
service_period.Validate(problem_reporter) | |
self.service_periods[service_period.service_id] = service_period | |
def GetServicePeriodList(self): | |
return self.service_periods.values() | |
def GetDateRange(self): | |
"""Returns a tuple of (earliest, latest) dates on which the service | |
periods in the schedule define service, in YYYYMMDD form.""" | |
ranges = [period.GetDateRange() for period in self.GetServicePeriodList()] | |
starts = filter(lambda x: x, [item[0] for item in ranges]) | |
ends = filter(lambda x: x, [item[1] for item in ranges]) | |
if not starts or not ends: | |
return (None, None) | |
return (min(starts), max(ends)) | |
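The min/max reduction over per-period ranges can be sketched standalone; `combined_date_range` is a hypothetical name:

```python
def combined_date_range(ranges):
    # ranges: list of (start, end) "YYYYMMDD" pairs; either entry may be None
    # for a period with no dates. None entries are filtered out before the
    # min/max so they cannot win the comparison.
    starts = [r[0] for r in ranges if r[0]]
    ends = [r[1] for r in ranges if r[1]]
    if not starts or not ends:
        return (None, None)
    return (min(starts), max(ends))
```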
def GetServicePeriodsActiveEachDate(self, date_start, date_end): | |
"""Return a list of tuples (date, [period1, period2, ...]). | |
    For each date in the range [date_start, date_end) build a list of the
    ServicePeriod objects which are active on that date.
Args: | |
date_start: The first date in the list, a date object | |
date_end: The first date after the list, a date object | |
Returns: | |
A list of tuples. Each tuple contains a date object and a list of zero or | |
more ServicePeriod objects. | |
""" | |
date_it = date_start | |
one_day = datetime.timedelta(days=1) | |
date_service_period_list = [] | |
while date_it < date_end: | |
periods_today = [] | |
date_it_string = date_it.strftime("%Y%m%d") | |
for service in self.GetServicePeriodList(): | |
if service.IsActiveOn(date_it_string, date_it): | |
periods_today.append(service) | |
date_service_period_list.append((date_it, periods_today)) | |
date_it += one_day | |
return date_service_period_list | |
def AddStop(self, lat, lng, name): | |
"""Add a stop to this schedule. | |
A new stop_id is created for this stop. Do not use this method unless all | |
stops in this Schedule are created with it. See source for details. | |
Args: | |
lat: Latitude of the stop as a float or string | |
lng: Longitude of the stop as a float or string | |
name: Name of the stop, which will appear in the feed | |
Returns: | |
A new Stop object | |
""" | |
    # TODO: stop_id isn't guaranteed to be unique and conflicts are not
# handled. Please fix. | |
stop_id = unicode(len(self.stops)) | |
stop = Stop(stop_id=stop_id, lat=lat, lng=lng, name=name) | |
self.AddStopObject(stop) | |
return stop | |
def AddStopObject(self, stop, problem_reporter=None): | |
"""Add Stop object to this schedule if stop_id is non-blank.""" | |
assert stop._schedule is None | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
if not stop.stop_id: | |
return | |
if stop.stop_id in self.stops: | |
problem_reporter.DuplicateID('stop_id', stop.stop_id) | |
return | |
stop._schedule = weakref.proxy(self) | |
self.AddTableColumns('stops', stop._ColumnNames()) | |
self.stops[stop.stop_id] = stop | |
if hasattr(stop, 'zone_id') and stop.zone_id: | |
self.fare_zones[stop.zone_id] = True | |
def GetStopList(self): | |
return self.stops.values() | |
def AddRoute(self, short_name, long_name, route_type): | |
"""Add a route to this schedule. | |
Args: | |
short_name: Short name of the route, such as "71L" | |
long_name: Full name of the route, such as "NW 21st Ave/St Helens Rd" | |
route_type: A type such as "Tram", "Subway" or "Bus" | |
Returns: | |
A new Route object | |
""" | |
route_id = unicode(len(self.routes)) | |
route = Route(short_name=short_name, long_name=long_name, | |
route_type=route_type, route_id=route_id) | |
route.agency_id = self.GetDefaultAgency().agency_id | |
self.AddRouteObject(route) | |
return route | |
def AddRouteObject(self, route, problem_reporter=None): | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
route.Validate(problem_reporter) | |
if route.route_id in self.routes: | |
problem_reporter.DuplicateID('route_id', route.route_id) | |
return | |
if route.agency_id not in self._agencies: | |
if not route.agency_id and len(self._agencies) == 1: | |
# we'll just assume that the route applies to the only agency | |
pass | |
else: | |
problem_reporter.InvalidValue('agency_id', route.agency_id, | |
'Route uses an unknown agency_id.') | |
return | |
self.AddTableColumns('routes', route._ColumnNames()) | |
route._schedule = weakref.proxy(self) | |
self.routes[route.route_id] = route | |
def GetRouteList(self): | |
return self.routes.values() | |
def GetRoute(self, route_id): | |
return self.routes[route_id] | |
def AddShapeObject(self, shape, problem_reporter=None): | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
shape.Validate(problem_reporter) | |
if shape.shape_id in self._shapes: | |
problem_reporter.DuplicateID('shape_id', shape.shape_id) | |
return | |
self._shapes[shape.shape_id] = shape | |
def GetShapeList(self): | |
return self._shapes.values() | |
def GetShape(self, shape_id): | |
return self._shapes[shape_id] | |
def AddTripObject(self, trip, problem_reporter=None, validate=True): | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
if trip.trip_id in self.trips: | |
problem_reporter.DuplicateID('trip_id', trip.trip_id) | |
return | |
self.AddTableColumns('trips', trip._ColumnNames()) | |
trip._schedule = weakref.proxy(self) | |
self.trips[trip.trip_id] = trip | |
# Call Trip.Validate after setting trip._schedule so that references | |
# are checked. trip.ValidateChildren will be called directly by | |
# schedule.Validate, after stop_times has been loaded. | |
    if validate:
      trip.Validate(problem_reporter, validate_children=False)
try: | |
self.routes[trip.route_id]._AddTripObject(trip) | |
except KeyError: | |
# Invalid route_id was reported in the Trip.Validate call above | |
pass | |
def GetTripList(self): | |
return self.trips.values() | |
def GetTrip(self, trip_id): | |
return self.trips[trip_id] | |
def AddFareObject(self, fare, problem_reporter=None): | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
fare.Validate(problem_reporter) | |
if fare.fare_id in self.fares: | |
problem_reporter.DuplicateID('fare_id', fare.fare_id) | |
return | |
self.fares[fare.fare_id] = fare | |
def GetFareList(self): | |
return self.fares.values() | |
def GetFare(self, fare_id): | |
return self.fares[fare_id] | |
def AddFareRuleObject(self, rule, problem_reporter=None): | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
if IsEmpty(rule.fare_id): | |
problem_reporter.MissingValue('fare_id') | |
return | |
if rule.route_id and rule.route_id not in self.routes: | |
problem_reporter.InvalidValue('route_id', rule.route_id) | |
if rule.origin_id and rule.origin_id not in self.fare_zones: | |
problem_reporter.InvalidValue('origin_id', rule.origin_id) | |
if rule.destination_id and rule.destination_id not in self.fare_zones: | |
problem_reporter.InvalidValue('destination_id', rule.destination_id) | |
if rule.contains_id and rule.contains_id not in self.fare_zones: | |
problem_reporter.InvalidValue('contains_id', rule.contains_id) | |
if rule.fare_id in self.fares: | |
self.GetFare(rule.fare_id).rules.append(rule) | |
else: | |
problem_reporter.InvalidValue('fare_id', rule.fare_id, | |
'(This fare_id doesn\'t correspond to any ' | |
'of the IDs defined in the ' | |
'fare attributes.)') | |
def AddTransferObject(self, transfer, problem_reporter=None): | |
assert transfer._schedule is None, "only add Transfer to a schedule once" | |
transfer._schedule = weakref.proxy(self) # See weakref comment at top | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
transfer.Validate(problem_reporter) | |
self._transfers.append(transfer) | |
def GetTransferList(self): | |
return self._transfers | |
def GetStop(self, id): | |
return self.stops[id] | |
def GetFareZones(self): | |
"""Returns the list of all fare zones that have been identified by | |
the stops that have been added.""" | |
return self.fare_zones.keys() | |
def GetNearestStops(self, lat, lon, n=1): | |
"""Return the n nearest stops to lat,lon""" | |
dist_stop_list = [] | |
for s in self.stops.values(): | |
# TODO: Use ApproximateDistanceBetweenStops? | |
dist = (s.stop_lat - lat)**2 + (s.stop_lon - lon)**2 | |
if len(dist_stop_list) < n: | |
bisect.insort(dist_stop_list, (dist, s)) | |
elif dist < dist_stop_list[-1][0]: | |
bisect.insort(dist_stop_list, (dist, s)) | |
dist_stop_list.pop() # Remove stop with greatest distance | |
return [stop for dist, stop in dist_stop_list] | |
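The bounded nearest-n pattern used above (a sorted list capped at n entries via `bisect.insort`) in a standalone sketch; as the TODO notes, the squared lat/lon delta is only a comparison key, not a real distance:

```python
import bisect

def nearest_points(points, lat, lon, n=1):
    # Keep a sorted list of at most n (squared_delta, point) pairs.
    best = []
    for point_lat, point_lon in points:
        dist = (point_lat - lat) ** 2 + (point_lon - lon) ** 2
        if len(best) < n:
            bisect.insort(best, (dist, (point_lat, point_lon)))
        elif dist < best[-1][0]:
            bisect.insort(best, (dist, (point_lat, point_lon)))
            best.pop()  # drop the now-farthest entry
    return [point for _, point in best]
```

For small n this is effectively a bounded priority queue: each insertion is O(log n) to search plus O(n) to shift, which is cheap when n is a handful of stops.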
def GetStopsInBoundingBox(self, north, east, south, west, n): | |
"""Return a sample of up to n stops in a bounding box""" | |
stop_list = [] | |
for s in self.stops.values(): | |
if (s.stop_lat <= north and s.stop_lat >= south and | |
s.stop_lon <= east and s.stop_lon >= west): | |
stop_list.append(s) | |
if len(stop_list) == n: | |
break | |
return stop_list | |
def Load(self, feed_path, extra_validation=False): | |
loader = Loader(feed_path, self, problems=self.problem_reporter, | |
extra_validation=extra_validation) | |
loader.Load() | |
def _WriteArchiveString(self, archive, filename, stringio): | |
zi = zipfile.ZipInfo(filename) | |
# See | |
# http://stackoverflow.com/questions/434641/how-do-i-set-permissions-attributes-on-a-file-in-a-zip-file-using-pythons-zipf | |
zi.external_attr = 0666 << 16L # Set unix permissions to -rw-rw-rw | |
# ZIP_DEFLATED requires zlib. zlib comes with Python 2.4 and 2.5 | |
zi.compress_type = zipfile.ZIP_DEFLATED | |
archive.writestr(zi, stringio.getvalue()) | |
def WriteGoogleTransitFeed(self, file): | |
"""Output this schedule as a Google Transit Feed in file_name. | |
Args: | |
file: path of new feed file (a string) or a file-like object | |
Returns: | |
None | |
""" | |
# Compression type given when adding each file | |
archive = zipfile.ZipFile(file, 'w') | |
if 'agency' in self._table_columns: | |
agency_string = StringIO.StringIO() | |
writer = CsvUnicodeWriter(agency_string) | |
columns = self.GetTableColumns('agency') | |
writer.writerow(columns) | |
for a in self._agencies.values(): | |
writer.writerow([EncodeUnicode(a[c]) for c in columns]) | |
self._WriteArchiveString(archive, 'agency.txt', agency_string) | |
calendar_dates_string = StringIO.StringIO() | |
writer = CsvUnicodeWriter(calendar_dates_string) | |
writer.writerow(ServicePeriod._FIELD_NAMES_CALENDAR_DATES) | |
has_data = False | |
for period in self.service_periods.values(): | |
for row in period.GenerateCalendarDatesFieldValuesTuples(): | |
has_data = True | |
writer.writerow(row) | |
wrote_calendar_dates = False | |
if has_data: | |
wrote_calendar_dates = True | |
self._WriteArchiveString(archive, 'calendar_dates.txt', | |
calendar_dates_string) | |
calendar_string = StringIO.StringIO() | |
writer = CsvUnicodeWriter(calendar_string) | |
writer.writerow(ServicePeriod._FIELD_NAMES) | |
has_data = False | |
for s in self.service_periods.values(): | |
row = s.GetCalendarFieldValuesTuple() | |
if row: | |
has_data = True | |
writer.writerow(row) | |
if has_data or not wrote_calendar_dates: | |
self._WriteArchiveString(archive, 'calendar.txt', calendar_string) | |
if 'stops' in self._table_columns: | |
stop_string = StringIO.StringIO() | |
writer = CsvUnicodeWriter(stop_string) | |
columns = self.GetTableColumns('stops') | |
writer.writerow(columns) | |
for s in self.stops.values(): | |
writer.writerow([EncodeUnicode(s[c]) for c in columns]) | |
self._WriteArchiveString(archive, 'stops.txt', stop_string) | |
if 'routes' in self._table_columns: | |
route_string = StringIO.StringIO() | |
writer = CsvUnicodeWriter(route_string) | |
columns = self.GetTableColumns('routes') | |
writer.writerow(columns) | |
for r in self.routes.values(): | |
writer.writerow([EncodeUnicode(r[c]) for c in columns]) | |
self._WriteArchiveString(archive, 'routes.txt', route_string) | |
if 'trips' in self._table_columns: | |
trips_string = StringIO.StringIO() | |
writer = CsvUnicodeWriter(trips_string) | |
columns = self.GetTableColumns('trips') | |
writer.writerow(columns) | |
for t in self.trips.values(): | |
writer.writerow([EncodeUnicode(t[c]) for c in columns]) | |
self._WriteArchiveString(archive, 'trips.txt', trips_string) | |
# write frequencies.txt (if applicable) | |
headway_rows = [] | |
for trip in self.GetTripList(): | |
headway_rows += trip.GetHeadwayPeriodOutputTuples() | |
if headway_rows: | |
headway_string = StringIO.StringIO() | |
writer = CsvUnicodeWriter(headway_string) | |
writer.writerow(Trip._FIELD_NAMES_HEADWAY) | |
writer.writerows(headway_rows) | |
self._WriteArchiveString(archive, 'frequencies.txt', headway_string) | |
# write fares (if applicable) | |
if self.GetFareList(): | |
fare_string = StringIO.StringIO() | |
writer = CsvUnicodeWriter(fare_string) | |
writer.writerow(Fare._FIELD_NAMES) | |
writer.writerows(f.GetFieldValuesTuple() for f in self.GetFareList()) | |
self._WriteArchiveString(archive, 'fare_attributes.txt', fare_string) | |
# write fare rules (if applicable) | |
rule_rows = [] | |
for fare in self.GetFareList(): | |
for rule in fare.GetFareRuleList(): | |
rule_rows.append(rule.GetFieldValuesTuple()) | |
if rule_rows: | |
rule_string = StringIO.StringIO() | |
writer = CsvUnicodeWriter(rule_string) | |
writer.writerow(FareRule._FIELD_NAMES) | |
writer.writerows(rule_rows) | |
self._WriteArchiveString(archive, 'fare_rules.txt', rule_string) | |
stop_times_string = StringIO.StringIO() | |
writer = CsvUnicodeWriter(stop_times_string) | |
writer.writerow(StopTime._FIELD_NAMES) | |
for t in self.trips.values(): | |
writer.writerows(t._GenerateStopTimesTuples()) | |
self._WriteArchiveString(archive, 'stop_times.txt', stop_times_string) | |
# write shapes (if applicable) | |
shape_rows = [] | |
for shape in self.GetShapeList(): | |
seq = 1 | |
for (lat, lon, dist) in shape.points: | |
shape_rows.append((shape.shape_id, lat, lon, seq, dist)) | |
seq += 1 | |
if shape_rows: | |
shape_string = StringIO.StringIO() | |
writer = CsvUnicodeWriter(shape_string) | |
writer.writerow(Shape._FIELD_NAMES) | |
writer.writerows(shape_rows) | |
self._WriteArchiveString(archive, 'shapes.txt', shape_string) | |
# write transfers (if applicable) | |
if self.GetTransferList(): | |
transfer_string = StringIO.StringIO() | |
writer = CsvUnicodeWriter(transfer_string) | |
writer.writerow(Transfer._FIELD_NAMES) | |
writer.writerows(f.GetFieldValuesTuple() for f in self.GetTransferList()) | |
self._WriteArchiveString(archive, 'transfers.txt', transfer_string) | |
archive.close() | |
def GenerateDateTripsDeparturesList(self, date_start, date_end): | |
"""Return a list of (date object, number of trips, number of departures). | |
The list is generated for dates in the range [date_start, date_end). | |
Args: | |
date_start: The first date in the list, a date object | |
date_end: The first date after the list, a date object | |
Returns: | |
a list of (date object, number of trips, number of departures) tuples | |
""" | |
service_id_to_trips = defaultdict(lambda: 0) | |
service_id_to_departures = defaultdict(lambda: 0) | |
for trip in self.GetTripList(): | |
headway_start_times = trip.GetHeadwayStartTimes() | |
if headway_start_times: | |
trip_runs = len(headway_start_times) | |
else: | |
trip_runs = 1 | |
service_id_to_trips[trip.service_id] += trip_runs | |
service_id_to_departures[trip.service_id] += ( | |
(trip.GetCountStopTimes() - 1) * trip_runs) | |
date_services = self.GetServicePeriodsActiveEachDate(date_start, date_end) | |
date_trips = [] | |
for date, services in date_services: | |
day_trips = sum(service_id_to_trips[s.service_id] for s in services) | |
day_departures = sum( | |
service_id_to_departures[s.service_id] for s in services) | |
date_trips.append((date, day_trips, day_departures)) | |
return date_trips | |
def ValidateFeedStartAndExpirationDates(self, | |
problems, | |
first_date, | |
last_date, | |
today): | |
"""Validate the start and expiration dates of the feed. | |
    Issue a warning if the feed only starts in the future, or if
it expires within 60 days. | |
Args: | |
problems: The problem reporter object | |
first_date: A date object representing the first day the feed is active | |
last_date: A date object representing the last day the feed is active | |
today: A date object representing the date the validation is being run on | |
Returns: | |
None | |
""" | |
warning_cutoff = today + datetime.timedelta(days=60) | |
if last_date < warning_cutoff: | |
problems.ExpirationDate(time.mktime(last_date.timetuple())) | |
if first_date > today: | |
problems.FutureService(time.mktime(first_date.timetuple())) | |
def ValidateServiceGaps(self, | |
problems, | |
validation_start_date, | |
validation_end_date, | |
service_gap_interval): | |
"""Validate consecutive dates without service in the feed. | |
Issue a warning if it finds service gaps of at least | |
"service_gap_interval" consecutive days in the date range | |
    [validation_start_date, validation_end_date)
Args: | |
problems: The problem reporter object | |
validation_start_date: A date object representing the date from which the | |
validation should take place | |
validation_end_date: A date object representing the date up to which the
validation should take place
service_gap_interval: An integer indicating how many consecutive days the | |
service gaps need to have for a warning to be issued | |
Returns: | |
None | |
""" | |
if service_gap_interval is None: | |
return | |
departures = self.GenerateDateTripsDeparturesList(validation_start_date, | |
validation_end_date) | |
# The first day without service of the _current_ gap | |
first_day_without_service = validation_start_date | |
# The last day without service of the _current_ gap | |
last_day_without_service = validation_start_date | |
consecutive_days_without_service = 0 | |
for day_date, day_trips, _ in departures: | |
if day_trips == 0: | |
if consecutive_days_without_service == 0: | |
first_day_without_service = day_date | |
consecutive_days_without_service += 1 | |
last_day_without_service = day_date | |
else: | |
if consecutive_days_without_service >= service_gap_interval: | |
problems.TooManyDaysWithoutService(first_day_without_service, | |
last_day_without_service, | |
consecutive_days_without_service) | |
consecutive_days_without_service = 0 | |
# We have to check if there is a gap at the end of the specified date range | |
if consecutive_days_without_service >= service_gap_interval: | |
problems.TooManyDaysWithoutService(first_day_without_service, | |
last_day_without_service, | |
consecutive_days_without_service) | |
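# A standalone sketch (Python 3, hypothetical names) of the gap scan above:
# one pass over per-day trip counts, plus the final check for a gap that
# touches the end of the range.

```python
def service_gaps(day_trip_counts, gap_interval):
    """Return (first_index, last_index, length) for every run of at least
    gap_interval consecutive zero-trip days."""
    gaps, run_start = [], None
    for i, trips in enumerate(day_trip_counts):
        if trips == 0:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= gap_interval:
                gaps.append((run_start, i - 1, i - run_start))
            run_start = None
    # A gap may extend to the end of the range, as checked above
    n = len(day_trip_counts)
    if run_start is not None and n - run_start >= gap_interval:
        gaps.append((run_start, n - 1, n - run_start))
    return gaps
```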
def Validate(self, | |
problems=None, | |
validate_children=True, | |
today=None, | |
service_gap_interval=None): | |
"""Validates various holistic aspects of the schedule | |
(mostly interrelationships between the various data sets).""" | |
if today is None: | |
today = datetime.date.today() | |
if not problems: | |
problems = self.problem_reporter | |
(start_date, end_date) = self.GetDateRange() | |
if not end_date or not start_date: | |
problems.OtherProblem('This feed has no effective service dates!', | |
type=TYPE_WARNING) | |
else: | |
try: | |
last_service_day = datetime.datetime( | |
*(time.strptime(end_date, "%Y%m%d")[0:6])).date() | |
first_service_day = datetime.datetime( | |
*(time.strptime(start_date, "%Y%m%d")[0:6])).date() | |
except ValueError: | |
# Format of start_date and end_date checked in class ServicePeriod | |
pass | |
else: | |
self.ValidateFeedStartAndExpirationDates(problems, | |
first_service_day, | |
last_service_day, | |
today) | |
# We start checking for service gaps a bit in the past if the | |
# feed was active then. See | |
# http://code.google.com/p/googletransitdatafeed/issues/detail?id=188 | |
# | |
# We subtract 1 from service_gap_interval so that if today has | |
# service no warning is issued. | |
# | |
# Service gaps are searched for only up to one year from today | |
if service_gap_interval is not None: | |
service_gap_timedelta = datetime.timedelta( | |
days=service_gap_interval - 1) | |
one_year = datetime.timedelta(days=365) | |
self.ValidateServiceGaps( | |
problems, | |
max(first_service_day, | |
today - service_gap_timedelta), | |
min(last_service_day, | |
today + one_year), | |
service_gap_interval) | |
# TODO: Check Trip fields against valid values | |
# Check for stops that aren't referenced by any trips and broken | |
# parent_station references. Also check that the parent station isn't too | |
# far from its child stops. | |
for stop in self.stops.values(): | |
if validate_children: | |
stop.Validate(problems) | |
cursor = self._connection.cursor() | |
cursor.execute("SELECT count(*) FROM stop_times WHERE stop_id=? LIMIT 1", | |
(stop.stop_id,)) | |
count = cursor.fetchone()[0] | |
if stop.location_type == 0 and count == 0: | |
problems.UnusedStop(stop.stop_id, stop.stop_name) | |
elif stop.location_type == 1 and count != 0: | |
problems.UsedStation(stop.stop_id, stop.stop_name) | |
if stop.location_type != 1 and stop.parent_station: | |
if stop.parent_station not in self.stops: | |
problems.InvalidValue("parent_station", | |
EncodeUnicode(stop.parent_station), | |
"parent_station '%s' not found for stop_id " | |
"'%s' in stops.txt" % | |
(EncodeUnicode(stop.parent_station), | |
EncodeUnicode(stop.stop_id))) | |
elif self.stops[stop.parent_station].location_type != 1: | |
problems.InvalidValue("parent_station", | |
EncodeUnicode(stop.parent_station), | |
"parent_station '%s' of stop_id '%s' must " | |
"have location_type=1 in stops.txt" % | |
(EncodeUnicode(stop.parent_station), | |
EncodeUnicode(stop.stop_id))) | |
else: | |
parent_station = self.stops[stop.parent_station] | |
distance = ApproximateDistanceBetweenStops(stop, parent_station) | |
if distance > MAX_DISTANCE_BETWEEN_STOP_AND_PARENT_STATION_ERROR: | |
problems.StopTooFarFromParentStation( | |
stop.stop_id, stop.stop_name, parent_station.stop_id, | |
parent_station.stop_name, distance, TYPE_ERROR) | |
elif distance > MAX_DISTANCE_BETWEEN_STOP_AND_PARENT_STATION_WARNING: | |
problems.StopTooFarFromParentStation( | |
stop.stop_id, stop.stop_name, parent_station.stop_id, | |
parent_station.stop_name, distance, TYPE_WARNING) | |
# TODO: check that every station is used.
# Then uncomment testStationWithoutReference. | |
# Check for stops that might represent the same location (specifically,
# stops that are less than 2 meters apart). First filter out stops without
# a valid lat and lon. Then sort by latitude and find the distance between
# each pair of stops within 2 meters latitude of each other. This avoids
# doing n^2 comparisons in the average case and doesn't need a spatial
# index.
sorted_stops = filter(lambda s: s.stop_lat and s.stop_lon, | |
self.GetStopList()) | |
sorted_stops.sort(key=(lambda x: x.stop_lat)) | |
TWO_METERS_LAT = 0.000018 | |
for index, stop in enumerate(sorted_stops[:-1]): | |
index += 1 | |
while ((index < len(sorted_stops)) and | |
((sorted_stops[index].stop_lat - stop.stop_lat) < TWO_METERS_LAT)): | |
distance = ApproximateDistanceBetweenStops(stop, sorted_stops[index]) | |
if distance < 2: | |
other_stop = sorted_stops[index] | |
if stop.location_type == 0 and other_stop.location_type == 0: | |
problems.StopsTooClose( | |
EncodeUnicode(stop.stop_name), | |
EncodeUnicode(stop.stop_id), | |
EncodeUnicode(other_stop.stop_name), | |
EncodeUnicode(other_stop.stop_id), distance) | |
elif stop.location_type == 1 and other_stop.location_type == 1: | |
problems.StationsTooClose( | |
EncodeUnicode(stop.stop_name), EncodeUnicode(stop.stop_id), | |
EncodeUnicode(other_stop.stop_name), | |
EncodeUnicode(other_stop.stop_id), distance) | |
elif (stop.location_type in (0, 1) and | |
other_stop.location_type in (0, 1)): | |
if stop.location_type == 0 and other_stop.location_type == 1: | |
this_stop = stop | |
this_station = other_stop | |
elif stop.location_type == 1 and other_stop.location_type == 0: | |
this_stop = other_stop | |
this_station = stop | |
if this_stop.parent_station != this_station.stop_id: | |
problems.DifferentStationTooClose( | |
EncodeUnicode(this_stop.stop_name), | |
EncodeUnicode(this_stop.stop_id), | |
EncodeUnicode(this_station.stop_name), | |
EncodeUnicode(this_station.stop_id), distance) | |
index += 1 | |
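# The sort-by-latitude pre-filter above can be sketched standalone (Python 3,
# illustrative names; the distance helper is an equirectangular stand-in for
# ApproximateDistanceBetweenStops, not the library's implementation).

```python
import math

TWO_METERS_LAT = 0.000018  # ~2 m of latitude, in degrees

def approx_distance_m(a, b):
    # Equirectangular approximation; accurate enough at metre scales
    (lat1, lon1), (lat2, lon2) = a, b
    dx = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    dy = math.radians(lat2 - lat1)
    return math.hypot(dx, dy) * 6371000

def close_pairs(points, threshold_m=2.0):
    """Sort by latitude, then compare each point only against neighbours
    within ~2 m of latitude, avoiding n^2 comparisons on average
    without a spatial index."""
    pts = sorted(points, key=lambda p: p[0])
    pairs = []
    for i, p in enumerate(pts):
        j = i + 1
        while j < len(pts) and pts[j][0] - p[0] < TWO_METERS_LAT:
            if approx_distance_m(p, pts[j]) < threshold_m:
                pairs.append((p, pts[j]))
            j += 1
    return pairs

pairs = close_pairs([(37.0, -122.0), (37.000001, -122.0), (37.1, -122.0)])
```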
# Check for multiple routes using same short + long name | |
route_names = {} | |
for route in self.routes.values(): | |
if validate_children: | |
route.Validate(problems) | |
short_name = '' | |
if not IsEmpty(route.route_short_name): | |
short_name = route.route_short_name.lower().strip() | |
long_name = '' | |
if not IsEmpty(route.route_long_name): | |
long_name = route.route_long_name.lower().strip() | |
name = (short_name, long_name) | |
if name in route_names: | |
problems.InvalidValue('route_long_name', | |
long_name, | |
'The same combination of ' | |
'route_short_name and route_long_name ' | |
'shouldn\'t be used for more than one ' | |
'route, as it is for the two routes '
'with IDs "%s" and "%s".' % | |
(route.route_id, route_names[name].route_id), | |
type=TYPE_WARNING) | |
else: | |
route_names[name] = route | |
stop_types = {} # a dict mapping stop_id to [route_id, route_type, is_match] | |
trips = {} # maps (service_id, first arrival, stop_ids) to (route_id, trip_id)
for trip in sorted(self.trips.values()): | |
if trip.route_id not in self.routes: | |
continue | |
route_type = self.GetRoute(trip.route_id).route_type | |
arrival_times = [] | |
stop_ids = [] | |
for index, st in enumerate(trip.GetStopTimes(problems)): | |
stop_id = st.stop.stop_id | |
arrival_times.append(st.arrival_time) | |
stop_ids.append(stop_id) | |
# Check whether the stop is served by both subway and bus routes.
if (route_type == Route._ROUTE_TYPE_NAMES['Subway'] or | |
route_type == Route._ROUTE_TYPE_NAMES['Bus']): | |
if stop_id not in stop_types: | |
stop_types[stop_id] = [trip.route_id, route_type, 0] | |
elif (stop_types[stop_id][1] != route_type and | |
stop_types[stop_id][2] == 0): | |
stop_types[stop_id][2] = 1 | |
if stop_types[stop_id][1] == Route._ROUTE_TYPE_NAMES['Subway']: | |
subway_route_id = stop_types[stop_id][0] | |
bus_route_id = trip.route_id | |
else: | |
subway_route_id = trip.route_id | |
bus_route_id = stop_types[stop_id][0] | |
problems.StopWithMultipleRouteTypes(st.stop.stop_name, stop_id, | |
subway_route_id, bus_route_id) | |
# Check for duplicate trips: trips that visit the same stops with the
# same service and start times.
if self._check_duplicate_trips: | |
if not stop_ids or not arrival_times: | |
continue | |
key = (trip.service_id, min(arrival_times), str(stop_ids)) | |
if key not in trips: | |
trips[key] = (trip.route_id, trip.trip_id) | |
else: | |
problems.DuplicateTrip(trips[key][1], trips[key][0], trip.trip_id, | |
trip.route_id) | |
# Check that routes' agency IDs are valid, if set | |
for route in self.routes.values(): | |
if (not IsEmpty(route.agency_id) and | |
not route.agency_id in self._agencies): | |
problems.InvalidValue('agency_id', | |
route.agency_id, | |
'The route with ID "%s" specifies agency_id ' | |
'"%s", which doesn\'t exist.' % | |
(route.route_id, route.agency_id)) | |
# Make sure all trips have stop_times | |
# We're doing this here instead of in Trip.Validate() so that | |
# Trips can be validated without error during the reading of trips.txt | |
for trip in self.trips.values(): | |
trip.ValidateChildren(problems) | |
count_stop_times = trip.GetCountStopTimes() | |
if not count_stop_times: | |
problems.OtherProblem('The trip with the trip_id "%s" doesn\'t have ' | |
'any stop times defined.' % trip.trip_id, | |
type=TYPE_WARNING) | |
if len(trip._headways) > 0: # no stoptimes, but there are headways | |
problems.OtherProblem('Frequencies defined, but no stop times given ' | |
'in trip %s' % trip.trip_id, type=TYPE_ERROR) | |
elif count_stop_times == 1: | |
problems.OtherProblem('The trip with the trip_id "%s" only has one ' | |
'stop on it; it should have at least one more ' | |
'stop so that the riders can leave!' % | |
trip.trip_id, type=TYPE_WARNING) | |
else: | |
# These methods report InvalidValue if there's no first or last time | |
trip.GetStartTime(problems=problems) | |
trip.GetEndTime(problems=problems) | |
# Check for unused shapes | |
known_shape_ids = set(self._shapes.keys()) | |
used_shape_ids = set() | |
for trip in self.GetTripList(): | |
used_shape_ids.add(trip.shape_id) | |
unused_shape_ids = known_shape_ids - used_shape_ids | |
if unused_shape_ids: | |
problems.OtherProblem('The shapes with the following shape_ids aren\'t ' | |
'used by any trips: %s' % | |
', '.join(unused_shape_ids), | |
type=TYPE_WARNING) | |
# Map from literal string that should never be found in the csv data to a human | |
# readable description | |
INVALID_LINE_SEPARATOR_UTF8 = { | |
"\x0c": "ASCII Form Feed 0x0C", | |
# May be part of end of line, but not found elsewhere | |
"\x0d": "ASCII Carriage Return 0x0D, \\r", | |
"\xe2\x80\xa8": "Unicode LINE SEPARATOR U+2028", | |
"\xe2\x80\xa9": "Unicode PARAGRAPH SEPARATOR U+2029", | |
"\xc2\x85": "Unicode NEXT LINE SEPARATOR U+0085", | |
} | |
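# An illustrative standalone check (Python 3, hypothetical names) of the same
# idea on decoded text: the table above keys on the raw UTF-8 byte sequences,
# whereas this sketch keys on the decoded characters.

```python
INVALID_SEPARATORS = {
    "\x0c": "ASCII Form Feed 0x0C",
    "\x0d": "ASCII Carriage Return 0x0D, \\r",
    "\u2028": "Unicode LINE SEPARATOR U+2028",
    "\u2029": "Unicode PARAGRAPH SEPARATOR U+2029",
    "\u0085": "Unicode NEXT LINE (NEL) U+0085",
}

def find_invalid_separators(line):
    """Return the description of every forbidden separator found in a
    decoded line."""
    return [name for ch, name in INVALID_SEPARATORS.items() if ch in line]

hits = find_invalid_separators("Main St\u2028Stop")
```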
class EndOfLineChecker: | |
"""Wrapper for a file-like object that checks for consistent line ends. | |
The check for consistent end of lines (all CR LF or all LF) only happens if | |
next() is called until it raises StopIteration. | |
""" | |
def __init__(self, f, name, problems): | |
"""Create new object. | |
Args: | |
f: file-like object to wrap | |
name: name to use for f. StringIO objects don't have a name attribute. | |
problems: a ProblemReporterBase object | |
""" | |
self._f = f | |
self._name = name | |
self._crlf = 0 | |
self._crlf_examples = [] | |
self._lf = 0 | |
self._lf_examples = [] | |
self._line_number = 0 # first line will be number 1 | |
self._problems = problems | |
def __iter__(self): | |
return self | |
def next(self): | |
"""Return next line without end of line marker or raise StopIteration.""" | |
try: | |
next_line = self._f.next() | |
except StopIteration: | |
self._FinalCheck() | |
raise | |
self._line_number += 1 | |
m_eol = re.search(r"[\x0a\x0d]*$", next_line) | |
if m_eol.group() == "\x0d\x0a": | |
self._crlf += 1 | |
if self._crlf <= 5: | |
self._crlf_examples.append(self._line_number) | |
elif m_eol.group() == "\x0a": | |
self._lf += 1 | |
if self._lf <= 5: | |
self._lf_examples.append(self._line_number) | |
elif m_eol.group() == "": | |
# Should only happen at the end of the file | |
try: | |
self._f.next() | |
raise RuntimeError("Unexpected row without new line sequence") | |
except StopIteration: | |
# Will be raised again when EndOfLineChecker.next() is next called | |
pass | |
else: | |
self._problems.InvalidLineEnd( | |
codecs.getencoder('string_escape')(m_eol.group())[0], | |
(self._name, self._line_number)) | |
next_line_contents = next_line[0:m_eol.start()] | |
for seq, name in INVALID_LINE_SEPARATOR_UTF8.items(): | |
if next_line_contents.find(seq) != -1: | |
self._problems.OtherProblem( | |
"Line contains %s" % name, | |
context=(self._name, self._line_number)) | |
return next_line_contents | |
def _FinalCheck(self): | |
if self._crlf > 0 and self._lf > 0: | |
crlf_plural = self._crlf > 1 and "s" or "" | |
crlf_lines = ", ".join(["%s" % e for e in self._crlf_examples]) | |
if self._crlf > len(self._crlf_examples): | |
crlf_lines += ", ..." | |
lf_plural = self._lf > 1 and "s" or "" | |
lf_lines = ", ".join(["%s" % e for e in self._lf_examples]) | |
if self._lf > len(self._lf_examples): | |
lf_lines += ", ..." | |
self._problems.OtherProblem( | |
"Found %d CR LF \"\\r\\n\" line end%s (line%s %s) and " | |
"%d LF \"\\n\" line end%s (line%s %s). A file must use a " | |
"consistent line end." % (self._crlf, crlf_plural, crlf_plural, | |
crlf_lines, self._lf, lf_plural, | |
lf_plural, lf_lines), | |
(self._name,)) | |
# Prevent _FinalCheck() from reporting the problem twice, in the unlikely | |
# case that it is run twice | |
self._crlf = 0 | |
self._lf = 0 | |
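# A compact standalone sketch (Python 3, illustrative names) of the
# consistency rule EndOfLineChecker enforces line by line: a feed file should
# use a single line-end style, so a mix of CRLF and bare LF is a problem.

```python
import re
from collections import Counter

def count_line_endings(data):
    """Tally CRLF, bare CR, and bare LF line ends in one pass."""
    return Counter(re.findall(r"\r\n|\r|\n", data))

counts = count_line_endings("a\r\nb\nc\r\n")
mixed = len([k for k, v in counts.items() if v]) > 1
```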
# Filenames specified in GTFS spec | |
KNOWN_FILENAMES = [ | |
'agency.txt', | |
'stops.txt', | |
'routes.txt', | |
'trips.txt', | |
'stop_times.txt', | |
'calendar.txt', | |
'calendar_dates.txt', | |
'fare_attributes.txt', | |
'fare_rules.txt', | |
'shapes.txt', | |
'frequencies.txt', | |
'transfers.txt', | |
] | |
class Loader: | |
def __init__(self, | |
feed_path=None, | |
schedule=None, | |
problems=default_problem_reporter, | |
extra_validation=False, | |
load_stop_times=True, | |
memory_db=True, | |
zip=None, | |
check_duplicate_trips=False): | |
"""Initialize a new Loader object. | |
Args: | |
feed_path: string path to a zip file or directory | |
schedule: a Schedule object or None to have one created | |
problems: a ProblemReporter object, the default reporter raises an | |
exception for each problem | |
extra_validation: True if you would like extra validation | |
load_stop_times: load the stop_times table, used to speed load time when | |
times are not needed. The default is True. | |
memory_db: if creating a new Schedule object use an in-memory sqlite | |
database instead of creating one in a temporary file | |
zip: a zipfile.ZipFile object, optionally used instead of path
check_duplicate_trips: if True, report trips that visit the same stops
with the same service and start times
"""
if not schedule: | |
schedule = Schedule(problem_reporter=problems, memory_db=memory_db, | |
check_duplicate_trips=check_duplicate_trips) | |
self._extra_validation = extra_validation | |
self._schedule = schedule | |
self._problems = problems | |
self._path = feed_path | |
self._zip = zip | |
self._load_stop_times = load_stop_times | |
def _DetermineFormat(self): | |
"""Determines whether the feed is in a form that we understand, and | |
if so, returns True.""" | |
if self._zip: | |
# If zip was passed to __init__ then path isn't used | |
assert not self._path | |
return True | |
if not isinstance(self._path, basestring) and hasattr(self._path, 'read'): | |
# A file-like object, used for testing with a StringIO file | |
self._zip = zipfile.ZipFile(self._path, mode='r') | |
return True | |
if not os.path.exists(self._path): | |
self._problems.FeedNotFound(self._path) | |
return False | |
if self._path.endswith('.zip'): | |
try: | |
self._zip = zipfile.ZipFile(self._path, mode='r') | |
except IOError: # self._path is a directory | |
pass | |
except zipfile.BadZipfile: | |
self._problems.UnknownFormat(self._path) | |
return False | |
if not self._zip and not os.path.isdir(self._path): | |
self._problems.UnknownFormat(self._path) | |
return False | |
return True | |
def _GetFileNames(self): | |
"""Returns a list of file names in the feed.""" | |
if self._zip: | |
return self._zip.namelist() | |
else: | |
return os.listdir(self._path) | |
def _CheckFileNames(self): | |
filenames = self._GetFileNames() | |
for feed_file in filenames: | |
if feed_file not in KNOWN_FILENAMES: | |
if not feed_file.startswith('.'): | |
# Don't report hidden files such as .svn entries;
# flagging them would break the tests.
self._problems.UnknownFile(feed_file) | |
def _GetUtf8Contents(self, file_name): | |
"""Check for errors in file_name and return a string for csv reader.""" | |
contents = self._FileContents(file_name) | |
if not contents: # Missing file | |
return | |
# Check for errors that will prevent csv.reader from working | |
if len(contents) >= 2 and contents[0:2] in (codecs.BOM_UTF16_BE, | |
codecs.BOM_UTF16_LE): | |
self._problems.FileFormat("appears to be encoded in utf-16", (file_name, )) | |
# Convert and continue, so we can find more errors | |
contents = codecs.getdecoder('utf-16')(contents)[0].encode('utf-8') | |
null_index = contents.find('\0') | |
if null_index != -1: | |
# It is easier to get some surrounding text than calculate the exact | |
# row_num | |
m = re.search(r'.{,20}\0.{,20}', contents, re.DOTALL) | |
self._problems.FileFormat( | |
"contains a null in text \"%s\" at byte %d" % | |
(codecs.getencoder('string_escape')(m.group()), null_index + 1), | |
(file_name, )) | |
return | |
# strip out any UTF-8 Byte Order Marker (otherwise it'll be
# treated as part of the first column name, causing a mis-parse)
if contents.startswith(codecs.BOM_UTF8):
contents = contents[len(codecs.BOM_UTF8):]
return contents | |
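# The pre-flight checks in _GetUtf8Contents can be sketched as a standalone
# helper (Python 3, hypothetical names): transcode UTF-16 input, reject NUL
# bytes, and strip a leading UTF-8 BOM before handing bytes to the csv layer.

```python
import codecs

def utf8_csv_bytes(contents):
    """Return csv-ready UTF-8 bytes, or raise on an embedded NUL."""
    if contents[:2] in (codecs.BOM_UTF16_BE, codecs.BOM_UTF16_LE):
        # Transcode and continue, so later checks still run
        contents = contents.decode("utf-16").encode("utf-8")
    if b"\0" in contents:
        raise ValueError("NUL byte at offset %d" % contents.index(b"\0"))
    if contents.startswith(codecs.BOM_UTF8):
        # A surviving BOM would be parsed into the first column name
        contents = contents[len(codecs.BOM_UTF8):]
    return contents

clean = utf8_csv_bytes(codecs.BOM_UTF8 + b"stop_id,stop_name\n")
```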
def _ReadCsvDict(self, file_name, all_cols, required): | |
"""Reads lines from file_name, yielding a dict of unicode values.""" | |
assert file_name.endswith(".txt") | |
table_name = file_name[0:-4] | |
contents = self._GetUtf8Contents(file_name) | |
if not contents: | |
return | |
eol_checker = EndOfLineChecker(StringIO.StringIO(contents), | |
file_name, self._problems) | |
# The csv module doesn't provide a way to skip trailing space, but when I | |
# checked 15/675 feeds had trailing space in a header row and 120 had spaces | |
# after fields. Space after header fields can cause a serious parsing | |
# problem, so warn. Space after body fields can cause problems for time,
# integer and id fields; those are validated at higher levels.
reader = csv.reader(eol_checker, skipinitialspace=True) | |
raw_header = reader.next() | |
header_occurrences = defaultdict(lambda: 0) | |
header = [] | |
valid_columns = [] # Index into raw_header and raw_row | |
for i, h in enumerate(raw_header): | |
h_stripped = h.strip() | |
if not h_stripped: | |
self._problems.CsvSyntax( | |
description="The header row should not contain any blank values. " | |
"The corresponding column will be skipped for the " | |
"entire file.", | |
context=(file_name, 1, [''] * len(raw_header), raw_header), | |
type=TYPE_ERROR) | |
continue | |
elif h != h_stripped: | |
self._problems.CsvSyntax( | |
description="The header row should not contain any " | |
"space characters.", | |
context=(file_name, 1, [''] * len(raw_header), raw_header), | |
type=TYPE_WARNING) | |
header.append(h_stripped) | |
valid_columns.append(i) | |
header_occurrences[h_stripped] += 1 | |
for name, count in header_occurrences.items(): | |
if count > 1: | |
self._problems.DuplicateColumn( | |
header=name, | |
file_name=file_name, | |
count=count) | |
self._schedule._table_columns[table_name] = header | |
# check for unrecognized columns, which are often misspellings | |
unknown_cols = set(header) - set(all_cols) | |
if len(unknown_cols) == len(header): | |
self._problems.CsvSyntax( | |
description="The header row did not contain any known column " | |
"names. The file is most likely missing the header row " | |
"or not in the expected CSV format.", | |
context=(file_name, 1, [''] * len(raw_header), raw_header), | |
type=TYPE_ERROR) | |
else: | |
for col in unknown_cols: | |
# this is provided in order to create a nice colored list of | |
# columns in the validator output | |
context = (file_name, 1, [''] * len(header), header) | |
self._problems.UnrecognizedColumn(file_name, col, context) | |
missing_cols = set(required) - set(header) | |
for col in missing_cols: | |
# this is provided in order to create a nice colored list of | |
# columns in the validator output | |
context = (file_name, 1, [''] * len(header), header) | |
self._problems.MissingColumn(file_name, col, context) | |
line_num = 1 # First line read by reader.next() above | |
for raw_row in reader: | |
line_num += 1 | |
if len(raw_row) == 0: # skip extra empty lines in file | |
continue | |
if len(raw_row) > len(raw_header): | |
self._problems.OtherProblem('Found too many cells (commas) in line ' | |
'%d of file "%s". Every row in the file ' | |
'should have the same number of cells as ' | |
'the header (first line) does.' % | |
(line_num, file_name), | |
(file_name, line_num), | |
type=TYPE_WARNING) | |
if len(raw_row) < len(raw_header): | |
self._problems.OtherProblem('Found missing cells (commas) in line ' | |
'%d of file "%s". Every row in the file ' | |
'should have the same number of cells as ' | |
'the header (first line) does.' % | |
(line_num, file_name), | |
(file_name, line_num), | |
type=TYPE_WARNING) | |
# raw_row is a list of raw bytes which should be valid utf-8. Convert the
# columns of raw_row listed in valid_columns into Unicode.
valid_values = [] | |
unicode_error_columns = [] # index of valid_values elements with an error | |
for i in valid_columns: | |
try: | |
valid_values.append(raw_row[i].decode('utf-8')) | |
except UnicodeDecodeError: | |
# Replace all invalid characters with REPLACEMENT CHARACTER (U+FFFD) | |
valid_values.append(codecs.getdecoder("utf8") | |
(raw_row[i], errors="replace")[0]) | |
unicode_error_columns.append(len(valid_values) - 1) | |
except IndexError: | |
break | |
# The error report may contain a dump of all values in valid_values, so
# problems cannot be reported until after converting all of raw_row to
# Unicode.
for i in unicode_error_columns: | |
self._problems.InvalidValue(header[i], valid_values[i], | |
'Unicode error', | |
(file_name, line_num, | |
valid_values, header)) | |
d = dict(zip(header, valid_values)) | |
yield (d, line_num, header, valid_values) | |
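# The per-cell decode step above can be sketched standalone (Python 3,
# illustrative names): keep strict UTF-8 when possible, otherwise substitute
# U+FFFD and flag the cell so it can be reported with full row context later.

```python
def decode_cell(raw):
    """Return (text, had_error) for one csv cell of raw bytes."""
    try:
        return raw.decode("utf-8"), False
    except UnicodeDecodeError:
        # Replace invalid bytes with REPLACEMENT CHARACTER (U+FFFD)
        return raw.decode("utf-8", errors="replace"), True

text, had_error = decode_cell(b"caf\xe9")  # Latin-1 byte, invalid as UTF-8
```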
# TODO: Add testing for this specific function | |
def _ReadCSV(self, file_name, cols, required): | |
"""Reads lines from file_name, yielding a list of unicode values | |
corresponding to the column names in cols.""" | |
contents = self._GetUtf8Contents(file_name) | |
if not contents: | |
return | |
eol_checker = EndOfLineChecker(StringIO.StringIO(contents), | |
file_name, self._problems) | |
reader = csv.reader(eol_checker) # Use excel dialect | |
header = reader.next() | |
header = map(lambda x: x.strip(), header) # trim any whitespace | |
header_occurrences = defaultdict(lambda: 0) | |
for column_header in header: | |
header_occurrences[column_header] += 1 | |
for name, count in header_occurrences.items(): | |
if count > 1: | |
self._problems.DuplicateColumn( | |
header=name, | |
file_name=file_name, | |
count=count) | |
# check for unrecognized columns, which are often misspellings | |
unknown_cols = set(header).difference(set(cols)) | |
for col in unknown_cols: | |
# this is provided in order to create a nice colored list of | |
# columns in the validator output | |
context = (file_name, 1, [''] * len(header), header) | |
self._problems.UnrecognizedColumn(file_name, col, context) | |
col_index = [-1] * len(cols) | |
for i in range(len(cols)): | |
if cols[i] in header: | |
col_index[i] = header.index(cols[i]) | |
elif cols[i] in required: | |
self._problems.MissingColumn(file_name, cols[i]) | |
row_num = 1 | |
for row in reader: | |
row_num += 1 | |
if len(row) == 0: # skip extra empty lines in file | |
continue | |
if len(row) > len(header): | |
self._problems.OtherProblem('Found too many cells (commas) in line ' | |
'%d of file "%s". Every row in the file ' | |
'should have the same number of cells as ' | |
'the header (first line) does.' % | |
(row_num, file_name), (file_name, row_num), | |
type=TYPE_WARNING) | |
if len(row) < len(header): | |
self._problems.OtherProblem('Found missing cells (commas) in line ' | |
'%d of file "%s". Every row in the file ' | |
'should have the same number of cells as ' | |
'the header (first line) does.' % | |
(row_num, file_name), (file_name, row_num), | |
type=TYPE_WARNING) | |
result = [None] * len(cols) | |
unicode_error_columns = [] # A list of column numbers with an error | |
for i in range(len(cols)): | |
ci = col_index[i] | |
if ci >= 0: | |
if len(row) <= ci: # handle short CSV rows | |
result[i] = u'' | |
else: | |
try: | |
result[i] = row[ci].decode('utf-8').strip() | |
except UnicodeDecodeError: | |
# Replace all invalid characters with | |
# REPLACEMENT CHARACTER (U+FFFD) | |
result[i] = codecs.getdecoder("utf8")(row[ci], | |
errors="replace")[0].strip() | |
unicode_error_columns.append(i) | |
for i in unicode_error_columns: | |
self._problems.InvalidValue(cols[i], result[i], | |
'Unicode error', | |
(file_name, row_num, result, cols)) | |
yield (result, row_num, cols) | |
def _HasFile(self, file_name): | |
"""Returns True if there's a file in the current feed with the | |
given file_name in the current feed.""" | |
if self._zip: | |
return file_name in self._zip.namelist() | |
else: | |
file_path = os.path.join(self._path, file_name) | |
return os.path.exists(file_path) and os.path.isfile(file_path) | |
def _FileContents(self, file_name): | |
results = None | |
if self._zip: | |
try: | |
results = self._zip.read(file_name) | |
except KeyError: # file not found in archive
self._problems.MissingFile(file_name) | |
return None | |
else: | |
try: | |
data_file = open(os.path.join(self._path, file_name), 'rb') | |
results = data_file.read() | |
except IOError: # file not found | |
self._problems.MissingFile(file_name) | |
return None | |
if not results: | |
self._problems.EmptyFile(file_name) | |
return results | |
def _LoadAgencies(self): | |
for (d, row_num, header, row) in self._ReadCsvDict('agency.txt', | |
Agency._FIELD_NAMES, | |
Agency._REQUIRED_FIELD_NAMES): | |
self._problems.SetFileContext('agency.txt', row_num, row, header) | |
agency = Agency(field_dict=d) | |
self._schedule.AddAgencyObject(agency, self._problems) | |
self._problems.ClearContext() | |
def _LoadStops(self): | |
for (d, row_num, header, row) in self._ReadCsvDict( | |
'stops.txt', | |
Stop._FIELD_NAMES, | |
Stop._REQUIRED_FIELD_NAMES): | |
self._problems.SetFileContext('stops.txt', row_num, row, header) | |
stop = Stop(field_dict=d) | |
stop.Validate(self._problems) | |
self._schedule.AddStopObject(stop, self._problems) | |
self._problems.ClearContext() | |
def _LoadRoutes(self): | |
for (d, row_num, header, row) in self._ReadCsvDict( | |
'routes.txt', | |
Route._FIELD_NAMES, | |
Route._REQUIRED_FIELD_NAMES): | |
self._problems.SetFileContext('routes.txt', row_num, row, header) | |
route = Route(field_dict=d) | |
self._schedule.AddRouteObject(route, self._problems) | |
self._problems.ClearContext() | |
def _LoadCalendar(self): | |
file_name = 'calendar.txt' | |
file_name_dates = 'calendar_dates.txt' | |
if not self._HasFile(file_name) and not self._HasFile(file_name_dates): | |
self._problems.MissingFile(file_name) | |
return | |
# map period IDs to (period object, (file_name, row_num, row, cols)) | |
periods = {} | |
# process calendar.txt | |
if self._HasFile(file_name): | |
has_useful_contents = False | |
for (row, row_num, cols) in \ | |
self._ReadCSV(file_name, | |
ServicePeriod._FIELD_NAMES, | |
ServicePeriod._FIELD_NAMES_REQUIRED): | |
context = (file_name, row_num, row, cols) | |
self._problems.SetFileContext(*context) | |
period = ServicePeriod(field_list=row) | |
if period.service_id in periods: | |
self._problems.DuplicateID('service_id', period.service_id) | |
else: | |
periods[period.service_id] = (period, context) | |
self._problems.ClearContext() | |
# process calendar_dates.txt | |
if self._HasFile(file_name_dates): | |
# ['service_id', 'date', 'exception_type'] | |
fields = ServicePeriod._FIELD_NAMES_CALENDAR_DATES | |
for (row, row_num, cols) in self._ReadCSV(file_name_dates, | |
fields, fields): | |
context = (file_name_dates, row_num, row, cols) | |
self._problems.SetFileContext(*context) | |
service_id = row[0] | |
period = None | |
if service_id in periods: | |
period = periods[service_id][0] | |
else: | |
period = ServicePeriod(service_id) | |
periods[period.service_id] = (period, context) | |
exception_type = row[2] | |
if exception_type == u'1': | |
period.SetDateHasService(row[1], True, self._problems) | |
elif exception_type == u'2': | |
period.SetDateHasService(row[1], False, self._problems) | |
else: | |
self._problems.InvalidValue('exception_type', exception_type) | |
self._problems.ClearContext() | |
# Now insert the periods into the schedule object, so that they're | |
# validated with both calendar and calendar_dates info present | |
for period, context in periods.values(): | |
self._problems.SetFileContext(*context) | |
self._schedule.AddServicePeriodObject(period, self._problems) | |
self._problems.ClearContext() | |
def _LoadShapes(self): | |
if not self._HasFile('shapes.txt'): | |
return | |
shapes = {} # shape_id to tuple | |
for (row, row_num, cols) in self._ReadCSV('shapes.txt', | |
Shape._FIELD_NAMES, | |
Shape._REQUIRED_FIELD_NAMES): | |
file_context = ('shapes.txt', row_num, row, cols) | |
self._problems.SetFileContext(*file_context) | |
(shape_id, lat, lon, seq, dist) = row | |
if IsEmpty(shape_id): | |
self._problems.MissingValue('shape_id') | |
continue | |
try: | |
seq = int(seq) | |
except (TypeError, ValueError): | |
self._problems.InvalidValue('shape_pt_sequence', seq, | |
'Value should be a number (0 or higher)') | |
continue | |
shapes.setdefault(shape_id, []).append((seq, lat, lon, dist, file_context)) | |
self._problems.ClearContext() | |
for shape_id, points in shapes.items(): | |
shape = Shape(shape_id) | |
points.sort() | |
if points and points[0][0] < 0: | |
self._problems.InvalidValue('shape_pt_sequence', points[0][0], | |
'In shape %s, a negative sequence number ' | |
'%d was found; sequence numbers should be ' | |
'0 or higher.' % (shape_id, points[0][0])) | |
last_seq = None | |
for (seq, lat, lon, dist, file_context) in points: | |
if (seq == last_seq): | |
self._problems.SetFileContext(*file_context) | |
self._problems.InvalidValue('shape_pt_sequence', seq, | |
'The sequence number %d occurs more ' | |
'than once in shape %s.' % | |
(seq, shape_id)) | |
last_seq = seq | |
shape.AddPoint(lat, lon, dist, self._problems) | |
self._problems.ClearContext() | |
self._schedule.AddShapeObject(shape, self._problems) | |
def _LoadTrips(self): | |
for (d, row_num, header, row) in self._ReadCsvDict( | |
'trips.txt', | |
Trip._FIELD_NAMES, | |
Trip._REQUIRED_FIELD_NAMES): | |
self._problems.SetFileContext('trips.txt', row_num, row, header) | |
trip = Trip(field_dict=d) | |
self._schedule.AddTripObject(trip, self._problems) | |
self._problems.ClearContext() | |
def _LoadFares(self): | |
if not self._HasFile('fare_attributes.txt'): | |
return | |
for (row, row_num, cols) in self._ReadCSV('fare_attributes.txt', | |
Fare._FIELD_NAMES, | |
Fare._REQUIRED_FIELD_NAMES): | |
self._problems.SetFileContext('fare_attributes.txt', row_num, row, cols) | |
fare = Fare(field_list=row) | |
self._schedule.AddFareObject(fare, self._problems) | |
self._problems.ClearContext() | |
def _LoadFareRules(self): | |
if not self._HasFile('fare_rules.txt'): | |
return | |
for (row, row_num, cols) in self._ReadCSV('fare_rules.txt', | |
FareRule._FIELD_NAMES, | |
FareRule._REQUIRED_FIELD_NAMES): | |
self._problems.SetFileContext('fare_rules.txt', row_num, row, cols) | |
rule = FareRule(field_list=row) | |
self._schedule.AddFareRuleObject(rule, self._problems) | |
self._problems.ClearContext() | |
def _LoadHeadways(self): | |
file_name = 'frequencies.txt' | |
if not self._HasFile(file_name): # headways are an optional feature | |
return | |
# ['trip_id', 'start_time', 'end_time', 'headway_secs'] | |
fields = Trip._FIELD_NAMES_HEADWAY | |
modified_trips = {} | |
for (row, row_num, cols) in self._ReadCSV(file_name, fields, fields): | |
self._problems.SetFileContext(file_name, row_num, row, cols) | |
(trip_id, start_time, end_time, headway_secs) = row | |
try: | |
trip = self._schedule.GetTrip(trip_id) | |
trip.AddHeadwayPeriod(start_time, end_time, headway_secs, | |
self._problems) | |
modified_trips[trip_id] = trip | |
except KeyError: | |
self._problems.InvalidValue('trip_id', trip_id) | |
self._problems.ClearContext() | |
for trip in modified_trips.values(): | |
trip.Validate(self._problems) | |
def _LoadStopTimes(self): | |
for (row, row_num, cols) in self._ReadCSV('stop_times.txt', | |
StopTime._FIELD_NAMES, | |
StopTime._REQUIRED_FIELD_NAMES): | |
file_context = ('stop_times.txt', row_num, row, cols) | |
self._problems.SetFileContext(*file_context) | |
(trip_id, arrival_time, departure_time, stop_id, stop_sequence, | |
stop_headsign, pickup_type, drop_off_type, shape_dist_traveled) = row | |
try: | |
sequence = int(stop_sequence) | |
except (TypeError, ValueError): | |
self._problems.InvalidValue('stop_sequence', stop_sequence, | |
'This should be a number.') | |
continue | |
if sequence < 0: | |
self._problems.InvalidValue('stop_sequence', sequence, | |
'Sequence numbers should be 0 or higher.') | |
if stop_id not in self._schedule.stops: | |
self._problems.InvalidValue('stop_id', stop_id, | |
'This value wasn\'t defined in stops.txt') | |
continue | |
stop = self._schedule.stops[stop_id] | |
if trip_id not in self._schedule.trips: | |
self._problems.InvalidValue('trip_id', trip_id, | |
'This value wasn\'t defined in trips.txt') | |
continue | |
trip = self._schedule.trips[trip_id] | |
# If self._problems.Report returns then StopTime.__init__ will return | |
# even if the StopTime object has an error. Thus this code may add a | |
# StopTime that didn't validate to the database. | |
# Trip.GetStopTimes then tries to make a StopTime from the invalid data | |
# and calls the problem reporter for errors. An ugly solution is to | |
# wrap problems and a better solution is to move all validation out of | |
# __init__. For now make sure Trip.GetStopTimes gets a problem reporter | |
# when called from Trip.Validate. | |
stop_time = StopTime(self._problems, stop, arrival_time, | |
departure_time, stop_headsign, | |
pickup_type, drop_off_type, | |
shape_dist_traveled, stop_sequence=sequence) | |
trip._AddStopTimeObjectUnordered(stop_time, self._schedule) | |
self._problems.ClearContext() | |
# stop_times are validated in Trip.ValidateChildren, called by | |
# Schedule.Validate | |
def _LoadTransfers(self): | |
file_name = 'transfers.txt' | |
if not self._HasFile(file_name): # transfers are an optional feature | |
return | |
for (d, row_num, header, row) in self._ReadCsvDict(file_name, | |
Transfer._FIELD_NAMES, | |
Transfer._REQUIRED_FIELD_NAMES): | |
self._problems.SetFileContext(file_name, row_num, row, header) | |
transfer = Transfer(field_dict=d) | |
self._schedule.AddTransferObject(transfer, self._problems) | |
self._problems.ClearContext() | |
def Load(self): | |
self._problems.ClearContext() | |
if not self._DetermineFormat(): | |
return self._schedule | |
self._CheckFileNames() | |
self._LoadAgencies() | |
self._LoadStops() | |
self._LoadRoutes() | |
self._LoadCalendar() | |
self._LoadShapes() | |
self._LoadTrips() | |
self._LoadHeadways() | |
if self._load_stop_times: | |
self._LoadStopTimes() | |
self._LoadFares() | |
self._LoadFareRules() | |
self._LoadTransfers() | |
if self._zip: | |
self._zip.close() | |
self._zip = None | |
if self._extra_validation: | |
self._schedule.Validate(self._problems, validate_children=False) | |
return self._schedule | |
class ShapeLoader(Loader): | |
"""A subclass of Loader that only loads the shapes from a GTFS file.""" | |
def __init__(self, *args, **kwargs): | |
"""Initialize a new ShapeLoader object. | |
See Loader.__init__ for argument documentation. | |
""" | |
Loader.__init__(self, *args, **kwargs) | |
def Load(self): | |
self._LoadShapes() | |
return self._schedule | |
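The shape-loading code in `_LoadShapes` buckets rows by `shape_id`, sorts each bucket by `shape_pt_sequence`, and reports negative or duplicate sequence numbers. A minimal standalone sketch of that pattern (the function name and the plain `problems` list are illustrative, not part of the library):

```python
# Sketch of the grouping/sorting/duplicate-detection pattern used by
# Loader._LoadShapes. Rows are (shape_id, lat, lon, seq) tuples.
def group_shape_rows(rows):
    shapes = {}
    problems = []
    for shape_id, lat, lon, seq in rows:
        try:
            seq = int(seq)
        except (TypeError, ValueError):
            problems.append('bad sequence %r in shape %s' % (seq, shape_id))
            continue
        shapes.setdefault(shape_id, []).append((seq, lat, lon))
    for shape_id, points in shapes.items():
        points.sort()  # order points by sequence number
        if points[0][0] < 0:
            problems.append('negative sequence in shape %s' % shape_id)
        last_seq = None
        for seq, lat, lon in points:
            if seq == last_seq:
                problems.append('duplicate sequence %d in shape %s'
                                % (seq, shape_id))
            last_seq = seq
    return shapes, problems

shapes, problems = group_shape_rows([
    ('A', 10.0, 20.0, '2'),
    ('A', 10.1, 20.1, '1'),
    ('A', 10.2, 20.2, '2'),   # duplicate sequence number
])
```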
Binary files a/origin-src/transitfeed-1.2.5/transitfeed/_transitfeed.pyc and /dev/null differ
#!/usr/bin/python2.4 | |
# | |
# Copyright 2007 Google Inc. All Rights Reserved. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""A library for manipulating points and polylines. | |
This is a library for creating and manipulating points on the unit | |
sphere, as an approximate model of Earth. The primary use of this | |
library is to make manipulation and matching of polylines easy in the | |
transitfeed library. | |
NOTE: in this library, Earth is modelled as a sphere, whereas | |
GTFS specifies that latitudes and longitudes are in WGS84. For the | |
purpose of comparing and matching latitudes and longitudes that | |
are relatively close together on the surface of the earth, this | |
is adequate; for other purposes, this library may not be accurate | |
enough. | |
""" | |
__author__ = 'chris.harrelson.code@gmail.com (Chris Harrelson)' | |
import copy | |
import decimal | |
import heapq | |
import math | |
class ShapeError(Exception): | |
"""Thrown whenever there is a shape parsing error.""" | |
pass | |
EARTH_RADIUS_METERS = 6371010.0 | |
class Point(object): | |
""" | |
A class representing a point on the unit sphere in three dimensions. | |
""" | |
def __init__(self, x, y, z): | |
self.x = x | |
self.y = y | |
self.z = z | |
def __hash__(self): | |
return hash((self.x, self.y, self.z)) | |
def __cmp__(self, other): | |
if not isinstance(other, Point): | |
raise TypeError('Point.__cmp__(x,y) requires y to be a "Point", ' | |
'not a "%s"' % type(other).__name__) | |
return cmp((self.x, self.y, self.z), (other.x, other.y, other.z)) | |
def __str__(self): | |
return "(%.15f, %.15f, %.15f) " % (self.x, self.y, self.z) | |
def Norm2(self): | |
""" | |
Returns the L_2 (Euclidean) norm of self. | |
""" | |
    sum_sq = self.x * self.x + self.y * self.y + self.z * self.z
    return math.sqrt(float(sum_sq))
def IsUnitLength(self): | |
return abs(self.Norm2() - 1.0) < 1e-14 | |
def Plus(self, other): | |
""" | |
Returns a new point which is the pointwise sum of self and other. | |
""" | |
return Point(self.x + other.x, | |
self.y + other.y, | |
self.z + other.z) | |
def Minus(self, other): | |
""" | |
Returns a new point which is the pointwise subtraction of other from | |
self. | |
""" | |
return Point(self.x - other.x, | |
self.y - other.y, | |
self.z - other.z) | |
def DotProd(self, other): | |
""" | |
Returns the (scalar) dot product of self with other. | |
""" | |
return self.x * other.x + self.y * other.y + self.z * other.z | |
def Times(self, val): | |
""" | |
Returns a new point which is pointwise multiplied by val. | |
""" | |
return Point(self.x * val, self.y * val, self.z * val) | |
def Normalize(self): | |
""" | |
Returns a unit point in the same direction as self. | |
""" | |
return self.Times(1 / self.Norm2()) | |
def RobustCrossProd(self, other): | |
""" | |
A robust version of cross product. If self and other | |
are not nearly the same point, returns the same value | |
as CrossProd() modulo normalization. Otherwise returns | |
an arbitrary unit point orthogonal to self. | |
""" | |
assert(self.IsUnitLength() and other.IsUnitLength()) | |
x = self.Plus(other).CrossProd(other.Minus(self)) | |
if abs(x.x) > 1e-15 or abs(x.y) > 1e-15 or abs(x.z) > 1e-15: | |
return x.Normalize() | |
else: | |
return self.Ortho() | |
def LargestComponent(self): | |
""" | |
Returns (i, val) where i is the component index (0 - 2) | |
which has largest absolute value and val is the value | |
of the component. | |
""" | |
if abs(self.x) > abs(self.y): | |
if abs(self.x) > abs(self.z): | |
return (0, self.x) | |
else: | |
return (2, self.z) | |
else: | |
if abs(self.y) > abs(self.z): | |
return (1, self.y) | |
else: | |
return (2, self.z) | |
def Ortho(self): | |
"""Returns a unit-length point orthogonal to this point""" | |
(index, val) = self.LargestComponent() | |
index = index - 1 | |
if index < 0: | |
index = 2 | |
temp = Point(0.012, 0.053, 0.00457) | |
if index == 0: | |
temp.x = 1 | |
elif index == 1: | |
temp.y = 1 | |
elif index == 2: | |
temp.z = 1 | |
return self.CrossProd(temp).Normalize() | |
def CrossProd(self, other): | |
""" | |
Returns the cross product of self and other. | |
""" | |
return Point( | |
self.y * other.z - self.z * other.y, | |
self.z * other.x - self.x * other.z, | |
self.x * other.y - self.y * other.x) | |
@staticmethod | |
def _approxEq(a, b): | |
return abs(a - b) < 1e-11 | |
def Equals(self, other): | |
""" | |
    Returns true if self and other are approximately equal.
""" | |
return (self._approxEq(self.x, other.x) | |
and self._approxEq(self.y, other.y) | |
and self._approxEq(self.z, other.z)) | |
def Angle(self, other): | |
""" | |
Returns the angle in radians between self and other. | |
""" | |
return math.atan2(self.CrossProd(other).Norm2(), | |
self.DotProd(other)) | |
def ToLatLng(self): | |
""" | |
    Returns the latitude and longitude that this point represents
under a spherical Earth model. | |
""" | |
rad_lat = math.atan2(self.z, math.sqrt(self.x * self.x + self.y * self.y)) | |
rad_lng = math.atan2(self.y, self.x) | |
return (rad_lat * 180.0 / math.pi, rad_lng * 180.0 / math.pi) | |
@staticmethod | |
def FromLatLng(lat, lng): | |
""" | |
Returns a new point representing this latitude and longitude under | |
a spherical Earth model. | |
""" | |
phi = lat * (math.pi / 180.0) | |
theta = lng * (math.pi / 180.0) | |
cosphi = math.cos(phi) | |
return Point(math.cos(theta) * cosphi, | |
math.sin(theta) * cosphi, | |
math.sin(phi)) | |
def GetDistanceMeters(self, other): | |
assert(self.IsUnitLength() and other.IsUnitLength()) | |
return self.Angle(other) * EARTH_RADIUS_METERS | |
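The sphere math above is compact: `FromLatLng` maps a (lat, lng) pair to a unit vector, and `GetDistanceMeters` is `atan2(|a x b|, a . b) * radius`. This standalone sketch re-derives one degree of latitude (about 111.2 km) without the `Point` class; the function names are illustrative only:

```python
# Re-derivation of Point.FromLatLng / Point.GetDistanceMeters on plain
# 3-tuples, to show the unit-sphere distance formula in isolation.
import math

EARTH_RADIUS_METERS = 6371010.0

def from_lat_lng(lat, lng):
    phi, theta = math.radians(lat), math.radians(lng)
    cosphi = math.cos(phi)
    return (math.cos(theta) * cosphi, math.sin(theta) * cosphi, math.sin(phi))

def distance_meters(a, b):
    cross = (a[1] * b[2] - a[2] * b[1],
             a[2] * b[0] - a[0] * b[2],
             a[0] * b[1] - a[1] * b[0])
    cross_norm = math.sqrt(sum(c * c for c in cross))
    dot = sum(x * y for x, y in zip(a, b))
    # atan2 form is numerically stable for both tiny and near-pi angles.
    return math.atan2(cross_norm, dot) * EARTH_RADIUS_METERS

one_degree = distance_meters(from_lat_lng(0, 0), from_lat_lng(1, 0))
```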
def SimpleCCW(a, b, c): | |
""" | |
Returns true if the triangle abc is oriented counterclockwise. | |
""" | |
return c.CrossProd(a).DotProd(b) > 0 | |
def GetClosestPoint(x, a, b): | |
""" | |
Returns the point on the great circle segment ab closest to x. | |
""" | |
assert(x.IsUnitLength()) | |
assert(a.IsUnitLength()) | |
assert(b.IsUnitLength()) | |
a_cross_b = a.RobustCrossProd(b) | |
# project to the great circle going through a and b | |
p = x.Minus( | |
a_cross_b.Times( | |
x.DotProd(a_cross_b) / a_cross_b.Norm2())) | |
# if p lies between a and b, return it | |
if SimpleCCW(a_cross_b, a, p) and SimpleCCW(p, b, a_cross_b): | |
return p.Normalize() | |
# otherwise return the closer of a or b | |
if x.Minus(a).Norm2() <= x.Minus(b).Norm2(): | |
return a | |
else: | |
return b | |
class Poly(object): | |
""" | |
A class representing a polyline. | |
""" | |
def __init__(self, points = [], name=None): | |
self._points = list(points) | |
self._name = name | |
def AddPoint(self, p): | |
""" | |
Adds a new point to the end of the polyline. | |
""" | |
assert(p.IsUnitLength()) | |
self._points.append(p) | |
def GetName(self): | |
return self._name | |
def GetPoint(self, i): | |
return self._points[i] | |
def GetPoints(self): | |
return self._points | |
def GetNumPoints(self): | |
return len(self._points) | |
def _GetPointSafe(self, i): | |
try: | |
return self.GetPoint(i) | |
except IndexError: | |
return None | |
def GetClosestPoint(self, p): | |
""" | |
Returns (closest_p, closest_i), where closest_p is the closest point | |
to p on the piecewise linear curve represented by the polyline, | |
and closest_i is the index of the point on the polyline just before | |
the polyline segment that contains closest_p. | |
""" | |
assert(len(self._points) > 0) | |
closest_point = self._points[0] | |
closest_i = 0 | |
for i in range(0, len(self._points) - 1): | |
(a, b) = (self._points[i], self._points[i+1]) | |
cur_closest_point = GetClosestPoint(p, a, b) | |
if p.Angle(cur_closest_point) < p.Angle(closest_point): | |
closest_point = cur_closest_point.Normalize() | |
closest_i = i | |
return (closest_point, closest_i) | |
def LengthMeters(self): | |
"""Return length of this polyline in meters.""" | |
assert(len(self._points) > 0) | |
length = 0 | |
for i in range(0, len(self._points) - 1): | |
length += self._points[i].GetDistanceMeters(self._points[i+1]) | |
return length | |
def Reversed(self): | |
"""Return a polyline that is the reverse of this polyline.""" | |
return Poly(reversed(self.GetPoints()), self.GetName()) | |
def CutAtClosestPoint(self, p): | |
""" | |
Let x be the point on the polyline closest to p. Then | |
CutAtClosestPoint returns two new polylines, one representing | |
the polyline from the beginning up to x, and one representing | |
x onwards to the end of the polyline. x is the first point | |
returned in the second polyline. | |
""" | |
(closest, i) = self.GetClosestPoint(p) | |
tmp = [closest] | |
tmp.extend(self._points[i+1:]) | |
return (Poly(self._points[0:i+1]), | |
Poly(tmp)) | |
def GreedyPolyMatchDist(self, shape): | |
""" | |
Tries a greedy matching algorithm to match self to the | |
given shape. Returns the maximum distance in meters of | |
any point in self to its matched point in shape under the | |
algorithm. | |
Args: shape, a Poly object. | |
""" | |
tmp_shape = Poly(shape.GetPoints()) | |
max_radius = 0 | |
for (i, point) in enumerate(self._points): | |
tmp_shape = tmp_shape.CutAtClosestPoint(point)[1] | |
dist = tmp_shape.GetPoint(0).GetDistanceMeters(point) | |
max_radius = max(max_radius, dist) | |
return max_radius | |
@staticmethod | |
def MergePolys(polys, merge_point_threshold=10): | |
""" | |
Merge multiple polylines, in the order that they were passed in. | |
    Merged polyline will have the names of its component parts joined by ';'.
Example: merging [a,b], [c,d] and [e,f] will result in [a,b,c,d,e,f]. | |
However if the endpoints of two adjacent polylines are less than | |
merge_point_threshold meters apart, we will only use the first endpoint in | |
the merged polyline. | |
""" | |
    name = ";".join(p.GetName() or '' for p in polys)
merged = Poly([], name) | |
if polys: | |
first_poly = polys[0] | |
for p in first_poly.GetPoints(): | |
merged.AddPoint(p) | |
last_point = merged._GetPointSafe(-1) | |
for poly in polys[1:]: | |
first_point = poly._GetPointSafe(0) | |
if (last_point and first_point and | |
last_point.GetDistanceMeters(first_point) <= merge_point_threshold): | |
points = poly.GetPoints()[1:] | |
else: | |
points = poly.GetPoints() | |
for p in points: | |
merged.AddPoint(p) | |
last_point = merged._GetPointSafe(-1) | |
return merged | |
def __str__(self): | |
return self._ToString(str) | |
def ToLatLngString(self): | |
return self._ToString(lambda p: str(p.ToLatLng())) | |
def _ToString(self, pointToStringFn): | |
return "%s: %s" % (self.GetName() or "", | |
", ".join([pointToStringFn(p) for p in self._points])) | |
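`Poly.MergePolys` above concatenates segments but drops the joint point when two adjacent endpoints are within `merge_point_threshold` meters of each other. The rule can be sketched with plain 1-D "points" and a scalar gap; `merge_segments` is a hypothetical stand-in, not a library function:

```python
# The endpoint-deduplication rule of Poly.MergePolys, reduced to lists of
# numbers: when the gap between one segment's last point and the next
# segment's first point is within the threshold, keep only the first copy.
def merge_segments(segments, threshold=10):
    merged = []
    for seg in segments:
        if merged and seg and abs(merged[-1] - seg[0]) <= threshold:
            merged.extend(seg[1:])   # shared endpoint: skip the duplicate
        else:
            merged.extend(seg)
    return merged

path = merge_segments([[0, 100], [105, 200], [500, 600]], threshold=10)
```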
class PolyCollection(object): | |
""" | |
A class representing a collection of polylines. | |
""" | |
  def __init__(self):
    self._name_to_shape = {}
def AddPoly(self, poly, smart_duplicate_handling=True): | |
""" | |
Adds a new polyline to the collection. | |
""" | |
inserted_name = poly.GetName() | |
if poly.GetName() in self._name_to_shape: | |
if not smart_duplicate_handling: | |
raise ShapeError("Duplicate shape found: " + poly.GetName()) | |
print ("Warning: duplicate shape id being added to collection: " + | |
poly.GetName()) | |
if poly.GreedyPolyMatchDist(self._name_to_shape[poly.GetName()]) < 10: | |
      print " (Skipping as it appears to be an exact duplicate)"
else: | |
print " (Adding new shape variant with uniquified name)" | |
inserted_name = "%s-%d" % (inserted_name, len(self._name_to_shape)) | |
self._name_to_shape[inserted_name] = poly | |
def NumPolys(self): | |
return len(self._name_to_shape) | |
def FindMatchingPolys(self, start_point, end_point, max_radius=150): | |
""" | |
Returns a list of polylines in the collection that have endpoints | |
within max_radius of the given start and end points. | |
""" | |
matches = [] | |
for shape in self._name_to_shape.itervalues(): | |
if start_point.GetDistanceMeters(shape.GetPoint(0)) < max_radius and \ | |
end_point.GetDistanceMeters(shape.GetPoint(-1)) < max_radius: | |
matches.append(shape) | |
return matches | |
class PolyGraph(PolyCollection): | |
""" | |
A class representing a graph where the edges are polylines. | |
""" | |
def __init__(self): | |
PolyCollection.__init__(self) | |
self._nodes = {} | |
def AddPoly(self, poly, smart_duplicate_handling=True): | |
PolyCollection.AddPoly(self, poly, smart_duplicate_handling) | |
start_point = poly.GetPoint(0) | |
end_point = poly.GetPoint(-1) | |
self._AddNodeWithEdge(start_point, poly) | |
self._AddNodeWithEdge(end_point, poly) | |
def _AddNodeWithEdge(self, point, edge): | |
if point in self._nodes: | |
self._nodes[point].add(edge) | |
else: | |
self._nodes[point] = set([edge]) | |
def ShortestPath(self, start, goal): | |
"""Uses the A* algorithm to find a shortest path between start and goal. | |
For more background see http://en.wikipedia.org/wiki/A-star_algorithm | |
Some definitions: | |
g(x): The actual shortest distance traveled from initial node to current | |
node. | |
h(x): The estimated (or "heuristic") distance from current node to goal. | |
We use the distance on Earth from node to goal as the heuristic. | |
This heuristic is both admissible and monotonic (see wikipedia for | |
more details). | |
f(x): The sum of g(x) and h(x), used to prioritize elements to look at. | |
Arguments: | |
start: Point that is in the graph, start point of the search. | |
goal: Point that is in the graph, end point for the search. | |
Returns: | |
A Poly object representing the shortest polyline through the graph from | |
start to goal, or None if no path found. | |
""" | |
assert start in self._nodes | |
assert goal in self._nodes | |
closed_set = set() # Set of nodes already evaluated. | |
open_heap = [(0, start)] # Nodes to visit, heapified by f(x). | |
open_set = set([start]) # Same as open_heap, but a set instead of a heap. | |
g_scores = { start: 0 } # Distance from start along optimal path | |
came_from = {} # Map to reconstruct optimal path once we're done. | |
while open_set: | |
(f_x, x) = heapq.heappop(open_heap) | |
open_set.remove(x) | |
if x == goal: | |
return self._ReconstructPath(came_from, goal) | |
closed_set.add(x) | |
edges = self._nodes[x] | |
for edge in edges: | |
if edge.GetPoint(0) == x: | |
y = edge.GetPoint(-1) | |
else: | |
y = edge.GetPoint(0) | |
if y in closed_set: | |
continue | |
tentative_g_score = g_scores[x] + edge.LengthMeters() | |
tentative_is_better = False | |
if y not in open_set: | |
h_y = y.GetDistanceMeters(goal) | |
f_y = tentative_g_score + h_y | |
open_set.add(y) | |
heapq.heappush(open_heap, (f_y, y)) | |
tentative_is_better = True | |
elif tentative_g_score < g_scores[y]: | |
tentative_is_better = True | |
if tentative_is_better: | |
came_from[y] = (x, edge) | |
g_scores[y] = tentative_g_score | |
return None | |
def _ReconstructPath(self, came_from, current_node): | |
""" | |
Helper method for ShortestPath, to reconstruct path. | |
Arguments: | |
came_from: a dictionary mapping Point to (Point, Poly) tuples. | |
This dictionary keeps track of the previous neighbor to a node, and | |
the edge used to get from the previous neighbor to the node. | |
current_node: the current Point in the path. | |
Returns: | |
A Poly that represents the path through the graph from the start of the | |
search to current_node. | |
""" | |
if current_node in came_from: | |
(previous_node, previous_edge) = came_from[current_node] | |
if previous_edge.GetPoint(0) == current_node: | |
previous_edge = previous_edge.Reversed() | |
p = self._ReconstructPath(came_from, previous_node) | |
return Poly.MergePolys([p, previous_edge], merge_point_threshold=0) | |
else: | |
return Poly([], '') | |
def FindShortestMultiPointPath(self, points, max_radius=150, keep_best_n=10, | |
verbosity=0): | |
""" | |
Return a polyline, representing the shortest path through this graph that | |
has edge endpoints on each of a given list of points in sequence. We allow | |
fuzziness in matching of input points to points in this graph. | |
We limit ourselves to a view of the best keep_best_n paths at any time, as a | |
greedy optimization. | |
""" | |
assert len(points) > 1 | |
nearby_points = [] | |
paths_found = [] # A heap sorted by inverse path length. | |
for i, point in enumerate(points): | |
nearby = [p for p in self._nodes.iterkeys() | |
if p.GetDistanceMeters(point) < max_radius] | |
if verbosity >= 2: | |
print ("Nearby points for point %d %s: %s" | |
% (i + 1, | |
str(point.ToLatLng()), | |
", ".join([str(n.ToLatLng()) for n in nearby]))) | |
if nearby: | |
nearby_points.append(nearby) | |
else: | |
print "No nearby points found for point %s" % str(point.ToLatLng()) | |
return None | |
pathToStr = lambda start, end, path: (" Best path %s -> %s: %s" | |
% (str(start.ToLatLng()), | |
str(end.ToLatLng()), | |
path and path.GetName() or | |
"None")) | |
if verbosity >= 3: | |
print "Step 1" | |
step = 2 | |
start_points = nearby_points[0] | |
end_points = nearby_points[1] | |
for start in start_points: | |
for end in end_points: | |
path = self.ShortestPath(start, end) | |
if verbosity >= 3: | |
print pathToStr(start, end, path) | |
PolyGraph._AddPathToHeap(paths_found, path, keep_best_n) | |
for possible_points in nearby_points[2:]: | |
if verbosity >= 3: | |
print "\nStep %d" % step | |
step += 1 | |
new_paths_found = [] | |
start_end_paths = {} # cache of shortest paths between (start, end) pairs | |
for score, path in paths_found: | |
start = path.GetPoint(-1) | |
for end in possible_points: | |
if (start, end) in start_end_paths: | |
new_segment = start_end_paths[(start, end)] | |
else: | |
new_segment = self.ShortestPath(start, end) | |
if verbosity >= 3: | |
print pathToStr(start, end, new_segment) | |
start_end_paths[(start, end)] = new_segment | |
if new_segment: | |
new_path = Poly.MergePolys([path, new_segment], | |
merge_point_threshold=0) | |
PolyGraph._AddPathToHeap(new_paths_found, new_path, keep_best_n) | |
paths_found = new_paths_found | |
if paths_found: | |
best_score, best_path = max(paths_found) | |
return best_path | |
else: | |
return None | |
@staticmethod | |
def _AddPathToHeap(heap, path, keep_best_n): | |
if path and path.GetNumPoints(): | |
new_item = (-path.LengthMeters(), path) | |
if new_item not in heap: | |
if len(heap) < keep_best_n: | |
heapq.heappush(heap, new_item) | |
      elif new_item > heap[0]:
        # Only displace the current worst (longest) path when the new
        # path is shorter.
        heapq.heapreplace(heap, new_item)
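`PolyGraph.ShortestPath` above is A* with the great-circle distance to the goal as the heuristic. The same control flow on a toy adjacency-list graph, with the heuristic defaulting to zero (which reduces A* to Dijkstra); all names here are illustrative, not from the library:

```python
# Minimal A* over a dict of node -> [(edge_cost, neighbor)] lists, mirroring
# the g/h/f bookkeeping and came_from path reconstruction used by
# PolyGraph.ShortestPath.
import heapq

def a_star(neighbors, start, goal, heuristic=lambda n: 0):
    g = {start: 0}            # best known cost from start
    came_from = {}            # for path reconstruction
    open_heap = [(heuristic(start), start)]
    closed = set()
    while open_heap:
        f, x = heapq.heappop(open_heap)
        if x == goal:
            path = [goal]
            while path[-1] in came_from:
                path.append(came_from[path[-1]])
            return path[::-1]
        if x in closed:
            continue
        closed.add(x)
        for cost, y in neighbors.get(x, ()):
            tentative = g[x] + cost
            if y not in g or tentative < g[y]:
                g[y] = tentative
                came_from[y] = x
                heapq.heappush(open_heap, (tentative + heuristic(y), y))
    return None  # goal unreachable

graph = {'a': [(1, 'b'), (4, 'c')], 'b': [(1, 'c')], 'c': []}
route = a_star(graph, 'a', 'c')
```

Unlike the library version, stale heap entries are skipped via the `closed` check on pop rather than by tracking an open set.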
#!/usr/bin/python2.5 | |
# Copyright (C) 2009 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
import optparse | |
import sys | |
class OptionParserLongError(optparse.OptionParser): | |
"""OptionParser subclass that includes list of options above error message.""" | |
def error(self, msg): | |
print >>sys.stderr, self.format_help() | |
print >>sys.stderr, '\n\n%s: error: %s\n\n' % (self.get_prog_name(), msg) | |
sys.exit(2) | |
def RunWithCrashHandler(f): | |
try: | |
exit_code = f() | |
sys.exit(exit_code) | |
except (SystemExit, KeyboardInterrupt): | |
raise | |
except: | |
import inspect | |
import traceback | |
# Save trace and exception now. These calls look at the most recently | |
# raised exception. The code that makes the report might trigger other | |
# exceptions. | |
original_trace = inspect.trace(3)[1:] | |
formatted_exception = traceback.format_exception_only(*(sys.exc_info()[:2])) | |
apology = """Yikes, the program threw an unexpected exception! | |
Hopefully a complete report has been saved to transitfeedcrash.txt, | |
though if you are seeing this message we've already disappointed you once | |
today. Please include the report in a new issue at | |
http://code.google.com/p/googletransitdatafeed/issues/entry | |
or an email to the public group googletransitdatafeed@googlegroups.com. Sorry! | |
""" | |
dashes = '%s\n' % ('-' * 60) | |
dump = [] | |
dump.append(apology) | |
dump.append(dashes) | |
try: | |
import transitfeed | |
dump.append("transitfeed version %s\n\n" % transitfeed.__version__) | |
except NameError: | |
# Oh well, guess we won't put the version in the report | |
pass | |
for (frame_obj, filename, line_num, fun_name, context_lines, | |
context_index) in original_trace: | |
dump.append('File "%s", line %d, in %s\n' % (filename, line_num, | |
fun_name)) | |
if context_lines: | |
for (i, line) in enumerate(context_lines): | |
if i == context_index: | |
dump.append(' --> %s' % line) | |
else: | |
dump.append(' %s' % line) | |
for local_name, local_val in frame_obj.f_locals.items(): | |
try: | |
truncated_val = str(local_val)[0:500] | |
except Exception, e: | |
dump.append(' Exception in str(%s): %s' % (local_name, e)) | |
else: | |
if len(truncated_val) >= 500: | |
truncated_val = '%s...' % truncated_val[0:499] | |
dump.append(' %s = %s\n' % (local_name, truncated_val)) | |
dump.append('\n') | |
dump.append(''.join(formatted_exception)) | |
open('transitfeedcrash.txt', 'w').write(''.join(dump)) | |
print ''.join(dump) | |
print dashes | |
print apology | |
try: | |
raw_input('Press enter to continue...') | |
except EOFError: | |
# Ignore stdin being closed. This happens during some tests. | |
pass | |
sys.exit(127) | |
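`RunWithCrashHandler` above deliberately snapshots the traceback and exception text *before* building the report, because the report-generation code can itself raise and clobber `sys.exc_info()`. A minimal version of that capture-first pattern (`describe_current_exception` is a hypothetical helper, not from the library):

```python
# Capture-first exception reporting: take the sys.exc_info() snapshot
# immediately inside the except block, before any further code runs.
import sys
import traceback

def describe_current_exception():
    exc_type, exc_value = sys.exc_info()[:2]
    return ''.join(traceback.format_exception_only(exc_type, exc_value))

try:
    int('not a number')
except ValueError:
    report = describe_current_exception()
```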
# Pick one of two defaultdict implementations. A native version was added to | |
# the collections library in python 2.5. If that is not available use Jason's | |
# pure python recipe. He gave us permission to distribute it. | |
# On Mon, Nov 30, 2009 at 07:27, jason kirtland <jek at discorporate.us> wrote: | |
# > | |
# > Hi Tom, sure thing! It's not easy to find on the cookbook site, but the | |
# > recipe is under the Python license. | |
# > | |
# > Cheers, | |
# > Jason | |
# > | |
# > On Thu, Nov 26, 2009 at 3:03 PM, Tom Brown <tom.brown.code@gmail.com> wrote: | |
# > | |
# >> I would like to include http://code.activestate.com/recipes/523034/ in | |
# >> http://code.google.com/p/googletransitdatafeed/wiki/TransitFeedDistribution | |
# >> which is distributed under the Apache License, Version 2.0 with Copyright | |
# >> Google. May we include your code with a comment in the source pointing at | |
# >> the original URL? Thanks, Tom Brown | |
try: | |
# Try the native implementation first | |
from collections import defaultdict | |
except ImportError:
# Fallback for python2.4, which didn't include collections.defaultdict | |
class defaultdict(dict): | |
def __init__(self, default_factory=None, *a, **kw): | |
if (default_factory is not None and | |
not hasattr(default_factory, '__call__')): | |
raise TypeError('first argument must be callable') | |
dict.__init__(self, *a, **kw) | |
self.default_factory = default_factory | |
def __getitem__(self, key): | |
try: | |
return dict.__getitem__(self, key) | |
except KeyError: | |
return self.__missing__(key) | |
def __missing__(self, key): | |
if self.default_factory is None: | |
raise KeyError(key) | |
self[key] = value = self.default_factory() | |
return value | |
def __reduce__(self): | |
if self.default_factory is None: | |
args = tuple() | |
else: | |
args = self.default_factory, | |
return type(self), args, None, None, self.items() | |
def copy(self): | |
return self.__copy__() | |
def __copy__(self): | |
return type(self)(self.default_factory, self) | |
def __deepcopy__(self, memo): | |
import copy | |
return type(self)(self.default_factory, | |
copy.deepcopy(self.items())) | |
def __repr__(self): | |
return 'defaultdict(%s, %s)' % (self.default_factory, | |
dict.__repr__(self)) | |
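The contract the fallback class reproduces is small: a lookup of a missing key calls `default_factory`, stores the result, and returns it (via `__missing__`), while a missing key with no factory still raises `KeyError`. A quick demonstration against the native implementation:

```python
# The defaultdict behavior the pure-python fallback above emulates.
from collections import defaultdict

counts = defaultdict(int)           # missing keys default to int() == 0
for word in ['stop', 'route', 'stop']:
    counts[word] += 1

plain = defaultdict(None)           # no factory: missing keys still raise
try:
    plain['missing']
    raised = False
except KeyError:
    raised = True
```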
Binary files a/origin-src/transitfeed-1.2.5/transitfeed/util.pyc and /dev/null differ
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
""" | |
Filters out trips which are not on the default routes and
sets their trip_type attribute accordingly.
For usage information run unusual_trip_filter.py --help | |
""" | |
__author__ = 'Jiri Semecky <jiri.semecky@gmail.com>' | |
import codecs | |
import os | |
import os.path | |
import sys | |
import time | |
import transitfeed | |
from transitfeed import util | |
class UnusualTripFilter(object): | |
"""Class filtering trips going on unusual paths. | |
Those are usually trips going to/from depot or changing to another route | |
in the middle. Sets the 'trip_type' attribute of the trips.txt dataset | |
so that non-standard trips are marked as special (value 1) | |
instead of regular (default value 0). | |
""" | |
def __init__ (self, threshold=0.1, force=False, quiet=False, route_type=None): | |
self._threshold = threshold | |
self._quiet = quiet | |
self._force = force | |
if route_type in transitfeed.Route._ROUTE_TYPE_NAMES: | |
self._route_type = transitfeed.Route._ROUTE_TYPE_NAMES[route_type] | |
elif route_type is None: | |
self._route_type = None | |
else: | |
self._route_type = int(route_type) | |
def filter_line(self, route): | |
"""Mark unusual trips for the given route.""" | |
if self._route_type is not None and self._route_type != route.route_type: | |
self.info('Skipping route %s due to different route_type value (%s)' % | |
(route['route_id'], route['route_type'])) | |
return | |
self.info('Filtering infrequent trips for route %s.' % route.route_id) | |
trip_count = len(route.trips) | |
for pattern_id, pattern in route.GetPatternIdTripDict().items(): | |
ratio = float(1.0 * len(pattern) / trip_count) | |
if not self._force: | |
if (ratio < self._threshold): | |
self.info("\t%d trips on route %s with headsign '%s' recognized " | |
"as unusual (ratio %f)" % | |
(len(pattern), | |
route['route_short_name'], | |
pattern[0]['trip_headsign'], | |
ratio)) | |
for trip in pattern: | |
trip.trip_type = 1 # special | |
self.info("\t\tsetting trip_type of trip %s as special" % | |
trip.trip_id) | |
else: | |
self.info("\t%d trips on route %s with headsign '%s' recognized " | |
"as %s (ratio %f)" % | |
(len(pattern), | |
route['route_short_name'], | |
pattern[0]['trip_headsign'], | |
('regular', 'unusual')[ratio < self._threshold], | |
ratio)) | |
for trip in pattern: | |
trip.trip_type = ('0','1')[ratio < self._threshold] | |
self.info("\t\tsetting trip_type of trip %s as %s" % | |
(trip.trip_id, | |
('regular', 'unusual')[ratio < self._threshold])) | |
def filter(self, dataset): | |
"""Mark unusual trips for all the routes in the dataset.""" | |
self.info('Going to filter infrequent routes in the dataset') | |
for route in dataset.routes.values(): | |
self.filter_line(route) | |
def info(self, text): | |
if not self._quiet: | |
print text.encode("utf-8") | |
def main(): | |
usage = \ | |
'''%prog [options] <GTFS.zip> | |
Filters out trips which do not follow the most common stop sequences and | |
sets their trip_type attribute accordingly. <GTFS.zip> is overwritten with | |
the modified GTFS file unless the --output option is used.
''' | |
parser = util.OptionParserLongError( | |
usage=usage, version='%prog '+transitfeed.__version__) | |
parser.add_option('-o', '--output', dest='output', metavar='FILE', | |
help='Name of the output GTFS file (writing to input feed if omitted).') | |
parser.add_option('-m', '--memory_db', dest='memory_db', action='store_true', | |
help='Force use of in-memory sqlite db.') | |
parser.add_option('-t', '--threshold', default=0.1, | |
dest='threshold', type='float', | |
help='Frequency threshold for considering pattern as non-regular.') | |
parser.add_option('-r', '--route_type', default=None, | |
dest='route_type', type='string', | |
                    help='Filter only selected route type (specified by number '
'or one of the following names: ' + \ | |
', '.join(transitfeed.Route._ROUTE_TYPE_NAMES) + ').') | |
parser.add_option('-f', '--override_trip_type', default=False, | |
dest='override_trip_type', action='store_true', | |
help='Forces overwrite of current trip_type values.') | |
parser.add_option('-q', '--quiet', dest='quiet', | |
default=False, action='store_true', | |
help='Suppress information output.') | |
(options, args) = parser.parse_args() | |
if len(args) != 1: | |
parser.error('You must provide the path of a single feed.') | |
filter = UnusualTripFilter(float(options.threshold), | |
force=options.override_trip_type, | |
quiet=options.quiet, | |
route_type=options.route_type) | |
feed_name = args[0] | |
feed_name = feed_name.strip() | |
filter.info('Loading %s' % feed_name) | |
loader = transitfeed.Loader(feed_name, extra_validation=True, | |
memory_db=options.memory_db) | |
data = loader.Load() | |
filter.filter(data) | |
print 'Saving data' | |
# Write the result | |
if options.output is None: | |
data.WriteGoogleTransitFeed(feed_name) | |
else: | |
data.WriteGoogleTransitFeed(options.output) | |
if __name__ == '__main__': | |
util.RunWithCrashHandler(main) | |
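The heart of the filter above is the frequency-ratio test: a stop pattern whose share of a route's trips falls below the threshold is marked special. A minimal standalone sketch of that classification, using hypothetical pattern counts and no transitfeed dependency:

```python
# Sketch of the ratio test used by UnusualTripFilter: a pattern is "special"
# when its share of the route's trips is below the threshold.
# Pattern ids and counts here are hypothetical.
def classify_patterns(pattern_sizes, threshold=0.1):
    """Map pattern id -> 'special' or 'regular' by trip-count share."""
    total = sum(pattern_sizes.values())
    return {pid: 'special' if size / float(total) < threshold else 'regular'
            for pid, size in pattern_sizes.items()}

# 96 trips follow the usual path, 4 run to the depot: 4/100 < 0.1.
labels = classify_patterns({'usual': 96, 'depot': 4})
```

The real filter then writes the result back into each trip's trip_type field (1 for special, 0 for regular).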
<html> | |
<head> | |
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> | |
<title>FeedValidator: good_feed.zip</title> | |
<style> | |
body {font-family: Georgia, serif; background-color: white} | |
.path {color: gray} | |
div.problem {max-width: 500px} | |
table.dump td,th {background-color: khaki; padding: 2px; font-family:monospace} | |
table.dump td.problem,th.problem {background-color: #dc143c; color: white; padding: 2px; font-family:monospace}
table.count_outside td {vertical-align: top} | |
table.count_outside {border-spacing: 0px; } | |
table {border-spacing: 5px 0px; margin-top: 3px} | |
h3.issueHeader {padding-left: 0.5em} | |
h4.issueHeader {padding-left: 1em} | |
.pass {background-color: lightgreen} | |
.fail {background-color: yellow} | |
.pass, .fail {font-size: 16pt} | |
.header {background-color: white; font-family: Georgia, serif; padding: 0px} | |
th.header {text-align: right; font-weight: normal; color: gray} | |
.footer {font-size: 10pt} | |
</style> | |
</head> | |
<body> | |
GTFS validation results for feed:<br> | |
<code><span class="path">test/data/</span><b>good_feed.zip</b></code> | |
<br><br> | |
<table> | |
<tr><th class="header">Agencies:</th><td class="header"><a href="http://google.com">Autorité de passage de démonstration</a></td></tr> | |
<tr><th class="header">Routes:</th><td class="header">5</td></tr> | |
<tr><th class="header">Stops:</th><td class="header">10</td></tr> | |
<tr><th class="header">Trips:</th><td class="header">11</td></tr> | |
<tr><th class="header">Shapes:</th><td class="header">0</td></tr> | |
<tr><th class="header">Effective:</th><td class="header">January 01, 2007 to December 31, 2011</td></tr> | |
</table> | |
<br> | |
During the upcoming service dates Sun Apr 18 to Wed Jun 16: | |
<table> | |
<tr><th class="header">Average trips per date:</th><td class="header">141</td></tr> | |
<tr><th class="header">Most trips on a date:</th><td class="header">144, on 17 service dates (Sun Apr 18, Sat Apr 24, Sun Apr 25, ...)</td></tr> | |
<tr><th class="header">Least trips on a date:</th><td class="header">140, on 43 service dates (Mon Apr 19, Tue Apr 20, Wed Apr 21, ...)</td></tr> | |
</table> | |
<br> | |
<span class="pass">feed validated successfully</span> | |
<br><br> | |
<div class="footer"> | |
Generated by <a href="http://code.google.com/p/googletransitdatafeed/wiki/FeedValidator"> | |
FeedValidator</a> version 1.2.5 on April 18, 2010 at 06:12 PM EST. | |
</div> | |
</body> | |
</html> |
INSTALL file for transitfeed distribution | |
To download and install in one step, make sure you have easy_install (from setuptools) installed and run
easy_install transitfeed | |
Since you got this far chances are you have downloaded a copy of the source | |
code. Install with the command | |
python setup.py install | |
If you don't want to install you may be able to run the scripts from this | |
directory. For example, try running | |
./feedvalidator.py -n test/data/good_feed.zip | |
Metadata-Version: 1.0 | |
Name: transitfeed | |
Version: 1.2.6 | |
Summary: Google Transit Feed Specification library and tools | |
Home-page: http://code.google.com/p/googletransitdatafeed/ | |
Author: Tom Brown | |
Author-email: tom.brown.code@gmail.com | |
License: Apache License, Version 2.0 | |
Download-URL: http://googletransitdatafeed.googlecode.com/files/transitfeed-1.2.6.tar.gz | |
Description: This module provides a library for reading, writing and validating Google Transit Feed Specification files. It includes some scripts that validate a feed, display it using the Google Maps API and the start of a KML importer and exporter. | |
Platform: OS Independent | |
Classifier: Development Status :: 4 - Beta | |
Classifier: Intended Audience :: Developers | |
Classifier: Intended Audience :: Information Technology | |
Classifier: Intended Audience :: Other Audience | |
Classifier: License :: OSI Approved :: Apache Software License | |
Classifier: Operating System :: OS Independent | |
Classifier: Programming Language :: Python | |
Classifier: Topic :: Scientific/Engineering :: GIS | |
Classifier: Topic :: Software Development :: Libraries :: Python Modules | |
README file for transitfeed distribution | |
This distribution contains a library to help you parse and generate Google | |
Transit Feed files. It also contains some sample tools that demonstrate the | |
library and are useful in their own right when maintaining Google | |
Transit Feed files. You may fetch the specification from | |
http://code.google.com/transit/spec/transit_feed_specification.htm | |
See INSTALL for installation instructions.
The most recent source can be downloaded from our subversion repository at | |
http://googletransitdatafeed.googlecode.com/svn/trunk/python/ | |
See http://code.google.com/p/googletransitdatafeed/wiki/TransitFeedDistribution | |
for more information. | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""Filter the unused stops out of a transit feed file.""" | |
import optparse | |
import sys | |
import transitfeed | |
def main(): | |
parser = optparse.OptionParser( | |
usage="usage: %prog [options] input_feed output_feed", | |
version="%prog "+transitfeed.__version__) | |
parser.add_option("-l", "--list_removed", dest="list_removed", | |
default=False, | |
action="store_true", | |
help="Print removed stops to stdout") | |
(options, args) = parser.parse_args() | |
if len(args) != 2: | |
print >>sys.stderr, parser.format_help() | |
print >>sys.stderr, "\n\nYou must provide input_feed and output_feed\n\n" | |
sys.exit(2) | |
input_path = args[0] | |
output_path = args[1] | |
loader = transitfeed.Loader(input_path) | |
schedule = loader.Load() | |
print "Removing unused stops..." | |
removed = 0 | |
for stop_id, stop in schedule.stops.items(): | |
if not stop.GetTrips(schedule): | |
removed += 1 | |
del schedule.stops[stop_id] | |
if options.list_removed: | |
print "Removing %s (%s)" % (stop_id, stop.stop_name) | |
if removed == 0: | |
print "No unused stops." | |
elif removed == 1: | |
print "Removed 1 stop" | |
else: | |
print "Removed %d stops" % removed | |
schedule.WriteGoogleTransitFeed(output_path) | |
if __name__ == "__main__": | |
main() | |
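The deletion loop above removes entries from `schedule.stops` while iterating over `schedule.stops.items()`; that is safe on Python 2 because `items()` returns a list, but on Python 3 the same pattern needs an explicit snapshot. A small standalone sketch of the safe idiom, with hypothetical stop data:

```python
# Remove unused entries from a dict while iterating: snapshot the items
# first so deletion does not invalidate the iterator (required on Python 3).
# Stop ids and trip lists here are hypothetical.
stops = {'s1': ['trip_a'], 's2': [], 's3': ['trip_b'], 's4': []}

removed = 0
for stop_id, trips in list(stops.items()):   # list() makes the snapshot
    if not trips:                            # unused stop: no trips serve it
        del stops[stop_id]
        removed += 1
```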
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""Output Google Transit URLs for queries near stops. | |
The output can be used to speed up manual testing. Load the output from this | |
file and then open many of the links in new tabs. In each result check that the | |
polyline looks okay (no unnecessary loops, no jumps to a far away location) and
look at the time of each leg. Also check that the route names and headsigns are
formatted correctly and not redundant. | |
""" | |
from datetime import datetime | |
from datetime import timedelta | |
import math | |
import optparse | |
import os.path | |
import random | |
import sys | |
import transitfeed | |
import urllib | |
import urlparse | |
def Distance(lat0, lng0, lat1, lng1): | |
""" | |
Compute the geodesic distance in meters between two points on the | |
surface of the Earth. The latitude and longitude angles are in | |
degrees. | |
Approximate geodesic distance function (Haversine Formula) assuming | |
a perfect sphere of radius 6367 km (see "What are some algorithms | |
for calculating the distance between 2 points?" in the GIS Faq at | |
http://www.census.gov/geo/www/faq-index.html). The approximate | |
radius is adequate for our needs here, but a more sophisticated | |
geodesic function should be used if greater accuracy is required | |
(see "When is it NOT okay to assume the Earth is a sphere?" in the | |
same faq). | |
""" | |
deg2rad = math.pi / 180.0 | |
lat0 = lat0 * deg2rad | |
lng0 = lng0 * deg2rad | |
lat1 = lat1 * deg2rad | |
lng1 = lng1 * deg2rad | |
dlng = lng1 - lng0 | |
dlat = lat1 - lat0 | |
a = math.sin(dlat*0.5) | |
b = math.sin(dlng*0.5) | |
a = a * a + math.cos(lat0) * math.cos(lat1) * b * b | |
c = 2.0 * math.atan2(math.sqrt(a), math.sqrt(1.0 - a)) | |
return 6367000.0 * c | |
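As a sanity check on the haversine formula above: with the 6367 km radius, one degree of latitude spans roughly 111.1 km. A self-contained restatement of the same formula (standard library only):

```python
import math

def haversine_m(lat0, lng0, lat1, lng1, radius_m=6367000.0):
    """Great-circle distance in meters; same haversine formula as Distance()."""
    lat0, lng0, lat1, lng1 = (math.radians(x) for x in (lat0, lng0, lat1, lng1))
    a = math.sin((lat1 - lat0) * 0.5) ** 2 + \
        math.cos(lat0) * math.cos(lat1) * math.sin((lng1 - lng0) * 0.5) ** 2
    return 2.0 * radius_m * math.atan2(math.sqrt(a), math.sqrt(1.0 - a))

one_degree = haversine_m(0.0, 0.0, 1.0, 0.0)  # meridian arc of 1 degree
```

The atan2 form is preferred over the simpler `asin(sqrt(a))` because it stays numerically stable for nearly antipodal points.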
def AddNoiseToLatLng(lat, lng): | |
"""Add up to 500m of error to each coordinate of lat, lng.""" | |
m_per_tenth_lat = Distance(lat, lng, lat + 0.1, lng) | |
m_per_tenth_lng = Distance(lat, lng, lat, lng + 0.1) | |
lat_per_100m = 1 / m_per_tenth_lat * 10 | |
lng_per_100m = 1 / m_per_tenth_lng * 10 | |
return (lat + (lat_per_100m * 5 * (random.random() * 2 - 1)), | |
lng + (lng_per_100m * 5 * (random.random() * 2 - 1))) | |
def GetRandomLocationsNearStops(schedule): | |
"""Return a list of (lat, lng) tuples.""" | |
locations = [] | |
for s in schedule.GetStopList(): | |
locations.append(AddNoiseToLatLng(s.stop_lat, s.stop_lon)) | |
return locations | |
def GetRandomDatetime(): | |
"""Return a datetime in the next week.""" | |
seconds_offset = random.randint(0, 60 * 60 * 24 * 7) | |
dt = datetime.today() + timedelta(seconds=seconds_offset) | |
return dt.replace(second=0, microsecond=0) | |
def FormatLatLng(lat_lng): | |
"""Format a (lat, lng) tuple into a string for maps.google.com.""" | |
return "%0.6f,%0.6f" % lat_lng | |
def LatLngsToGoogleUrl(source, destination, dt): | |
"""Return a URL for routing between two (lat, lng) at a datetime.""" | |
params = {"saddr": FormatLatLng(source), | |
"daddr": FormatLatLng(destination), | |
"time": dt.strftime("%I:%M%p"), | |
"date": dt.strftime("%Y-%m-%d"), | |
"dirflg": "r", | |
"ie": "UTF8", | |
"oe": "UTF8"} | |
url = urlparse.urlunsplit(("http", "maps.google.com", "/maps", | |
urllib.urlencode(params), "")) | |
return url | |
def LatLngsToGoogleLink(source, destination): | |
"""Return a string "<a ..." for a trip at a random time.""" | |
dt = GetRandomDatetime() | |
return "<a href='%s'>from:%s to:%s on %s</a>" % ( | |
LatLngsToGoogleUrl(source, destination, dt), | |
FormatLatLng(source), FormatLatLng(destination), | |
dt.ctime()) | |
def WriteOutput(title, locations, limit, f): | |
"""Write html to f for up to limit trips between locations. | |
Args: | |
title: String used in html title | |
locations: list of (lat, lng) tuples | |
limit: maximum number of queries in the html | |
f: a file object | |
""" | |
output_prefix = """ | |
<html> | |
<head> | |
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> | |
<title>%(title)s</title> | |
</head> | |
<body> | |
Random queries for %(title)s<p> | |
This list of random queries should speed up important manual testing. Here are | |
some things to check when looking at the results of a query. | |
<ul> | |
<li> Check the agency attribution under the trip results: | |
<ul> | |
<li> has correct name and spelling of the agency | |
<li> opens a page with general information about the service | |
</ul> | |
<li> For each alternate trip check that each of these is reasonable: | |
<ul> | |
<li> the total time of the trip | |
<li> the time for each leg. Bad data frequently results in a leg going a long | |
way in a few minutes. | |
<li> the icons and mode names (Tram, Bus, etc) are correct for each leg | |
<li> the route names and headsigns are correctly formatted and not | |
redundant. | |
For a good example see <a | |
href="http://code.google.com/transit/spec/transit_feed_specification.html#transitScreenshots">the | |
screenshots in the Google Transit Feed Specification</a>. | |
<li> the shape line on the map looks correct. Make sure the polyline does | |
not zig-zag, loop, skip stops or jump far away unless the trip does the | |
same thing. | |
<li> the route is active on the day the trip planner returns | |
</ul> | |
</ul> | |
If you find a problem be sure to save the URL. This file is generated randomly. | |
<ol> | |
""" % locals() | |
output_suffix = """ | |
</ol> | |
</body> | |
</html> | |
""" % locals() | |
f.write(transitfeed.EncodeUnicode(output_prefix)) | |
for source, destination in zip(locations[0:limit], locations[1:limit + 1]): | |
f.write(transitfeed.EncodeUnicode("<li>%s\n" % | |
LatLngsToGoogleLink(source, destination))) | |
f.write(transitfeed.EncodeUnicode(output_suffix)) | |
def ParentAndBaseName(path): | |
"""Given a path return only the parent name and file name as a string.""" | |
dirname, basename = os.path.split(path) | |
dirname = dirname.rstrip(os.path.sep) | |
if os.path.altsep: | |
dirname = dirname.rstrip(os.path.altsep) | |
_, parentname = os.path.split(dirname) | |
return os.path.join(parentname, basename) | |
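`ParentAndBaseName` keeps only the last directory and the file name, so feed titles stay short. A standalone restatement with a concrete path, pinned to POSIX separators via `posixpath` so the behavior does not depend on the host platform:

```python
import posixpath

def parent_and_base_name(path):
    """Return 'parent_dir/file' for a path, as ParentAndBaseName above does."""
    dirname, basename = posixpath.split(path)
    dirname = dirname.rstrip('/')
    _, parentname = posixpath.split(dirname)
    return posixpath.join(parentname, basename)

short = parent_and_base_name('/home/me/test/data/good_feed.zip')
```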
def main(): | |
usage = \ | |
"""%prog [options] <input GTFS.zip> | |
Create an HTML page of random URLs for the Google Maps transit trip | |
planner. The queries go between places near stops listed in a <input GTFS.zip>. | |
By default 50 random URLs are saved to google_random_queries.html. | |
For more information see | |
http://code.google.com/p/googletransitdatafeed/wiki/GoogleRandomQueries | |
""" | |
parser = optparse.OptionParser( | |
usage=usage, | |
version="%prog "+transitfeed.__version__) | |
parser.add_option("-l", "--limit", dest="limit", type="int", | |
help="Maximum number of URLs to generate") | |
parser.add_option("-o", "--output", dest="output", metavar="HTML_OUTPUT_PATH", | |
help="write HTML output to HTML_OUTPUT_PATH") | |
parser.set_defaults(output="google_random_queries.html", limit=50) | |
(options, args) = parser.parse_args() | |
if len(args) != 1: | |
print >>sys.stderr, parser.format_help() | |
print >>sys.stderr, "\n\nYou must provide the path of a single feed\n\n" | |
sys.exit(2) | |
feed_path = args[0] | |
# ProblemReporter prints problems on console. | |
loader = transitfeed.Loader(feed_path, problems=transitfeed.ProblemReporter(), | |
load_stop_times=False) | |
schedule = loader.Load() | |
locations = GetRandomLocationsNearStops(schedule) | |
random.shuffle(locations) | |
agencies = ", ".join([a.agency_name for a in schedule.GetAgencyList()]) | |
title = "%s (%s)" % (agencies, ParentAndBaseName(feed_path)) | |
WriteOutput(title, | |
locations, | |
options.limit, | |
open(options.output, "w")) | |
print ("Load %s in your web browser. It contains more instructions." % | |
options.output) | |
if __name__ == "__main__": | |
main() | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""Google has a homegrown database for managing the company shuttle. The | |
database dumps its contents in XML. This script converts the proprietary XML
format into a Google Transit Feed Specification file. | |
""" | |
import datetime | |
from optparse import OptionParser | |
import os.path | |
import re | |
import transitfeed | |
import urllib | |
try: | |
import xml.etree.ElementTree as ET # python 2.5 | |
except ImportError, e: | |
import elementtree.ElementTree as ET # older pythons | |
class NoUnusedStopExceptionProblemReporter(transitfeed.ProblemReporter): | |
"""The company shuttle database has a few unused stops for reasons unrelated | |
to this script. Ignore them. | |
""" | |
def __init__(self): | |
accumulator = transitfeed.ExceptionProblemAccumulator() | |
transitfeed.ProblemReporter.__init__(self, accumulator) | |
def UnusedStop(self, stop_id, stop_name): | |
pass | |
def SaveFeed(input, output): | |
tree = ET.parse(urllib.urlopen(input)) | |
schedule = transitfeed.Schedule() | |
service_period = schedule.GetDefaultServicePeriod() | |
service_period.SetWeekdayService() | |
service_period.SetStartDate('20070314') | |
service_period.SetEndDate('20071231') | |
# Holidays for 2007 | |
service_period.SetDateHasService('20070528', has_service=False) | |
service_period.SetDateHasService('20070704', has_service=False) | |
service_period.SetDateHasService('20070903', has_service=False) | |
service_period.SetDateHasService('20071122', has_service=False) | |
service_period.SetDateHasService('20071123', has_service=False) | |
service_period.SetDateHasService('20071224', has_service=False) | |
service_period.SetDateHasService('20071225', has_service=False) | |
service_period.SetDateHasService('20071226', has_service=False) | |
service_period.SetDateHasService('20071231', has_service=False) | |
stops = {} # Map from xml stop id to python Stop object | |
agency = schedule.NewDefaultAgency(name='GBus', url='http://shuttle/', | |
timezone='America/Los_Angeles') | |
for xml_stop in tree.getiterator('stop'): | |
stop = schedule.AddStop(lat=float(xml_stop.attrib['lat']), | |
lng=float(xml_stop.attrib['lng']), | |
name=xml_stop.attrib['name']) | |
stops[xml_stop.attrib['id']] = stop | |
for xml_shuttleGroup in tree.getiterator('shuttleGroup'): | |
if xml_shuttleGroup.attrib['name'] == 'Test': | |
continue | |
r = schedule.AddRoute(short_name="", | |
long_name=xml_shuttleGroup.attrib['name'], route_type='Bus') | |
for xml_route in xml_shuttleGroup.getiterator('route'): | |
t = r.AddTrip(schedule=schedule, headsign=xml_route.attrib['name'], | |
trip_id=xml_route.attrib['id']) | |
trip_stops = [] # Build a list of (time, Stop) tuples | |
for xml_schedule in xml_route.getiterator('schedule'): | |
trip_stops.append( (int(xml_schedule.attrib['time']) / 1000, | |
stops[xml_schedule.attrib['stopId']]) ) | |
trip_stops.sort() # Sort by time | |
for (time, stop) in trip_stops: | |
t.AddStopTime(stop=stop, arrival_secs=time, departure_secs=time) | |
schedule.Validate(problems=NoUnusedStopExceptionProblemReporter()) | |
schedule.WriteGoogleTransitFeed(output) | |
def main(): | |
parser = OptionParser() | |
parser.add_option('--input', dest='input', | |
help='Path or URL of input') | |
parser.add_option('--output', dest='output', | |
help='Path of output file. Should end in .zip and if it ' | |
'contains the substring YYYYMMDD it will be replaced with ' | |
'today\'s date. It is impossible to include the literal ' | |
'string YYYYMMDD in the path of the output file.') | |
parser.add_option('--execute', dest='execute', | |
help='Commands to run to copy the output. %(path)s is ' | |
'replaced with full path of the output and %(name)s is ' | |
'replaced with name part of the path. Try ' | |
'scp %(path)s myhost:www/%(name)s', | |
action='append') | |
parser.set_defaults(input=None, output=None, execute=[]) | |
(options, args) = parser.parse_args() | |
today = datetime.date.today().strftime('%Y%m%d') | |
options.output = re.sub(r'YYYYMMDD', today, options.output) | |
(_, name) = os.path.split(options.output) | |
path = options.output | |
SaveFeed(options.input, options.output) | |
for command in options.execute: | |
import subprocess | |
def check_call(cmd): | |
"""Convenience function that is in the docs for subprocess but not | |
installed on my system.""" | |
retcode = subprocess.call(cmd, shell=True) | |
if retcode < 0: | |
raise Exception("Child '%s' was terminated by signal %d" % (cmd, | |
-retcode)) | |
elif retcode != 0: | |
raise Exception("Child '%s' returned %d" % (cmd, retcode)) | |
# %(path)s and %(name)s are expanded from locals() to run arbitrary commands | |
check_call(command % locals()) | |
if __name__ == '__main__': | |
main() | |
<shuttle><office id="us-nye" name="US Nye County"> | |
<stops> | |
<stop id="1" name="Stagecoach Hotel and Casino" shortName="Stagecoach" lat="36.915682" lng="-116.751677" /> | |
<stop id="2" name="North Ave / N A Ave" shortName="N Ave / A Ave N" lat="36.914944" lng="-116.761472" /> | |
<stop id="3" name="North Ave / D Ave N" shortName="N Ave / D Ave N" lat="36.914893" lng="-116.76821" /> | |
<stop id="4" name="Doing Ave / D Ave N" shortName="Doing / D Ave N" lat="36.909489" lng="-116.768242" /> | |
<stop id="5" name="E Main St / S Irving St" shortName="E Main / S Irving" lat="36.905697" lng="-116.76218" /> | |
</stops> | |
<shuttleGroups> | |
<shuttleGroup id="4" name="Bar Circle Loop" > | |
<routes> | |
<route id="1" name="Outbound"> | |
<schedules> | |
<schedule id="164" stopId="1" time="60300000"/> | |
<schedule id="165" stopId="2" time="60600000"/> | |
<schedule id="166" stopId="3" time="60720000"/> | |
<schedule id="167" stopId="4" time="60780000"/> | |
<schedule id="168" stopId="5" time="60900000"/> | |
</schedules><meta></meta></route> | |
<route id="2" name="Inbound"> | |
<schedules> | |
<schedule id="260" stopId="5" time="30000000"/> | |
<schedule id="261" stopId="4" time="30120000"/> | |
<schedule id="262" stopId="3" time="30180000"/> | |
<schedule id="263" stopId="2" time="30300000"/> | |
<schedule id="264" stopId="1" time="30600000"/> | |
</schedules><meta></meta></route></routes> | |
</shuttleGroup> | |
</shuttleGroups></office></shuttle> | |
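The `time` attributes in the XML above are milliseconds since midnight, which `SaveFeed` divides by 1000 to get the seconds GTFS stop times expect. A small illustrative helper (`ms_to_hhmmss` is not part of the script) shows the conversion:

```python
def ms_to_hhmmss(ms):
    # Convert milliseconds since midnight into an HH:MM:SS string.
    secs = ms // 1000
    return '%02d:%02d:%02d' % (secs // 3600, (secs % 3600) // 60, secs % 60)

# 60300000 ms is the first Outbound departure in the sample XML.
first_outbound = ms_to_hhmmss(60300000)
```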
#!/usr/bin/python2.5 | |
# A really simple example of using transitfeed to build a Google Transit | |
# Feed Specification file. | |
import transitfeed | |
from optparse import OptionParser | |
parser = OptionParser() | |
parser.add_option('--output', dest='output', | |
help='Path of output file. Should end in .zip') | |
parser.set_defaults(output='google_transit.zip') | |
(options, args) = parser.parse_args() | |
schedule = transitfeed.Schedule() | |
schedule.AddAgency("Fly Agency", "http://iflyagency.com", | |
"America/Los_Angeles") | |
service_period = schedule.GetDefaultServicePeriod() | |
service_period.SetWeekdayService(True) | |
service_period.SetDateHasService('20070704') | |
stop1 = schedule.AddStop(lng=-122, lat=37.2, name="Suburbia") | |
stop2 = schedule.AddStop(lng=-122.001, lat=37.201, name="Civic Center") | |
route = schedule.AddRoute(short_name="22", long_name="Civic Center Express", | |
route_type="Bus") | |
trip = route.AddTrip(schedule, headsign="To Downtown") | |
trip.AddStopTime(stop1, stop_time='09:00:00') | |
trip.AddStopTime(stop2, stop_time='09:15:00') | |
trip = route.AddTrip(schedule, headsign="To Suburbia") | |
trip.AddStopTime(stop1, stop_time='17:30:00') | |
trip.AddStopTime(stop2, stop_time='17:45:00') | |
schedule.Validate() | |
schedule.WriteGoogleTransitFeed(options.output) | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
# An example script that demonstrates converting a proprietary format to a | |
# Google Transit Feed Specification file. | |
# | |
# You can load table.txt, the example input, in Excel. It contains three | |
# sections: | |
# 1) A list of global options, starting with a line containing the word | |
# 'options'. Each option has a name in the first column and most options | |
# have a value in the second column. | |
# 2) A table of stops, starting with a line containing the word 'stops'. Each | |
# row of the table has 3 columns: name, latitude, longitude | |
# 3) A list of routes. There is an empty row between each route. The first row | |
# for a route lists the short_name and long_name. After the first row the | |
# left-most column lists the stop names visited by the route. Each column | |
# contains the times a single trip visits the stops. | |
# | |
# This is a very simple example which you could use as a base for your own | |
# transit feed builder. | |
import transitfeed | |
from optparse import OptionParser | |
import re | |
stops = {} | |
# table is a list of lists in this form | |
# [ ['Short Name', 'Long Name'], | |
# ['Stop 1', 'Stop 2', ...] | |
# [time_at_1, time_at_2, ...] # times for trip 1 | |
# [time_at_1, time_at_2, ...] # times for trip 2 | |
# ... ] | |
def AddRouteToSchedule(schedule, table): | |
if len(table) >= 2: | |
r = schedule.AddRoute(short_name=table[0][0], long_name=table[0][1], route_type='Bus') | |
for trip in table[2:]: | |
if len(trip) > len(table[1]): | |
print "ignoring %s" % trip[len(table[1]):] | |
trip = trip[0:len(table[1])] | |
t = r.AddTrip(schedule, headsign='My headsign') | |
trip_stops = [] # Build a list of (time, stopname) tuples | |
for i in range(0, len(trip)): | |
if re.search(r'\S', trip[i]): | |
trip_stops.append( (transitfeed.TimeToSecondsSinceMidnight(trip[i]), table[1][i]) ) | |
trip_stops.sort() # Sort by time | |
for (time, stopname) in trip_stops: | |
t.AddStopTime(stop=stops[stopname.lower()], arrival_secs=time, | |
departure_secs=time) | |
def TransposeTable(table): | |
"""Transpose a list of lists, using None to extend all input lists to the | |
same length. | |
For example: | |
>>> TransposeTable( | |
[ [11, 12, 13], | |
[21, 22], | |
[31, 32, 33, 34]]) | |
[ [11, 21, 31], | |
[12, 22, 32], | |
[13, None, 33], | |
[None, None, 34]] | |
""" | |
transposed = [] | |
rows = len(table) | |
cols = max(len(row) for row in table) | |
for x in range(cols): | |
transposed.append([]) | |
for y in range(rows): | |
if x < len(table[y]): | |
transposed[x].append(table[y][x]) | |
else: | |
transposed[x].append(None) | |
return transposed | |
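The docstring example for `TransposeTable` can be checked with a standalone copy of the same logic (`transpose_table` here is an illustrative re-implementation, not part of the script):

```python
def transpose_table(table):
    # Pad every row to the longest row length with None, then flip
    # rows and columns, mirroring TransposeTable above.
    rows = len(table)
    cols = max(len(row) for row in table)
    transposed = []
    for x in range(cols):
        transposed.append([table[y][x] if x < len(table[y]) else None
                           for y in range(rows)])
    return transposed

# Ragged input: short rows are extended with None in the output.
result = transpose_table([[11, 12, 13],
                          [21, 22],
                          [31, 32, 33, 34]])
```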
def ProcessOptions(schedule, table): | |
service_period = schedule.GetDefaultServicePeriod() | |
agency_name, agency_url, agency_timezone = (None, None, None) | |
for row in table[1:]: | |
command = row[0].lower() | |
if command == 'weekday': | |
service_period.SetWeekdayService() | |
elif command == 'start_date': | |
service_period.SetStartDate(row[1]) | |
elif command == 'end_date': | |
service_period.SetEndDate(row[1]) | |
elif command == 'add_date': | |
service_period.SetDateHasService(date=row[1]) | |
elif command == 'remove_date': | |
service_period.SetDateHasService(date=row[1], has_service=False) | |
elif command == 'agency_name': | |
agency_name = row[1] | |
elif command == 'agency_url': | |
agency_url = row[1] | |
elif command == 'agency_timezone': | |
agency_timezone = row[1] | |
if not (agency_name and agency_url and agency_timezone): | |
print "You must provide agency information" | |
schedule.NewDefaultAgency(agency_name=agency_name, agency_url=agency_url, | |
agency_timezone=agency_timezone) | |
def AddStops(schedule, table): | |
for name, lat_str, lng_str in table[1:]: | |
stop = schedule.AddStop(lat=float(lat_str), lng=float(lng_str), name=name) | |
stops[name.lower()] = stop | |
def ProcessTable(schedule, table): | |
if table[0][0].lower() == 'options': | |
ProcessOptions(schedule, table) | |
elif table[0][0].lower() == 'stops': | |
AddStops(schedule, table) | |
else: | |
transposed = [table[0]] # Keep route_short_name and route_long_name on first row | |
# Transpose rest of table. Input contains the stop names in table[x][0], x | |
# >= 1 with trips found in columns, so we need to transpose table[1:]. | |
# As a diagram Transpose from | |
# [['stop 1', '10:00', '11:00', '12:00'], | |
# ['stop 2', '10:10', '11:10', '12:10'], | |
# ['stop 3', '10:20', '11:20', '12:20']] | |
# to | |
# [['stop 1', 'stop 2', 'stop 3'], | |
# ['10:00', '10:10', '10:20'], | |
# ['11:00', '11:10', '11:20'], | |
# ['12:00', '12:10', '12:20']] | |
transposed.extend(TransposeTable(table[1:])) | |
AddRouteToSchedule(schedule, transposed) | |
def main(): | |
parser = OptionParser() | |
parser.add_option('--input', dest='input', | |
help='Path of input file') | |
parser.add_option('--output', dest='output', | |
help='Path of output file, should end in .zip') | |
parser.set_defaults(output='feed.zip') | |
(options, args) = parser.parse_args() | |
schedule = transitfeed.Schedule() | |
table = [] | |
for line in open(options.input): | |
line = line.rstrip() | |
if not line: | |
ProcessTable(schedule, table) | |
table = [] | |
else: | |
table.append(line.split('\t')) | |
ProcessTable(schedule, table) | |
schedule.WriteGoogleTransitFeed(options.output) | |
if __name__ == '__main__': | |
main() | |
options | |
weekday | |
start_date 20070315 | |
end_date 20071215 | |
remove_date 20070704 | |
agency_name Gbus | |
agency_url http://shuttle/ | |
agency_timezone America/Los_Angeles | |
stops | |
Stagecoach 36.915682 -116.751677 | |
N Ave / A Ave N 36.914944 -116.761472 | |
N Ave / D Ave N 36.914893 -116.76821 | |
Doing / D Ave N 36.909489 -116.768242 | |
E Main / S Irving 36.905697 -116.76218 | |
O in Bar Circle Inbound | |
Stagecoach 9:00:00 9:30:00 10:00:00 12:00:00 | |
N Ave / A Ave N 9:05:00 9:35:00 10:05:00 12:05:00 | |
N Ave / D Ave N 9:07:00 9:37:00 10:07:00 12:07:00 | |
Doing / D Ave N 9:09:00 9:39:00 10:09:00 12:09:00 | |
E Main / S Irving 9:11:00 9:41:00 10:11:00 12:11:00 | |
O out Bar Circle Outbound | |
E Main / S Irving 15:00:00 15:30:00 16:00:00 18:00:00 | |
Doing / D Ave N 15:05:00 15:35:00 16:05:00 18:05:00 | |
N Ave / D Ave N 15:07:00 15:37:00 16:07:00 18:07:00 | |
N Ave / A Ave N 15:09:00 15:39:00 16:09:00 18:09:00 | |
Stagecoach 15:11:00 15:41:00 16:11:00 18:11:00 | |
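The script reads a file like the one above by splitting each line on tabs and treating blank lines as table separators. A self-contained sketch of that parsing loop (`split_into_tables` is an illustrative name, not from the script):

```python
def split_into_tables(lines):
    # Group tab-separated rows into tables; a blank line ends the
    # current table, mirroring the loop in main() above.
    tables, current = [], []
    for line in lines:
        line = line.rstrip()
        if not line:
            if current:
                tables.append(current)
            current = []
        else:
            current.append(line.split('\t'))
    if current:
        tables.append(current)
    return tables

sample = "options\nweekday\n\nstops\nStop A\t36.9\t-116.7\n"
tables = split_into_tables(sample.splitlines())
```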
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""Validates a GTFS file. | |
For usage information run feedvalidator.py --help | |
""" | |
import bisect | |
import codecs | |
import datetime | |
from transitfeed.util import defaultdict | |
import optparse | |
import os | |
import os.path | |
import re | |
import socket | |
import sys | |
import time | |
import transitfeed | |
from transitfeed import TYPE_ERROR, TYPE_WARNING | |
from urllib2 import Request, urlopen, HTTPError, URLError | |
from transitfeed import util | |
import webbrowser | |
SVN_TAG_URL = 'http://googletransitdatafeed.googlecode.com/svn/tags/' | |
def MaybePluralizeWord(count, word): | |
if count == 1: | |
return word | |
else: | |
return word + 's' | |
def PrettyNumberWord(count, word): | |
return '%d %s' % (count, MaybePluralizeWord(count, word)) | |
def UnCamelCase(camel): | |
return re.sub(r'([a-z])([A-Z])', r'\1 \2', camel) | |
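`UnCamelCase` above turns exception class names into readable labels with a single regex substitution; a standalone copy shows the effect:

```python
import re

def un_camel_case(camel):
    # Insert a space at each lowercase-to-uppercase boundary.
    return re.sub(r'([a-z])([A-Z])', r'\1 \2', camel)

label = un_camel_case('UnusedStop')
```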
def ProblemCountText(error_count, warning_count): | |
results = [] | |
if error_count: | |
results.append(PrettyNumberWord(error_count, 'error')) | |
if warning_count: | |
results.append(PrettyNumberWord(warning_count, 'warning')) | |
return ' and '.join(results) | |
def CalendarSummary(schedule): | |
today = datetime.date.today() | |
summary_end_date = today + datetime.timedelta(days=60) | |
start_date, end_date = schedule.GetDateRange() | |
if not start_date or not end_date: | |
return {} | |
try: | |
start_date_object = transitfeed.DateStringToDateObject(start_date) | |
end_date_object = transitfeed.DateStringToDateObject(end_date) | |
except ValueError: | |
return {} | |
# Get the list of trips only during the period the feed is active. | |
# As such we have to check if it starts in the future and/or if | |
# it ends in less than 60 days. | |
date_trips_departures = schedule.GenerateDateTripsDeparturesList( | |
max(today, start_date_object), | |
min(summary_end_date, end_date_object)) | |
if not date_trips_departures: | |
return {} | |
# Check that the dates which will be shown in the summary agree with these | |
# calculations. A failure implies a bug which should be fixed: it isn't good | |
# for users to discover assertion failures, but it does mean the bug is | |
# likely to be noticed and fixed. | |
assert start_date <= date_trips_departures[0][0].strftime("%Y%m%d") | |
assert end_date >= date_trips_departures[-1][0].strftime("%Y%m%d") | |
# Generate a map from int number of trips in a day to a list of date objects | |
# with that many trips. The list of dates is sorted. | |
trips_dates = defaultdict(lambda: []) | |
trips = 0 | |
for date, day_trips, day_departures in date_trips_departures: | |
trips += day_trips | |
trips_dates[day_trips].append(date) | |
mean_trips = trips / len(date_trips_departures) | |
max_trips = max(trips_dates.keys()) | |
min_trips = min(trips_dates.keys()) | |
calendar_summary = {} | |
calendar_summary['mean_trips'] = mean_trips | |
calendar_summary['max_trips'] = max_trips | |
calendar_summary['max_trips_dates'] = FormatDateList(trips_dates[max_trips]) | |
calendar_summary['min_trips'] = min_trips | |
calendar_summary['min_trips_dates'] = FormatDateList(trips_dates[min_trips]) | |
calendar_summary['date_trips_departures'] = date_trips_departures | |
calendar_summary['date_summary_range'] = "%s to %s" % ( | |
date_trips_departures[0][0].strftime("%a %b %d"), | |
date_trips_departures[-1][0].strftime("%a %b %d")) | |
return calendar_summary | |
def FormatDateList(dates): | |
if not dates: | |
return "0 service dates" | |
formatted = [d.strftime("%a %b %d") for d in dates[0:3]] | |
if len(dates) > 3: | |
formatted.append("...") | |
return "%s (%s)" % (PrettyNumberWord(len(dates), "service date"), | |
", ".join(formatted)) | |
def MaxVersion(versions): | |
versions = filter(None, versions) | |
versions.sort(lambda x,y: -cmp([int(item) for item in x.split('.')], | |
[int(item) for item in y.split('.')])) | |
if len(versions) > 0: | |
return versions[0] | |
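`MaxVersion` above sorts dotted version strings numerically rather than lexicographically, so '1.10.0' beats '1.9.1'. A version-neutral sketch of the same idea (`max_version` is an illustrative name):

```python
def max_version(versions):
    # Drop empty entries, then compare versions as lists of integers
    # so '1.10.0' sorts above '1.9.1'.
    versions = [v for v in versions if v]
    if not versions:
        return None
    return max(versions, key=lambda v: [int(part) for part in v.split('.')])

best = max_version(['1.2.5', '1.10.0', None, '1.9.1'])
```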
class CountingConsoleProblemAccumulator(transitfeed.SimpleProblemAccumulator): | |
def __init__(self): | |
self._error_count = 0 | |
self._warning_count = 0 | |
def _Report(self, e): | |
transitfeed.SimpleProblemAccumulator._Report(self, e) | |
if e.IsError(): | |
self._error_count += 1 | |
else: | |
self._warning_count += 1 | |
def ErrorCount(self): | |
return self._error_count | |
def WarningCount(self): | |
return self._warning_count | |
def FormatCount(self): | |
return ProblemCountText(self.ErrorCount(), self.WarningCount()) | |
def HasIssues(self): | |
return self.ErrorCount() or self.WarningCount() | |
class BoundedProblemList(object): | |
"""A list of one type of ExceptionWithContext objects with bounded size.""" | |
def __init__(self, size_bound): | |
self._count = 0 | |
self._exceptions = [] | |
self._size_bound = size_bound | |
def Add(self, e): | |
self._count += 1 | |
try: | |
bisect.insort(self._exceptions, e) | |
except TypeError: | |
# The base class ExceptionWithContext raises this exception in __cmp__ | |
# to signal that an object is not comparable. Instead of keeping the most | |
# significant issues, keep the first ones reported. | |
if self._count <= self._size_bound: | |
self._exceptions.append(e) | |
else: | |
# self._exceptions is in order. Drop the least significant if the list is | |
# now too long. | |
if self._count > self._size_bound: | |
del self._exceptions[-1] | |
def _GetDroppedCount(self): | |
return self._count - len(self._exceptions) | |
def __repr__(self): | |
return "<BoundedProblemList %s>" % repr(self._exceptions) | |
count = property(lambda s: s._count) | |
dropped_count = property(_GetDroppedCount) | |
problems = property(lambda s: s._exceptions) | |
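`BoundedProblemList` keeps at most `size_bound` items in sorted order while still counting every report. A minimal standalone sketch of that bounded-insert behavior for comparable items (`BoundedList` is illustrative, not the class above):

```python
import bisect

class BoundedList(object):
    # Keep the smallest-sorting (most significant) items, capped at
    # size_bound, while counting everything that was added.
    def __init__(self, size_bound):
        self.count = 0
        self.items = []
        self.size_bound = size_bound

    def add(self, item):
        self.count += 1
        bisect.insort(self.items, item)
        if self.count > self.size_bound:
            del self.items[-1]  # drop the least significant item

bounded = BoundedList(2)
for severity in [5, 1, 3]:
    bounded.add(severity)
```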
class LimitPerTypeProblemAccumulator(transitfeed.ProblemAccumulatorInterface): | |
def __init__(self, limit_per_type): | |
# {TYPE_WARNING: {"ClassName": BoundedProblemList()}} | |
self._type_to_name_to_problist = { | |
TYPE_WARNING: defaultdict(lambda: BoundedProblemList(limit_per_type)), | |
TYPE_ERROR: defaultdict(lambda: BoundedProblemList(limit_per_type)) | |
} | |
def HasIssues(self): | |
return (self._type_to_name_to_problist[TYPE_ERROR] or | |
self._type_to_name_to_problist[TYPE_WARNING]) | |
def _Report(self, e): | |
self._type_to_name_to_problist[e.GetType()][e.__class__.__name__].Add(e) | |
def ErrorCount(self): | |
error_sets = self._type_to_name_to_problist[TYPE_ERROR].values() | |
return sum(map(lambda v: v.count, error_sets)) | |
def WarningCount(self): | |
warning_sets = self._type_to_name_to_problist[TYPE_WARNING].values() | |
return sum(map(lambda v: v.count, warning_sets)) | |
def ProblemList(self, problem_type, class_name): | |
"""Return the BoundedProblemList object for given type and class.""" | |
return self._type_to_name_to_problist[problem_type][class_name] | |
def ProblemListMap(self, problem_type): | |
"""Return the map from class name to BoundedProblemList object.""" | |
return self._type_to_name_to_problist[problem_type] | |
class HTMLCountingProblemAccumulator(LimitPerTypeProblemAccumulator): | |
def FormatType(self, f, level_name, class_problist): | |
"""Write the HTML dumping all problems of one type. | |
Args: | |
f: file object open for writing | |
level_name: string such as "Error" or "Warning" | |
class_problist: sequence of tuples (class name, | |
BoundedProblemList object) | |
""" | |
class_problist.sort() | |
output = [] | |
for classname, problist in class_problist: | |
output.append('<h4 class="issueHeader"><a name="%s%s">%s</a></h4><ul>\n' % | |
(level_name, classname, UnCamelCase(classname))) | |
for e in problist.problems: | |
self.FormatException(e, output) | |
if problist.dropped_count: | |
output.append('<li>and %d more of this type.' % | |
(problist.dropped_count)) | |
output.append('</ul>\n') | |
f.write(''.join(output)) | |
def FormatTypeSummaryTable(self, level_name, name_to_problist): | |
"""Return an HTML table listing the number of problems by class name. | |
Args: | |
level_name: string such as "Error" or "Warning" | |
name_to_problist: dict mapping class name to an BoundedProblemList object | |
Returns: | |
HTML in a string | |
""" | |
output = [] | |
output.append('<table>') | |
for classname in sorted(name_to_problist.keys()): | |
problist = name_to_problist[classname] | |
human_name = MaybePluralizeWord(problist.count, UnCamelCase(classname)) | |
output.append('<tr><td>%d</td><td><a href="#%s%s">%s</a></td></tr>\n' % | |
(problist.count, level_name, classname, human_name)) | |
output.append('</table>\n') | |
return ''.join(output) | |
def FormatException(self, e, output): | |
"""Append HTML version of e to list output.""" | |
d = e.GetDictToFormat() | |
for k in ('file_name', 'feedname', 'column_name'): | |
if k in d.keys(): | |
d[k] = '<code>%s</code>' % d[k] | |
problem_text = e.FormatProblem(d).replace('\n', '<br>') | |
output.append('<li>') | |
output.append('<div class="problem">%s</div>' % | |
transitfeed.EncodeUnicode(problem_text)) | |
try: | |
if hasattr(e, 'row_num'): | |
line_str = 'line %d of ' % e.row_num | |
else: | |
line_str = '' | |
output.append('in %s<code>%s</code><br>\n' % | |
(line_str, e.file_name)) | |
row = e.row | |
headers = e.headers | |
column_name = e.column_name | |
table_header = '' # HTML | |
table_data = '' # HTML | |
for header, value in zip(headers, row): | |
attributes = '' | |
if header == column_name: | |
attributes = ' class="problem"' | |
table_header += '<th%s>%s</th>' % (attributes, header) | |
table_data += '<td%s>%s</td>' % (attributes, value) | |
# Make sure output is encoded into UTF-8 | |
output.append('<table class="dump"><tr>%s</tr>\n' % | |
transitfeed.EncodeUnicode(table_header)) | |
output.append('<tr>%s</tr></table>\n' % | |
transitfeed.EncodeUnicode(table_data)) | |
except AttributeError, e: | |
pass # Hope this was getting an attribute from e ;-) | |
output.append('<br></li>\n') | |
def FormatCount(self): | |
return ProblemCountText(self.ErrorCount(), self.WarningCount()) | |
def CountTable(self): | |
output = [] | |
output.append('<table class="count_outside">\n') | |
output.append('<tr>') | |
if self.ProblemListMap(TYPE_ERROR): | |
output.append('<td><span class="fail">%s</span></td>' % | |
PrettyNumberWord(self.ErrorCount(), "error")) | |
if self.ProblemListMap(TYPE_WARNING): | |
output.append('<td><span class="fail">%s</span></td>' % | |
PrettyNumberWord(self.WarningCount(), "warning")) | |
output.append('</tr>\n<tr>') | |
if self.ProblemListMap(TYPE_ERROR): | |
output.append('<td>\n') | |
output.append(self.FormatTypeSummaryTable("Error", | |
self.ProblemListMap(TYPE_ERROR))) | |
output.append('</td>\n') | |
if self.ProblemListMap(TYPE_WARNING): | |
output.append('<td>\n') | |
output.append(self.FormatTypeSummaryTable("Warning", | |
self.ProblemListMap(TYPE_WARNING))) | |
output.append('</td>\n') | |
output.append('</table>') | |
return ''.join(output) | |
def WriteOutput(self, feed_location, f, schedule, other_problems): | |
"""Write the html output to f.""" | |
if self.HasIssues(): | |
if self.ErrorCount() + self.WarningCount() == 1: | |
summary = ('<span class="fail">Found this problem:</span>\n%s' % | |
self.CountTable()) | |
else: | |
summary = ('<span class="fail">Found these problems:</span>\n%s' % | |
self.CountTable()) | |
else: | |
summary = '<span class="pass">feed validated successfully</span>' | |
if other_problems is not None: | |
summary = ('<span class="fail">\n%s</span><br><br>' % | |
other_problems) + summary | |
basename = os.path.basename(feed_location) | |
feed_path = (feed_location[:feed_location.rfind(basename)], basename) | |
agencies = ', '.join(['<a href="%s">%s</a>' % (a.agency_url, a.agency_name) | |
for a in schedule.GetAgencyList()]) | |
if not agencies: | |
agencies = '?' | |
dates = "No valid service dates found" | |
(start, end) = schedule.GetDateRange() | |
if start and end: | |
def FormatDate(yyyymmdd): | |
src_format = "%Y%m%d" | |
dst_format = "%B %d, %Y" | |
try: | |
return time.strftime(dst_format, | |
time.strptime(yyyymmdd, src_format)) | |
except ValueError: | |
return yyyymmdd | |
formatted_start = FormatDate(start) | |
formatted_end = FormatDate(end) | |
dates = "%s to %s" % (formatted_start, formatted_end) | |
calendar_summary = CalendarSummary(schedule) | |
if calendar_summary: | |
calendar_summary_html = """<br> | |
During the upcoming service dates %(date_summary_range)s: | |
<table> | |
<tr><th class="header">Average trips per date:</th><td class="header">%(mean_trips)s</td></tr> | |
<tr><th class="header">Most trips on a date:</th><td class="header">%(max_trips)s, on %(max_trips_dates)s</td></tr> | |
<tr><th class="header">Least trips on a date:</th><td class="header">%(min_trips)s, on %(min_trips_dates)s</td></tr> | |
</table>""" % calendar_summary | |
else: | |
calendar_summary_html = "" | |
output_prefix = """ | |
<html> | |
<head> | |
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> | |
<title>FeedValidator: %(feed_file)s</title> | |
<style> | |
body {font-family: Georgia, serif; background-color: white} | |
.path {color: gray} | |
div.problem {max-width: 500px} | |
table.dump td,th {background-color: khaki; padding: 2px; font-family:monospace} | |
table.dump td.problem,th.problem {background-color: #dc143c; color: white; padding: 2px; font-family:monospace} | |
table.count_outside td {vertical-align: top} | |
table.count_outside {border-spacing: 0px; } | |
table {border-spacing: 5px 0px; margin-top: 3px} | |
h3.issueHeader {padding-left: 0.5em} | |
h4.issueHeader {padding-left: 1em} | |
.pass {background-color: lightgreen} | |
.fail {background-color: yellow} | |
.pass, .fail {font-size: 16pt} | |
.header {background-color: white; font-family: Georgia, serif; padding: 0px} | |
th.header {text-align: right; font-weight: normal; color: gray} | |
.footer {font-size: 10pt} | |
</style> | |
</head> | |
<body> | |
GTFS validation results for feed:<br> | |
<code><span class="path">%(feed_dir)s</span><b>%(feed_file)s</b></code> | |
<br><br> | |
<table> | |
<tr><th class="header">Agencies:</th><td class="header">%(agencies)s</td></tr> | |
<tr><th class="header">Routes:</th><td class="header">%(routes)s</td></tr> | |
<tr><th class="header">Stops:</th><td class="header">%(stops)s</td></tr> | |
<tr><th class="header">Trips:</th><td class="header">%(trips)s</td></tr> | |
<tr><th class="header">Shapes:</th><td class="header">%(shapes)s</td></tr> | |
<tr><th class="header">Effective:</th><td class="header">%(dates)s</td></tr> | |
</table> | |
%(calendar_summary)s | |
<br> | |
%(problem_summary)s | |
<br><br> | |
""" % { "feed_file": feed_path[1], | |
"feed_dir": feed_path[0], | |
"agencies": agencies, | |
"routes": len(schedule.GetRouteList()), | |
"stops": len(schedule.GetStopList()), | |
"trips": len(schedule.GetTripList()), | |
"shapes": len(schedule.GetShapeList()), | |
"dates": dates, | |
"problem_summary": summary, | |
"calendar_summary": calendar_summary_html} | |
# In the output_suffix string below, time.strftime() returns a byte string | |
# (not a Unicode one) in the default system encoding. decode() converts that | |
# time string back into a Unicode string so that the operating system's | |
# encoding cannot garble any non-ASCII characters it may contain. | |
time_unicode = (time.strftime('%B %d, %Y at %I:%M %p %Z'). | |
decode(sys.getfilesystemencoding())) | |
output_suffix = """ | |
<div class="footer"> | |
Generated by <a href="http://code.google.com/p/googletransitdatafeed/wiki/FeedValidator"> | |
FeedValidator</a> version %s on %s. | |
</div> | |
</body> | |
</html>""" % (transitfeed.__version__, time_unicode) | |
f.write(transitfeed.EncodeUnicode(output_prefix)) | |
if self.ProblemListMap(TYPE_ERROR): | |
f.write('<h3 class="issueHeader">Errors:</h3>') | |
self.FormatType(f, "Error", | |
self.ProblemListMap(TYPE_ERROR).items()) | |
if self.ProblemListMap(TYPE_WARNING): | |
f.write('<h3 class="issueHeader">Warnings:</h3>') | |
self.FormatType(f, "Warning", | |
self.ProblemListMap(TYPE_WARNING).items()) | |
f.write(transitfeed.EncodeUnicode(output_suffix)) | |
def RunValidationOutputFromOptions(feed, options): | |
"""Validate feed, output results per options and return an exit code.""" | |
if options.output.upper() == "CONSOLE": | |
return RunValidationOutputToConsole(feed, options) | |
else: | |
return RunValidationOutputToFilename(feed, options, options.output) | |
def RunValidationOutputToFilename(feed, options, output_filename): | |
"""Validate feed, save HTML at output_filename and return an exit code.""" | |
try: | |
output_file = open(output_filename, 'w') | |
exit_code = RunValidationOutputToFile(feed, options, output_file) | |
output_file.close() | |
except IOError, e: | |
print 'Error while writing %s: %s' % (output_filename, e) | |
output_filename = None | |
exit_code = 2 | |
if options.manual_entry and output_filename: | |
webbrowser.open('file://%s' % os.path.abspath(output_filename)) | |
return exit_code | |
def RunValidationOutputToFile(feed, options, output_file): | |
"""Validate feed, write HTML to output_file and return an exit code.""" | |
accumulator = HTMLCountingProblemAccumulator(options.limit_per_type) | |
problems = transitfeed.ProblemReporter(accumulator) | |
schedule, exit_code, other_problems_string = RunValidation(feed, options, | |
problems) | |
if isinstance(feed, basestring): | |
feed_location = feed | |
else: | |
feed_location = getattr(feed, 'name', repr(feed)) | |
accumulator.WriteOutput(feed_location, output_file, schedule, | |
other_problems_string) | |
return exit_code | |
def RunValidationOutputToConsole(feed, options): | |
"""Validate feed, print reports and return an exit code.""" | |
accumulator = CountingConsoleProblemAccumulator() | |
problems = transitfeed.ProblemReporter(accumulator) | |
_, exit_code, _ = RunValidation(feed, options, problems) | |
return exit_code | |
def RunValidation(feed, options, problems): | |
"""Validate feed, returning the loaded Schedule and exit code. | |
Args: | |
feed: GTFS file, either path of the file as a string or a file object | |
options: options object returned by optparse | |
problems: transitfeed.ProblemReporter instance | |
Returns: | |
a transitfeed.Schedule object, exit code and plain text string of other | |
problems | |
Exit code is 2 if an extension is provided but can't be loaded, 1 if | |
problems are found and 0 if the Schedule is problem free. | |
The plain text string is '' if no other problems are found.
""" | |
other_problems_string = CheckVersion(latest_version=options.latest_version) | |
# TODO: Add tests for this flag in testfeedvalidator.py | |
if options.extension: | |
try: | |
__import__(options.extension) | |
extension_module = sys.modules[options.extension] | |
except ImportError: | |
# TODO: Document extensions in a wiki page, place link here | |
print("Could not import extension %s! Please ensure it is a proper " | |
"Python module." % options.extension) | |
exit(2) | |
else: | |
extension_module = transitfeed | |
gtfs_factory = extension_module.GetGtfsFactory() | |
print 'validating %s' % feed | |
loader = gtfs_factory.Loader(feed, problems=problems, extra_validation=False, | |
memory_db=options.memory_db, | |
check_duplicate_trips=\ | |
options.check_duplicate_trips, | |
gtfs_factory=gtfs_factory) | |
schedule = loader.Load() | |
schedule.Validate(service_gap_interval=options.service_gap_interval) | |
if feed == 'IWantMyvalidation-crash.txt': | |
# See test/testfeedvalidator.py | |
raise Exception('For testing the feed validator crash handler.') | |
if other_problems_string: | |
print other_problems_string | |
accumulator = problems.GetAccumulator() | |
if accumulator.HasIssues(): | |
print 'ERROR: %s found' % accumulator.FormatCount() | |
return schedule, 1, other_problems_string | |
else: | |
print 'feed validated successfully' | |
return schedule, 0, other_problems_string | |
def CheckVersion(latest_version=''): | |
""" | |
Check there is newer version of this project. | |
Codes are based on http://www.voidspace.org.uk/python/articles/urllib2.shtml | |
Already got permission from the copyright holder. | |
""" | |
current_version = transitfeed.__version__ | |
if not latest_version: | |
timeout = 20 | |
socket.setdefaulttimeout(timeout) | |
request = Request(SVN_TAG_URL) | |
try: | |
response = urlopen(request) | |
content = response.read() | |
versions = re.findall(r'>transitfeed-([\d\.]+)\/<\/a>', content) | |
latest_version = MaxVersion(versions) | |
except HTTPError, e: | |
return('The server couldn\'t fulfill the request. Error code: %s.' | |
% e.code) | |
except URLError, e: | |
return('We failed to reach transitfeed server. Reason: %s.' % e.reason) | |
if not latest_version: | |
return('We had trouble parsing the contents of %s.' % SVN_TAG_URL) | |
newest_version = MaxVersion([latest_version, current_version]) | |
if current_version != newest_version: | |
return('A new version %s of transitfeed is available. Please visit ' | |
'http://code.google.com/p/googletransitdatafeed and download.' | |
% newest_version) | |
def main(): | |
usage = \ | |
'''%prog [options] [<input GTFS.zip>] | |
Validates GTFS file (or directory) <input GTFS.zip> and writes an HTML
report of the results to validation-results.html.
If <input GTFS.zip> is omitted the filename is read from the console. Dragging
a file into the console window will usually enter its filename.
For more information see | |
http://code.google.com/p/googletransitdatafeed/wiki/FeedValidator | |
''' | |
parser = util.OptionParserLongError( | |
usage=usage, version='%prog '+transitfeed.__version__) | |
parser.add_option('-n', '--noprompt', action='store_false', | |
dest='manual_entry', | |
help='do not prompt for feed location or load output in ' | |
'browser') | |
parser.add_option('-o', '--output', dest='output', metavar='FILE', | |
help='write html output to FILE or --output=CONSOLE to ' | |
'print all errors and warnings to the command console') | |
parser.add_option('-p', '--performance', action='store_true', | |
dest='performance', | |
help='output memory and time performance '
'(Availability: Unix)')
parser.add_option('-m', '--memory_db', dest='memory_db', action='store_true', | |
help='Use in-memory sqlite db instead of a temporary file. ' | |
'It is faster but uses more RAM.') | |
parser.add_option('-d', '--duplicate_trip_check', | |
dest='check_duplicate_trips', action='store_true', | |
help='Check for duplicate trips which go through the same ' | |
'stops with same service and start times') | |
parser.add_option('-l', '--limit_per_type', | |
dest='limit_per_type', action='store', type='int', | |
help='Maximum number of errors and warnings to keep of ' | |
'each type') | |
parser.add_option('--latest_version', dest='latest_version', | |
action='store', | |
help='a version number such as 1.2.1 or None to get the ' | |
'latest version from code.google.com. Output a warning if ' | |
'transitfeed.py is older than this version.') | |
parser.add_option('--service_gap_interval', | |
dest='service_gap_interval', | |
action='store', | |
type='int', | |
help='the number of consecutive days to search for with no ' | |
'scheduled service. For each interval with no service ' | |
'having this number of days or more a warning will be ' | |
'issued') | |
parser.add_option('--extension', | |
dest='extension', | |
help='the name of the Python module that contains a GTFS '
'extension that is to be loaded and used while validating ' | |
'the specified feed.') | |
parser.set_defaults(manual_entry=True, output='validation-results.html', | |
memory_db=False, check_duplicate_trips=False, | |
limit_per_type=5, latest_version='', | |
service_gap_interval=13) | |
(options, args) = parser.parse_args() | |
if not len(args) == 1: | |
if options.manual_entry: | |
feed = raw_input('Enter Feed Location: ') | |
else: | |
parser.error('You must provide the path of a single feed') | |
else: | |
feed = args[0] | |
feed = feed.strip('"') | |
if options.performance: | |
return ProfileRunValidationOutputFromOptions(feed, options) | |
else: | |
return RunValidationOutputFromOptions(feed, options) | |
def ProfileRunValidationOutputFromOptions(feed, options): | |
"""Run RunValidationOutputFromOptions, print profile and return exit code.""" | |
import cProfile | |
import pstats | |
# runctx will modify a dict, but not locals(). We need a way to get rv back. | |
locals_for_exec = locals() | |
cProfile.runctx('rv = RunValidationOutputFromOptions(feed, options)', | |
globals(), locals_for_exec, 'validate-stats') | |
# Only available on Unix, http://docs.python.org/lib/module-resource.html | |
import resource | |
print "Time: %d seconds" % ( | |
resource.getrusage(resource.RUSAGE_SELF).ru_utime + | |
resource.getrusage(resource.RUSAGE_SELF).ru_stime) | |
# http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/286222 | |
# http://aspn.activestate.com/ASPN/Cookbook/ "The recipes are freely | |
# available for review and use." | |
def _VmB(VmKey): | |
"""Return size from proc status in bytes.""" | |
_proc_status = '/proc/%d/status' % os.getpid() | |
_scale = {'kB': 1024.0, 'mB': 1024.0*1024.0, | |
'KB': 1024.0, 'MB': 1024.0*1024.0} | |
# get pseudo file /proc/<pid>/status | |
try:
t = open(_proc_status)
v = t.read()
t.close()
except IOError:
return 0 # non-Linux?
# get VmKey line e.g. 'VmRSS: 9999 kB\n ...' | |
try:
i = v.index(VmKey)
v = v[i:].split(None, 3) # split on whitespace
except ValueError:
return 0 # VmKey not found; v is empty?
if len(v) < 3:
return 0 # invalid format?
# convert Vm value to bytes | |
return int(float(v[1]) * _scale[v[2]]) | |
# I ran this on over a hundred GTFS files, comparing VmSize to VmRSS | |
# (resident set size). The difference was always under 2% or 3MB. | |
print "Virtual Memory Size: %d bytes" % _VmB('VmSize:') | |
# Output report of where CPU time was spent. | |
p = pstats.Stats('validate-stats') | |
p.strip_dirs() | |
p.sort_stats('cumulative').print_stats(30) | |
p.sort_stats('cumulative').print_callers(30) | |
return locals_for_exec['rv'] | |
if __name__ == '__main__': | |
util.RunWithCrashHandler(main) | |
__doc__ = """ | |
Package holding files for Google Transit Feed Specification Schedule Viewer. | |
""" | |
# This package contains the data files for schedule_viewer.py, a script that | |
# comes with the transitfeed distribution. According to the thread | |
# "[Distutils] distutils data_files and setuptools.pkg_resources are driving | |
# me crazy" this is the easiest way to include data files. My experience | |
# agrees. - Tom 2007-05-29 | |
// =================================================================== | |
// Author: Matt Kruse <matt@mattkruse.com> | |
// WWW: http://www.mattkruse.com/ | |
// | |
// NOTICE: You may use this code for any purpose, commercial or | |
// private, without any further permission from the author. You may | |
// remove this notice from your final code if you wish, however it is | |
// appreciated by the author if at least my web site address is kept. | |
// | |
// You may *NOT* re-distribute this code in any way except through its | |
// use. That means, you can include it in your product, or your web | |
// site, or any other form where the code is actually being used. You | |
// may not put the plain javascript up on your site for download or | |
// include it in your javascript libraries for download. | |
// If you wish to share this code with others, please just point them | |
// to the URL instead. | |
// Please DO NOT link directly to my .js files from your site. Copy | |
// the files to your server and use them there. Thank you. | |
// =================================================================== | |
/* SOURCE FILE: AnchorPosition.js */ | |
/* | |
AnchorPosition.js | |
Author: Matt Kruse | |
Last modified: 10/11/02 | |
DESCRIPTION: These functions find the position of an <A> tag in a document, | |
so other elements can be positioned relative to it. | |
COMPATIBILITY: Netscape 4.x, 6.x, Mozilla, IE 5.x, 6.x on Windows. Some small
positioning errors - usually with Window positioning - occur on the
Macintosh platform.
FUNCTIONS: | |
getAnchorPosition(anchorname) | |
Returns an Object() having .x and .y properties of the pixel coordinates | |
of the upper-left corner of the anchor. Position is relative to the PAGE. | |
getAnchorWindowPosition(anchorname) | |
Returns an Object() having .x and .y properties of the pixel coordinates | |
of the upper-left corner of the anchor, relative to the WHOLE SCREEN. | |
NOTES: | |
1) For popping up separate browser windows, use getAnchorWindowPosition.
Otherwise, use getAnchorPosition.
2) Your anchor tag MUST contain both NAME and ID attributes which are the | |
same. For example: | |
<A NAME="test" ID="test"> </A> | |
3) There must be at least a space between <A> </A> for IE5.5 to see the | |
anchor tag correctly. Do not do <A></A> with no space. | |
*/ | |
// getAnchorPosition(anchorname) | |
// This function returns an object having .x and .y properties which are the coordinates | |
// of the named anchor, relative to the page. | |
function getAnchorPosition(anchorname) { | |
// This function will return an Object with x and y properties | |
var useWindow=false; | |
var coordinates=new Object(); | |
var x=0,y=0; | |
// Browser capability sniffing | |
var use_gebi=false, use_css=false, use_layers=false; | |
if (document.getElementById) { use_gebi=true; } | |
else if (document.all) { use_css=true; } | |
else if (document.layers) { use_layers=true; } | |
// Logic to find position | |
if (use_gebi && document.all) { | |
x=AnchorPosition_getPageOffsetLeft(document.all[anchorname]); | |
y=AnchorPosition_getPageOffsetTop(document.all[anchorname]); | |
} | |
else if (use_gebi) { | |
var o=document.getElementById(anchorname); | |
x=AnchorPosition_getPageOffsetLeft(o); | |
y=AnchorPosition_getPageOffsetTop(o); | |
} | |
else if (use_css) { | |
x=AnchorPosition_getPageOffsetLeft(document.all[anchorname]); | |
y=AnchorPosition_getPageOffsetTop(document.all[anchorname]); | |
} | |
else if (use_layers) { | |
var found=0; | |
for (var i=0; i<document.anchors.length; i++) { | |
if (document.anchors[i].name==anchorname) { found=1; break; } | |
} | |
if (found==0) { | |
coordinates.x=0; coordinates.y=0; return coordinates; | |
} | |
x=document.anchors[i].x; | |
y=document.anchors[i].y; | |
} | |
else { | |
coordinates.x=0; coordinates.y=0; return coordinates; | |
} | |
coordinates.x=x; | |
coordinates.y=y; | |
return coordinates; | |
} | |
// getAnchorWindowPosition(anchorname) | |
// This function returns an object having .x and .y properties which are the coordinates | |
// of the named anchor, relative to the window | |
function getAnchorWindowPosition(anchorname) { | |
var coordinates=getAnchorPosition(anchorname); | |
var x=0; | |
var y=0; | |
if (document.getElementById) { | |
if (isNaN(window.screenX)) { | |
x=coordinates.x-document.body.scrollLeft+window.screenLeft; | |
y=coordinates.y-document.body.scrollTop+window.screenTop; | |
} | |
else { | |
x=coordinates.x+window.screenX+(window.outerWidth-window.innerWidth)-window.pageXOffset; | |
y=coordinates.y+window.screenY+(window.outerHeight-24-window.innerHeight)-window.pageYOffset; | |
} | |
} | |
else if (document.all) { | |
x=coordinates.x-document.body.scrollLeft+window.screenLeft; | |
y=coordinates.y-document.body.scrollTop+window.screenTop; | |
} | |
else if (document.layers) { | |
x=coordinates.x+window.screenX+(window.outerWidth-window.innerWidth)-window.pageXOffset; | |
y=coordinates.y+window.screenY+(window.outerHeight-24-window.innerHeight)-window.pageYOffset; | |
} | |
coordinates.x=x; | |
coordinates.y=y; | |
return coordinates; | |
} | |
// Functions for IE to get position of an object | |
function AnchorPosition_getPageOffsetLeft (el) { | |
var ol=el.offsetLeft; | |
while ((el=el.offsetParent) != null) { ol += el.offsetLeft; } | |
return ol; | |
} | |
function AnchorPosition_getWindowOffsetLeft (el) { | |
return AnchorPosition_getPageOffsetLeft(el)-document.body.scrollLeft; | |
} | |
function AnchorPosition_getPageOffsetTop (el) { | |
var ot=el.offsetTop; | |
while((el=el.offsetParent) != null) { ot += el.offsetTop; } | |
return ot; | |
} | |
function AnchorPosition_getWindowOffsetTop (el) { | |
return AnchorPosition_getPageOffsetTop(el)-document.body.scrollTop; | |
} | |
/* SOURCE FILE: date.js */ | |
// HISTORY | |
// ------------------------------------------------------------------ | |
// May 17, 2003: Fixed bug in parseDate() for dates <1970 | |
// March 11, 2003: Added parseDate() function | |
// March 11, 2003: Added "NNN" formatting option. Doesn't match up | |
// perfectly with SimpleDateFormat formats, but | |
// backwards-compatability was required. | |
// ------------------------------------------------------------------ | |
// These functions use the same 'format' strings as the | |
// java.text.SimpleDateFormat class, with minor exceptions. | |
// The format string consists of the following abbreviations: | |
// | |
// Field | Full Form | Short Form | |
// -------------+--------------------+----------------------- | |
// Year | yyyy (4 digits) | yy (2 digits), y (2 or 4 digits) | |
// Month | MMM (name or abbr.)| MM (2 digits), M (1 or 2 digits) | |
// | NNN (abbr.) | | |
// Day of Month | dd (2 digits) | d (1 or 2 digits) | |
// Day of Week | EE (name) | E (abbr) | |
// Hour (1-12) | hh (2 digits) | h (1 or 2 digits) | |
// Hour (0-23) | HH (2 digits) | H (1 or 2 digits) | |
// Hour (0-11) | KK (2 digits) | K (1 or 2 digits) | |
// Hour (1-24) | kk (2 digits) | k (1 or 2 digits) | |
// Minute | mm (2 digits) | m (1 or 2 digits) | |
// Second | ss (2 digits) | s (1 or 2 digits) | |
// AM/PM | a | | |
// | |
// NOTE THE DIFFERENCE BETWEEN MM and mm! Month=MM, not mm! | |
// Examples: | |
// "MMM d, y" matches: January 01, 2000 | |
// Dec 1, 1900 | |
// Nov 20, 00 | |
// "M/d/yy" matches: 01/20/00 | |
// 9/2/00 | |
// "MMM dd, yyyy hh:mm:ssa" matches: "January 01, 2000 12:30:45AM" | |
// ------------------------------------------------------------------ | |
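The token table above can be illustrated with a minimal, self-contained sketch. The helper names below (`pad2`, `formatYMDHMS`) are hypothetical and not part of date.js; the sketch only covers the zero-padded tokens yyyy, MM, dd, HH, mm, ss:

```javascript
// Minimal sketch of the token table above: zero-padded two-digit fields,
// as in "yyyy-MM-dd HH:mm:ss". Hypothetical helpers, not part of date.js.
function pad2(n) { return (n < 10 ? "0" : "") + n; }

function formatYMDHMS(date) {
  return date.getFullYear() + "-" +
         pad2(date.getMonth() + 1) + "-" +  // Month is MM, not mm!
         pad2(date.getDate()) + " " +
         pad2(date.getHours()) + ":" +
         pad2(date.getMinutes()) + ":" +
         pad2(date.getSeconds());
}
```

For example, `formatYMDHMS(new Date(2000, 0, 5, 9, 3, 7))` yields "2000-01-05 09:03:07" (JavaScript months are zero-based, so 0 is January).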
var MONTH_NAMES=new Array('January','February','March','April','May','June','July','August','September','October','November','December','Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec'); | |
var DAY_NAMES=new Array('Sunday','Monday','Tuesday','Wednesday','Thursday','Friday','Saturday','Sun','Mon','Tue','Wed','Thu','Fri','Sat'); | |
function LZ(x) {return(x<0||x>9?"":"0")+x} | |
// ------------------------------------------------------------------ | |
// isDate ( date_string, format_string ) | |
// Returns true if date string matches format of format string and | |
// is a valid date. Else returns false. | |
// It is recommended that you trim whitespace around the value before | |
// passing it to this function, as whitespace is NOT ignored! | |
// ------------------------------------------------------------------ | |
function isDate(val,format) { | |
var date=getDateFromFormat(val,format); | |
if (date==0) { return false; } | |
return true; | |
} | |
// ------------------------------------------------------------------- | |
// compareDates(date1,date1format,date2,date2format) | |
// Compare two date strings to see which is greater. | |
// Returns: | |
// 1 if date1 is greater than date2 | |
// 0 if date2 is greater than date1 or if they are the same
// -1 if either of the dates is in an invalid format | |
// ------------------------------------------------------------------- | |
function compareDates(date1,dateformat1,date2,dateformat2) { | |
var d1=getDateFromFormat(date1,dateformat1); | |
var d2=getDateFromFormat(date2,dateformat2); | |
if (d1==0 || d2==0) { | |
return -1; | |
} | |
else if (d1 > d2) { | |
return 1; | |
} | |
return 0; | |
} | |
// ------------------------------------------------------------------ | |
// formatDate (date_object, format) | |
// Returns a date in the output format specified. | |
// The format string uses the same abbreviations as in getDateFromFormat() | |
// ------------------------------------------------------------------ | |
function formatDate(date,format) { | |
format=format+""; | |
var result=""; | |
var i_format=0; | |
var c=""; | |
var token=""; | |
var y=date.getYear()+""; | |
var M=date.getMonth()+1; | |
var d=date.getDate(); | |
var E=date.getDay(); | |
var H=date.getHours(); | |
var m=date.getMinutes(); | |
var s=date.getSeconds(); | |
var yyyy,yy,MMM,MM,dd,hh,h,mm,ss,ampm,HH,H,KK,K,kk,k; | |
// Convert real date parts into formatted versions | |
var value=new Object(); | |
if (y.length < 4) {y=""+(y-0+1900);} | |
value["y"]=""+y; | |
value["yyyy"]=y; | |
value["yy"]=y.substring(2,4); | |
value["M"]=M; | |
value["MM"]=LZ(M); | |
value["MMM"]=MONTH_NAMES[M-1]; | |
value["NNN"]=MONTH_NAMES[M+11]; | |
value["d"]=d; | |
value["dd"]=LZ(d); | |
value["E"]=DAY_NAMES[E+7]; | |
value["EE"]=DAY_NAMES[E]; | |
value["H"]=H; | |
value["HH"]=LZ(H); | |
if (H==0){value["h"]=12;} | |
else if (H>12){value["h"]=H-12;} | |
else {value["h"]=H;} | |
value["hh"]=LZ(value["h"]); | |
if (H>11){value["K"]=H-12;} else {value["K"]=H;} | |
value["k"]=H+1; | |
value["KK"]=LZ(value["K"]); | |
value["kk"]=LZ(value["k"]); | |
if (H > 11) { value["a"]="PM"; } | |
else { value["a"]="AM"; } | |
value["m"]=m; | |
value["mm"]=LZ(m); | |
value["s"]=s; | |
value["ss"]=LZ(s); | |
while (i_format < format.length) { | |
c=format.charAt(i_format); | |
token=""; | |
while ((format.charAt(i_format)==c) && (i_format < format.length)) { | |
token += format.charAt(i_format++); | |
} | |
if (value[token] != null) { result=result + value[token]; } | |
else { result=result + token; } | |
} | |
return result; | |
} | |
// ------------------------------------------------------------------ | |
// Utility functions for parsing in getDateFromFormat() | |
// ------------------------------------------------------------------ | |
function _isInteger(val) { | |
var digits="1234567890"; | |
for (var i=0; i < val.length; i++) { | |
if (digits.indexOf(val.charAt(i))==-1) { return false; } | |
} | |
return true; | |
} | |
function _getInt(str,i,minlength,maxlength) { | |
for (var x=maxlength; x>=minlength; x--) { | |
var token=str.substring(i,i+x); | |
if (token.length < minlength) { return null; } | |
if (_isInteger(token)) { return token; } | |
} | |
return null; | |
} | |
// ------------------------------------------------------------------ | |
// getDateFromFormat( date_string , format_string ) | |
// | |
// This function takes a date string and a format string. If the date
// string matches the format string, it returns the getTime() of the
// date. If it does not match, it returns 0.
// ------------------------------------------------------------------ | |
function getDateFromFormat(val,format) { | |
val=val+""; | |
format=format+""; | |
var i_val=0; | |
var i_format=0; | |
var c=""; | |
var token=""; | |
var token2=""; | |
var x,y; | |
var now=new Date(); | |
var year=now.getYear(); | |
var month=now.getMonth()+1; | |
var date=1; | |
var hh=now.getHours(); | |
var mm=now.getMinutes(); | |
var ss=now.getSeconds(); | |
var ampm=""; | |
while (i_format < format.length) { | |
// Get next token from format string | |
c=format.charAt(i_format); | |
token=""; | |
while ((format.charAt(i_format)==c) && (i_format < format.length)) { | |
token += format.charAt(i_format++); | |
} | |
// Extract contents of value based on format token | |
if (token=="yyyy" || token=="yy" || token=="y") { | |
if (token=="yyyy") { x=4;y=4; } | |
if (token=="yy") { x=2;y=2; } | |
if (token=="y") { x=2;y=4; } | |
year=_getInt(val,i_val,x,y); | |
if (year==null) { return 0; } | |
i_val += year.length; | |
if (year.length==2) { | |
if (year > 70) { year=1900+(year-0); } | |
else { year=2000+(year-0); } | |
} | |
} | |
else if (token=="MMM"||token=="NNN"){ | |
month=0; | |
for (var i=0; i<MONTH_NAMES.length; i++) { | |
var month_name=MONTH_NAMES[i]; | |
if (val.substring(i_val,i_val+month_name.length).toLowerCase()==month_name.toLowerCase()) { | |
if (token=="MMM"||(token=="NNN"&&i>11)) { | |
month=i+1; | |
if (month>12) { month -= 12; } | |
i_val += month_name.length; | |
break; | |
} | |
} | |
} | |
if ((month < 1)||(month>12)){return 0;} | |
} | |
else if (token=="EE"||token=="E"){ | |
for (var i=0; i<DAY_NAMES.length; i++) { | |
var day_name=DAY_NAMES[i]; | |
if (val.substring(i_val,i_val+day_name.length).toLowerCase()==day_name.toLowerCase()) { | |
i_val += day_name.length; | |
break; | |
} | |
} | |
} | |
else if (token=="MM"||token=="M") { | |
month=_getInt(val,i_val,token.length,2); | |
if(month==null||(month<1)||(month>12)){return 0;} | |
i_val+=month.length;} | |
else if (token=="dd"||token=="d") { | |
date=_getInt(val,i_val,token.length,2); | |
if(date==null||(date<1)||(date>31)){return 0;} | |
i_val+=date.length;} | |
else if (token=="hh"||token=="h") { | |
hh=_getInt(val,i_val,token.length,2); | |
if(hh==null||(hh<1)||(hh>12)){return 0;} | |
i_val+=hh.length;} | |
else if (token=="HH"||token=="H") { | |
hh=_getInt(val,i_val,token.length,2); | |
if(hh==null||(hh<0)||(hh>23)){return 0;} | |
i_val+=hh.length;} | |
else if (token=="KK"||token=="K") { | |
hh=_getInt(val,i_val,token.length,2); | |
if(hh==null||(hh<0)||(hh>11)){return 0;} | |
i_val+=hh.length;} | |
else if (token=="kk"||token=="k") { | |
hh=_getInt(val,i_val,token.length,2); | |
if(hh==null||(hh<1)||(hh>24)){return 0;} | |
i_val+=hh.length;hh--;} | |
else if (token=="mm"||token=="m") { | |
mm=_getInt(val,i_val,token.length,2); | |
if(mm==null||(mm<0)||(mm>59)){return 0;} | |
i_val+=mm.length;} | |
else if (token=="ss"||token=="s") { | |
ss=_getInt(val,i_val,token.length,2); | |
if(ss==null||(ss<0)||(ss>59)){return 0;} | |
i_val+=ss.length;} | |
else if (token=="a") { | |
if (val.substring(i_val,i_val+2).toLowerCase()=="am") {ampm="AM";} | |
else if (val.substring(i_val,i_val+2).toLowerCase()=="pm") {ampm="PM";} | |
else {return 0;} | |
i_val+=2;} | |
else { | |
if (val.substring(i_val,i_val+token.length)!=token) {return 0;} | |
else {i_val+=token.length;} | |
} | |
} | |
// If there are any trailing characters left in the value, it doesn't match | |
if (i_val != val.length) { return 0; } | |
// Is date valid for month? | |
if (month==2) { | |
// Check for leap year | |
if ( ( (year%4==0)&&(year%100 != 0) ) || (year%400==0) ) { // leap year | |
if (date > 29){ return 0; } | |
} | |
else { if (date > 28) { return 0; } } | |
} | |
if ((month==4)||(month==6)||(month==9)||(month==11)) { | |
if (date > 30) { return 0; } | |
} | |
// Correct hours value | |
if (hh<12 && ampm=="PM") { hh=hh-0+12; } | |
else if (hh>11 && ampm=="AM") { hh-=12; } | |
var newdate=new Date(year,month-1,date,hh,mm,ss); | |
return newdate.getTime(); | |
} | |
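// The February validation in getDateFromFormat() above uses the standard
// Gregorian leap-year rule. Isolated as a sketch (the helper name
// isLeapYear is hypothetical, not part of date.js):

```javascript
// Gregorian leap-year rule, as used by the month-length check above:
// divisible by 4 and not by 100, or divisible by 400.
function isLeapYear(year) {
  return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}
```

So 2000 and 2004 are leap years, while 1900 is not, which is why the parser rejects Feb 29, 1900 but accepts Feb 29, 2000.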
// ------------------------------------------------------------------ | |
// parseDate( date_string [, prefer_euro_format] ) | |
// | |
// This function takes a date string and tries to match it to a | |
// number of possible date formats to get the value. It will try to | |
// match against the following international formats, in this order: | |
// y-M-d MMM d, y MMM d,y y-MMM-d d-MMM-y MMM d | |
// M/d/y M-d-y M.d.y MMM-d M/d M-d | |
// d/M/y d-M-y d.M.y d-MMM d/M d-M | |
// A second argument may be passed to instruct the method to search | |
// for formats like d/M/y (european format) before M/d/y (American). | |
// Returns a Date object or null if no patterns match. | |
// ------------------------------------------------------------------ | |
function parseDate(val) { | |
var preferEuro=(arguments.length==2)?arguments[1]:false; | |
generalFormats=new Array('y-M-d','MMM d, y','MMM d,y','y-MMM-d','d-MMM-y','MMM d'); | |
monthFirst=new Array('M/d/y','M-d-y','M.d.y','MMM-d','M/d','M-d'); | |
dateFirst =new Array('d/M/y','d-M-y','d.M.y','d-MMM','d/M','d-M'); | |
var checkList=new Array('generalFormats',preferEuro?'dateFirst':'monthFirst',preferEuro?'monthFirst':'dateFirst'); | |
var d=null; | |
for (var i=0; i<checkList.length; i++) { | |
var l=window[checkList[i]]; | |
for (var j=0; j<l.length; j++) { | |
d=getDateFromFormat(val,l[j]); | |
if (d!=0) { return new Date(d); } | |
} | |
} | |
return null; | |
} | |
/* SOURCE FILE: PopupWindow.js */ | |
/* | |
PopupWindow.js | |
Author: Matt Kruse | |
Last modified: 02/16/04 | |
DESCRIPTION: This object allows you to easily and quickly popup a window | |
in a certain place. The window can either be a DIV or a separate browser | |
window. | |
COMPATIBILITY: Works with Netscape 4.x, 6.x, IE 5.x on Windows. Some small
positioning errors - usually with Window positioning - occur on the | |
Macintosh platform. Due to bugs in Netscape 4.x, populating the popup | |
window with <STYLE> tags may cause errors. | |
USAGE: | |
// Create an object for a WINDOW popup | |
var win = new PopupWindow(); | |
// Create an object for a DIV window using the DIV named 'mydiv' | |
var win = new PopupWindow('mydiv'); | |
// Set the window to automatically hide itself when the user clicks | |
// anywhere else on the page except the popup | |
win.autoHide(); | |
// Show the window relative to the anchor name passed in | |
win.showPopup(anchorname); | |
// Hide the popup | |
win.hidePopup(); | |
// Set the size of the popup window (only applies to WINDOW popups)
win.setSize(width,height); | |
// Populate the contents of the popup window that will be shown. If you | |
// change the contents while it is displayed, you will need to refresh() | |
win.populate(string); | |
// set the URL of the window, rather than populating its contents | |
// manually | |
win.setUrl("http://www.site.com/"); | |
// Refresh the contents of the popup | |
win.refresh(); | |
// Specify how many pixels to the right of the anchor the popup will appear | |
win.offsetX = 50; | |
// Specify how many pixels below the anchor the popup will appear | |
win.offsetY = 100; | |
NOTES: | |
1) Requires the functions in AnchorPosition.js | |
2) Your anchor tag MUST contain both NAME and ID attributes which are the | |
same. For example: | |
<A NAME="test" ID="test"> </A> | |
3) There must be at least a space between <A> </A> for IE5.5 to see the | |
anchor tag correctly. Do not do <A></A> with no space. | |
4) When a PopupWindow object is created, a handler for 'onmouseup' is | |
attached to any event handler you may have already defined. Do NOT define | |
an event handler for 'onmouseup' after you define a PopupWindow object or | |
the autoHide() will not work correctly. | |
*/ | |
// Set the position of the popup window based on the anchor | |
function PopupWindow_getXYPosition(anchorname) { | |
var coordinates; | |
if (this.type == "WINDOW") { | |
coordinates = getAnchorWindowPosition(anchorname); | |
} | |
else { | |
coordinates = getAnchorPosition(anchorname); | |
} | |
this.x = coordinates.x; | |
this.y = coordinates.y; | |
} | |
// Set width/height of DIV/popup window | |
function PopupWindow_setSize(width,height) { | |
this.width = width; | |
this.height = height; | |
} | |
// Fill the window with contents | |
function PopupWindow_populate(contents) { | |
this.contents = contents; | |
this.populated = false; | |
} | |
// Set the URL to go to | |
function PopupWindow_setUrl(url) { | |
this.url = url; | |
} | |
// Set the window popup properties | |
function PopupWindow_setWindowProperties(props) { | |
this.windowProperties = props; | |
} | |
// Refresh the displayed contents of the popup | |
function PopupWindow_refresh() { | |
if (this.divName != null) { | |
// refresh the DIV object | |
if (this.use_gebi) { | |
document.getElementById(this.divName).innerHTML = this.contents; | |
} | |
else if (this.use_css) { | |
document.all[this.divName].innerHTML = this.contents; | |
} | |
else if (this.use_layers) { | |
var d = document.layers[this.divName]; | |
d.document.open(); | |
d.document.writeln(this.contents); | |
d.document.close(); | |
} | |
} | |
else { | |
if (this.popupWindow != null && !this.popupWindow.closed) { | |
if (this.url!="") { | |
this.popupWindow.location.href=this.url; | |
} | |
else { | |
this.popupWindow.document.open(); | |
this.popupWindow.document.writeln(this.contents); | |
this.popupWindow.document.close(); | |
} | |
this.popupWindow.focus(); | |
} | |
} | |
} | |
// Position and show the popup, relative to an anchor object | |
function PopupWindow_showPopup(anchorname) { | |
this.getXYPosition(anchorname); | |
this.x += this.offsetX; | |
this.y += this.offsetY; | |
if (!this.populated && (this.contents != "")) { | |
this.populated = true; | |
this.refresh(); | |
} | |
if (this.divName != null) { | |
// Show the DIV object | |
if (this.use_gebi) { | |
document.getElementById(this.divName).style.left = this.x + "px"; | |
document.getElementById(this.divName).style.top = this.y + "px"; | |
document.getElementById(this.divName).style.visibility = "visible"; | |
} | |
else if (this.use_css) { | |
document.all[this.divName].style.left = this.x; | |
document.all[this.divName].style.top = this.y; | |
document.all[this.divName].style.visibility = "visible"; | |
} | |
else if (this.use_layers) { | |
document.layers[this.divName].left = this.x; | |
document.layers[this.divName].top = this.y; | |
document.layers[this.divName].visibility = "visible"; | |
} | |
} | |
else { | |
if (this.popupWindow == null || this.popupWindow.closed) { | |
// If the popup window will go off-screen, move it so it doesn't | |
if (this.x<0) { this.x=0; } | |
if (this.y<0) { this.y=0; } | |
if (screen && screen.availHeight) { | |
if ((this.y + this.height) > screen.availHeight) { | |
this.y = screen.availHeight - this.height; | |
} | |
} | |
if (screen && screen.availWidth) { | |
if ((this.x + this.width) > screen.availWidth) { | |
this.x = screen.availWidth - this.width; | |
} | |
} | |
var avoidAboutBlank = window.opera || ( document.layers && !navigator.mimeTypes['*'] ) || navigator.vendor == 'KDE' || ( document.childNodes && !document.all && !navigator.taintEnabled ); | |
this.popupWindow = window.open(avoidAboutBlank?"":"about:blank","window_"+anchorname,this.windowProperties+",width="+this.width+",height="+this.height+",screenX="+this.x+",left="+this.x+",screenY="+this.y+",top="+this.y+""); | |
} | |
this.refresh(); | |
} | |
} | |
// Hide the popup | |
function PopupWindow_hidePopup() { | |
if (this.divName != null) { | |
if (this.use_gebi) { | |
document.getElementById(this.divName).style.visibility = "hidden"; | |
} | |
else if (this.use_css) { | |
document.all[this.divName].style.visibility = "hidden"; | |
} | |
else if (this.use_layers) { | |
document.layers[this.divName].visibility = "hidden"; | |
} | |
} | |
else { | |
if (this.popupWindow && !this.popupWindow.closed) { | |
this.popupWindow.close(); | |
this.popupWindow = null; | |
} | |
} | |
} | |
// Pass an event and return whether or not it was the popup DIV that was clicked | |
function PopupWindow_isClicked(e) { | |
if (this.divName != null) { | |
if (this.use_layers) { | |
var clickX = e.pageX; | |
var clickY = e.pageY; | |
var t = document.layers[this.divName]; | |
if ((clickX > t.left) && (clickX < t.left+t.clip.width) && (clickY > t.top) && (clickY < t.top+t.clip.height)) { | |
return true; | |
} | |
else { return false; } | |
} | |
else if (document.all) { // Need to hard-code this to trap IE for error-handling | |
var t = window.event.srcElement; | |
while (t.parentElement != null) { | |
if (t.id==this.divName) { | |
return true; | |
} | |
t = t.parentElement; | |
} | |
return false; | |
} | |
else if (this.use_gebi && e) { | |
var t = e.originalTarget; | |
while (t.parentNode != null) { | |
if (t.id==this.divName) { | |
return true; | |
} | |
t = t.parentNode; | |
} | |
return false; | |
} | |
return false; | |
} | |
return false; | |
} | |
// Check an onMouseDown event to see if we should hide | |
function PopupWindow_hideIfNotClicked(e) { | |
if (this.autoHideEnabled && !this.isClicked(e)) { | |
this.hidePopup(); | |
} | |
} | |
// Call this to make the popup hide automatically when the mouse is clicked outside it | |
function PopupWindow_autoHide() { | |
this.autoHideEnabled = true; | |
} | |
// This global function checks all PopupWindow objects onmouseup to see if they should be hidden | |
function PopupWindow_hidePopupWindows(e) { | |
for (var i=0; i<popupWindowObjects.length; i++) { | |
if (popupWindowObjects[i] != null) { | |
var p = popupWindowObjects[i]; | |
p.hideIfNotClicked(e); | |
} | |
} | |
} | |
// Run this immediately to attach the event listener | |
function PopupWindow_attachListener() { | |
if (document.layers) { | |
document.captureEvents(Event.MOUSEUP); | |
} | |
window.popupWindowOldEventListener = document.onmouseup; | |
if (window.popupWindowOldEventListener != null) { | |
document.onmouseup = function(e) { window.popupWindowOldEventListener(e); PopupWindow_hidePopupWindows(e); }; | |
} | |
else { | |
document.onmouseup = PopupWindow_hidePopupWindows; | |
} | |
} | |
// CONSTRUCTOR for the PopupWindow object | |
// Pass it a DIV name to use a DHTML popup; otherwise it will default to a window popup | |
function PopupWindow() { | |
if (!window.popupWindowIndex) { window.popupWindowIndex = 0; } | |
if (!window.popupWindowObjects) { window.popupWindowObjects = new Array(); } | |
if (!window.listenerAttached) { | |
window.listenerAttached = true; | |
PopupWindow_attachListener(); | |
} | |
this.index = popupWindowIndex++; | |
popupWindowObjects[this.index] = this; | |
this.divName = null; | |
this.popupWindow = null; | |
this.width=0; | |
this.height=0; | |
this.populated = false; | |
this.visible = false; | |
this.autoHideEnabled = false; | |
this.contents = ""; | |
this.url=""; | |
this.windowProperties="toolbar=no,location=no,status=no,menubar=no,scrollbars=auto,resizable,alwaysRaised,dependent,titlebar=no"; | |
if (arguments.length>0) { | |
this.type="DIV"; | |
this.divName = arguments[0]; | |
} | |
else { | |
this.type="WINDOW"; | |
} | |
this.use_gebi = false; | |
this.use_css = false; | |
this.use_layers = false; | |
if (document.getElementById) { this.use_gebi = true; } | |
else if (document.all) { this.use_css = true; } | |
else if (document.layers) { this.use_layers = true; } | |
else { this.type = "WINDOW"; } | |
this.offsetX = 0; | |
this.offsetY = 0; | |
// Method mappings | |
this.getXYPosition = PopupWindow_getXYPosition; | |
this.populate = PopupWindow_populate; | |
this.setUrl = PopupWindow_setUrl; | |
this.setWindowProperties = PopupWindow_setWindowProperties; | |
this.refresh = PopupWindow_refresh; | |
this.showPopup = PopupWindow_showPopup; | |
this.hidePopup = PopupWindow_hidePopup; | |
this.setSize = PopupWindow_setSize; | |
this.isClicked = PopupWindow_isClicked; | |
this.autoHide = PopupWindow_autoHide; | |
this.hideIfNotClicked = PopupWindow_hideIfNotClicked; | |
} | |
/* SOURCE FILE: CalendarPopup.js */ | |
// HISTORY | |
// ------------------------------------------------------------------ | |
// Feb 7, 2005: Fixed CSS styles to use px units | |
// March 29, 2004: Added check in select() method for the form field | |
// being disabled. If it is, just return and don't do anything. | |
// March 24, 2004: Fixed bug - when month name and abbreviations were | |
// changed, date format still used original values. | |
// January 26, 2004: Added support for drop-down month and year | |
// navigation (Thanks to Chris Reid for the idea) | |
// September 22, 2003: Fixed a minor problem in YEAR calendar with | |
// CSS prefix. | |
// August 19, 2003: Renamed the function to get styles, and made it | |
// work correctly without an object reference | |
// August 18, 2003: Changed showYearNavigation and | |
// showYearNavigationInput to optionally take an argument of | |
// true or false | |
// July 31, 2003: Added text input option for year navigation. | |
// Added a per-calendar CSS prefix option to optionally use | |
// different styles for different calendars. | |
// July 29, 2003: Fixed bug causing the Today link to be clickable | |
// even though today falls in a disabled date range. | |
// Changed formatting to use pure CSS, allowing greater control | |
// over look-and-feel options. | |
// June 11, 2003: Fixed bug causing the Today link to be unselectable | |
// under certain cases when some days of week are disabled | |
// March 14, 2003: Added ability to disable individual dates or date | |
// ranges, display as light gray and strike-through | |
// March 14, 2003: Removed dependency on graypixel.gif and instead | |
// use table border coloring | |
// March 12, 2003: Modified showCalendar() function to allow optional | |
// start-date parameter | |
// March 11, 2003: Modified select() function to allow optional | |
// start-date parameter | |
/* | |
DESCRIPTION: This object implements a popup calendar to allow the user to | |
select a date, month, quarter, or year. | |
COMPATIBILITY: Works with Netscape 4.x, 6.x, IE 5.x on Windows. Some small | |
positioning errors - usually with Window positioning - occur on the | |
Macintosh platform. | |
The calendar can be modified to work for any location in the world by | |
changing which weekday is displayed as the first column, changing the month | |
names, and changing the column headers for each day. | |
USAGE: | |
// Create a new CalendarPopup object of type WINDOW | |
var cal = new CalendarPopup(); | |
// Create a new CalendarPopup object of type DIV using the DIV named 'mydiv' | |
var cal = new CalendarPopup('mydiv'); | |
// Easy method to link the popup calendar with an input box. | |
cal.select(inputObject, anchorname, dateFormat); | |
// Same method, but passing a default date other than the field's current value | |
cal.select(inputObject, anchorname, dateFormat, '01/02/2000'); | |
// This is an example call to the popup calendar from a link to populate an | |
// input box. Note that to use this, date.js must also be included!! | |
<A HREF="#" onClick="cal.select(document.forms[0].date,'anchorname','MM/dd/yyyy'); return false;">Select</A> | |
// Set the type of date select to be used. By default it is 'date'. | |
cal.setDisplayType(type); | |
// When a date, month, quarter, or year is clicked, a function is called and | |
// passed the details. You must write this function, and tell the calendar | |
// popup what the function name is. | |
// Function to be called for 'date' select receives y, m, d | |
cal.setReturnFunction(functionname); | |
// Function to be called for 'month' select receives y, m | |
cal.setReturnMonthFunction(functionname); | |
// Function to be called for 'quarter' select receives y, q | |
cal.setReturnQuarterFunction(functionname); | |
// Function to be called for 'year' select receives y | |
cal.setReturnYearFunction(functionname); | |
// Show the calendar relative to a given anchor | |
cal.showCalendar(anchorname); | |
// Hide the calendar. The calendar is set to autoHide automatically | |
cal.hideCalendar(); | |
// Set the month names to be used. Default are English month names | |
cal.setMonthNames("January","February","March",...); | |
// Set the month abbreviations to be used. Default are English month abbreviations | |
cal.setMonthAbbreviations("Jan","Feb","Mar",...); | |
// Show navigation for changing by the year, not just one month at a time | |
cal.showYearNavigation(); | |
// Show month and year dropdowns, for quicker selection of month and year | |
cal.showNavigationDropdowns(); | |
// Set the text to be used above each day column. The days start with | |
// Sunday regardless of the value of weekStartDay | |
cal.setDayHeaders("S","M","T",...); | |
// Set the day for the first column in the calendar grid. By default this | |
// is Sunday (0) but it may be changed to fit the conventions of other | |
// countries. | |
cal.setWeekStartDay(1); // week is Monday - Sunday | |
// Set the weekdays which should be disabled in the 'date' select popup. You can | |
// then allow someone to only select weekend dates, or Tuesdays, for example | |
cal.setDisabledWeekDays(0,1); // To disable selecting the 1st or 2nd days of the week | |
// Selectively disable individual days or date ranges. Disabled days will not | |
// be clickable, and show as strike-through text on current browsers. | |
// Date format is any format recognized by parseDate() in date.js | |
// Pass a single date to disable: | |
cal.addDisabledDates("2003-01-01"); | |
// Pass null as the first parameter to mean "anything up to and including" the | |
// passed date: | |
cal.addDisabledDates(null, "01/02/03"); | |
// Pass null as the second parameter to mean "including the passed date and | |
// anything after it": | |
cal.addDisabledDates("Jan 01, 2003", null); | |
// Pass two dates to disable all dates in between and including the two | |
cal.addDisabledDates("January 01, 2003", "Dec 31, 2003"); | |
// When the 'year' select is displayed, set the number of years back from the | |
// current year to start listing years. Default is 2. | |
// This is also used for year drop-down, to decide how many years +/- to display | |
cal.setYearSelectStartOffset(2); | |
// Text for the word "Today" appearing on the calendar | |
cal.setTodayText("Today"); | |
// The calendar uses CSS classes for formatting. If you want your calendar to | |
// have unique styles, you can set the prefix that will be added to all the | |
// classes in the output. | |
// For example, normal output may have this: | |
// <SPAN CLASS="cpTodayTextDisabled">Today<SPAN> | |
// But if you set the prefix like this: | |
cal.setCssPrefix("Test"); | |
// The output will then look like: | |
// <SPAN CLASS="TestcpTodayTextDisabled">Today<SPAN> | |
// And you can define that style somewhere in your page. | |
// When using Year navigation, you can make the year be an input box, so | |
// the user can manually change it and jump to any year | |
cal.showYearNavigationInput(); | |
// Set the calendar offset to be different than the default. By default it | |
// will appear just below and to the right of the anchorname. So if you have | |
// a text box where the date will go and an anchor immediately after the | |
// text box, the calendar will display immediately under the text box. | |
cal.offsetX = 20; | |
cal.offsetY = 20; | |
NOTES: | |
1) Requires the functions in AnchorPosition.js and PopupWindow.js | |
2) Your anchor tag MUST contain both NAME and ID attributes which are the | |
same. For example: | |
<A NAME="test" ID="test"> </A> | |
3) There must be at least a space between <A> </A> for IE5.5 to see the | |
anchor tag correctly. Do not do <A></A> with no space. | |
4) When a CalendarPopup object is created, a handler for 'onmouseup' is | |
attached to any event handler you may have already defined. Do NOT define | |
an event handler for 'onmouseup' after you define a CalendarPopup object | |
or the autoHide() will not work correctly. | |
5) The calendar popup display uses style sheets to make it look nice. | |
*/ | |
// Quick fix for FF3 | |
function CP_stop(e) { if (e && e.stopPropagation) { e.stopPropagation(); } } | |
// CONSTRUCTOR for the CalendarPopup Object | |
function CalendarPopup() { | |
var c; | |
if (arguments.length>0) { | |
c = new PopupWindow(arguments[0]); | |
} | |
else { | |
c = new PopupWindow(); | |
c.setSize(150,175); | |
} | |
c.offsetX = -152; | |
c.offsetY = 25; | |
c.autoHide(); | |
// Calendar-specific properties | |
c.monthNames = new Array("January","February","March","April","May","June","July","August","September","October","November","December"); | |
c.monthAbbreviations = new Array("Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"); | |
c.dayHeaders = new Array("S","M","T","W","T","F","S"); | |
c.returnFunction = "CP_tmpReturnFunction"; | |
c.returnMonthFunction = "CP_tmpReturnMonthFunction"; | |
c.returnQuarterFunction = "CP_tmpReturnQuarterFunction"; | |
c.returnYearFunction = "CP_tmpReturnYearFunction"; | |
c.weekStartDay = 0; | |
c.isShowYearNavigation = false; | |
c.displayType = "date"; | |
c.disabledWeekDays = new Object(); | |
c.disabledDatesExpression = ""; | |
c.yearSelectStartOffset = 2; | |
c.currentDate = null; | |
c.todayText="Today"; | |
c.cssPrefix=""; | |
c.isShowNavigationDropdowns=false; | |
c.isShowYearNavigationInput=false; | |
window.CP_calendarObject = null; | |
window.CP_targetInput = null; | |
window.CP_dateFormat = "MM/dd/yyyy"; | |
// Method mappings | |
c.copyMonthNamesToWindow = CP_copyMonthNamesToWindow; | |
c.setReturnFunction = CP_setReturnFunction; | |
c.setReturnMonthFunction = CP_setReturnMonthFunction; | |
c.setReturnQuarterFunction = CP_setReturnQuarterFunction; | |
c.setReturnYearFunction = CP_setReturnYearFunction; | |
c.setMonthNames = CP_setMonthNames; | |
c.setMonthAbbreviations = CP_setMonthAbbreviations; | |
c.setDayHeaders = CP_setDayHeaders; | |
c.setWeekStartDay = CP_setWeekStartDay; | |
c.setDisplayType = CP_setDisplayType; | |
c.setDisabledWeekDays = CP_setDisabledWeekDays; | |
c.addDisabledDates = CP_addDisabledDates; | |
c.setYearSelectStartOffset = CP_setYearSelectStartOffset; | |
c.setTodayText = CP_setTodayText; | |
c.showYearNavigation = CP_showYearNavigation; | |
c.showCalendar = CP_showCalendar; | |
c.hideCalendar = CP_hideCalendar; | |
c.getStyles = getCalendarStyles; | |
c.refreshCalendar = CP_refreshCalendar; | |
c.getCalendar = CP_getCalendar; | |
c.select = CP_select; | |
c.setCssPrefix = CP_setCssPrefix; | |
c.showNavigationDropdowns = CP_showNavigationDropdowns; | |
c.showYearNavigationInput = CP_showYearNavigationInput; | |
c.copyMonthNamesToWindow(); | |
// Return the object | |
return c; | |
} | |
function CP_copyMonthNamesToWindow() { | |
// Copy these values over to date.js | |
if (typeof(window.MONTH_NAMES)!="undefined" && window.MONTH_NAMES!=null) { | |
window.MONTH_NAMES = new Array(); | |
for (var i=0; i<this.monthNames.length; i++) { | |
window.MONTH_NAMES[window.MONTH_NAMES.length] = this.monthNames[i]; | |
} | |
for (var i=0; i<this.monthAbbreviations.length; i++) { | |
window.MONTH_NAMES[window.MONTH_NAMES.length] = this.monthAbbreviations[i]; | |
} | |
} | |
} | |
// Temporary default functions to be called when items clicked, so no error is thrown | |
function CP_tmpReturnFunction(y,m,d) { | |
if (window.CP_targetInput!=null) { | |
var dt = new Date(y,m-1,d,0,0,0); | |
if (window.CP_calendarObject!=null) { window.CP_calendarObject.copyMonthNamesToWindow(); } | |
window.CP_targetInput.value = formatDate(dt,window.CP_dateFormat); | |
} | |
else { | |
alert('Use setReturnFunction() to define which function will get the clicked results!'); | |
} | |
} | |
function CP_tmpReturnMonthFunction(y,m) { | |
alert('Use setReturnMonthFunction() to define which function will get the clicked results!\nYou clicked: year='+y+' , month='+m); | |
} | |
function CP_tmpReturnQuarterFunction(y,q) { | |
alert('Use setReturnQuarterFunction() to define which function will get the clicked results!\nYou clicked: year='+y+' , quarter='+q); | |
} | |
function CP_tmpReturnYearFunction(y) { | |
alert('Use setReturnYearFunction() to define which function will get the clicked results!\nYou clicked: year='+y); | |
} | |
// Set the name of the functions to call to get the clicked item | |
function CP_setReturnFunction(name) { this.returnFunction = name; } | |
function CP_setReturnMonthFunction(name) { this.returnMonthFunction = name; } | |
function CP_setReturnQuarterFunction(name) { this.returnQuarterFunction = name; } | |
function CP_setReturnYearFunction(name) { this.returnYearFunction = name; } | |
// Over-ride the built-in month names | |
function CP_setMonthNames() { | |
for (var i=0; i<arguments.length; i++) { this.monthNames[i] = arguments[i]; } | |
this.copyMonthNamesToWindow(); | |
} | |
// Over-ride the built-in month abbreviations | |
function CP_setMonthAbbreviations() { | |
for (var i=0; i<arguments.length; i++) { this.monthAbbreviations[i] = arguments[i]; } | |
this.copyMonthNamesToWindow(); | |
} | |
// Over-ride the built-in column headers for each day | |
function CP_setDayHeaders() { | |
for (var i=0; i<arguments.length; i++) { this.dayHeaders[i] = arguments[i]; } | |
} | |
// Set the day of the week (0-6) that the calendar display starts on | |
// This is for countries other than the US whose calendar displays start on Monday (1), for example | |
function CP_setWeekStartDay(day) { this.weekStartDay = day; } | |
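The weekStartDay value is used later by CP_getCalendar to decide how many trailing cells of the previous month pad the first row of the grid: it compares the weekday of the 1st of the displayed month with the configured start day. A minimal standalone sketch of that offset rule (firstColumnOffset is a hypothetical name, not part of this library):

```javascript
// Sketch of the grid-offset rule used inside CP_getCalendar: given the
// weekday of the 1st of the month (0=Sunday..6=Saturday) and the configured
// weekStartDay, return how many previous-month cells pad the first row.
function firstColumnOffset(weekday, weekStartDay) {
  return (weekday >= weekStartDay)
    ? weekday - weekStartDay
    : 7 - weekStartDay + weekday;
}
```

For example, a month whose 1st falls on Wednesday (3) with a Monday (1) week start gets 2 padding cells; the same month with a Sunday (0) start gets 3.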
// Show next/last year navigation links | |
function CP_showYearNavigation() { this.isShowYearNavigation = (arguments.length>0)?arguments[0]:true; } | |
// Which type of calendar to display | |
function CP_setDisplayType(type) { | |
if (type!="date"&&type!="week-end"&&type!="month"&&type!="quarter"&&type!="year") { alert("Invalid display type! Must be one of: date,week-end,month,quarter,year"); return false; } | |
this.displayType=type; | |
} | |
// How many years back to start by default for year display | |
function CP_setYearSelectStartOffset(num) { this.yearSelectStartOffset=num; } | |
// Set which weekdays should not be clickable | |
function CP_setDisabledWeekDays() { | |
this.disabledWeekDays = new Object(); | |
for (var i=0; i<arguments.length; i++) { this.disabledWeekDays[arguments[i]] = true; } | |
} | |
// Disable individual dates or ranges | |
// Builds an internal logical test which is run via eval() for efficiency | |
function CP_addDisabledDates(start, end) { | |
if (arguments.length==1) { end=start; } | |
if (start==null && end==null) { return; } | |
if (this.disabledDatesExpression!="") { this.disabledDatesExpression+= "||"; } | |
if (start!=null) { start = parseDate(start); start=""+start.getFullYear()+LZ(start.getMonth()+1)+LZ(start.getDate());} | |
if (end!=null) { end=parseDate(end); end=""+end.getFullYear()+LZ(end.getMonth()+1)+LZ(end.getDate());} | |
if (start==null) { this.disabledDatesExpression+="(ds<="+end+")"; } | |
else if (end ==null) { this.disabledDatesExpression+="(ds>="+start+")"; } | |
else { this.disabledDatesExpression+="(ds>="+start+"&&ds<="+end+")"; } | |
} | |
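CP_addDisabledDates above encodes each date as a yyyymmdd digit string and joins range clauses with `||`; CP_getCalendar later eval()s the expression with each cell's own yyyymmdd value bound to `ds`. A minimal self-contained sketch of the clause building (pad2 stands in for LZ() from date.js, and buildClause is a hypothetical name — the real method also accepts the formats understood by parseDate()):

```javascript
// Sketch of the disabled-dates encoding: dates become yyyymmdd strings so
// that simple numeric-style comparison against `ds` tests range membership.
function pad2(n) { return (n < 10 ? "0" : "") + n; }
function toYMD(d) {
  return "" + d.getFullYear() + pad2(d.getMonth() + 1) + pad2(d.getDate());
}
function buildClause(start, end) {
  if (start === null) { return "(ds<=" + toYMD(end) + ")"; }    // anything up to end
  if (end === null)   { return "(ds>=" + toYMD(start) + ")"; }  // anything from start on
  return "(ds>=" + toYMD(start) + "&&ds<=" + toYMD(end) + ")";  // inclusive range
}
```

With `ds` set to "20030615", the clause for Jan 1 through Dec 31, 2003 evaluates true, so that cell renders with the Disabled style.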
// Set the text to use for the "Today" link | |
function CP_setTodayText(text) { | |
this.todayText = text; | |
} | |
// Set the prefix to be added to all CSS classes when writing output | |
function CP_setCssPrefix(val) { | |
this.cssPrefix = val; | |
} | |
// Show the month and year navigation as dropdowns that can be manually changed | |
function CP_showNavigationDropdowns() { this.isShowNavigationDropdowns = (arguments.length>0)?arguments[0]:true; } | |
// Show the year navigation as an input box that can be manually changed | |
function CP_showYearNavigationInput() { this.isShowYearNavigationInput = (arguments.length>0)?arguments[0]:true; } | |
// Hide a calendar object | |
function CP_hideCalendar() { | |
if (arguments.length > 0) { window.popupWindowObjects[arguments[0]].hidePopup(); } | |
else { this.hidePopup(); } | |
} | |
// Refresh the contents of the calendar display | |
function CP_refreshCalendar(index) { | |
var calObject = window.popupWindowObjects[index]; | |
if (arguments.length>1) { | |
calObject.populate(calObject.getCalendar(arguments[1],arguments[2],arguments[3],arguments[4],arguments[5])); | |
} | |
else { | |
calObject.populate(calObject.getCalendar()); | |
} | |
calObject.refresh(); | |
} | |
// Populate the calendar and display it | |
function CP_showCalendar(anchorname) { | |
if (arguments.length>1) { | |
if (arguments[1]==null||arguments[1]=="") { | |
this.currentDate=new Date(); | |
} | |
else { | |
this.currentDate=new Date(parseDate(arguments[1])); | |
} | |
} | |
this.populate(this.getCalendar()); | |
this.showPopup(anchorname); | |
} | |
// Simple method to interface popup calendar with a text-entry box | |
function CP_select(inputobj, linkname, format) { | |
var selectedDate=(arguments.length>3)?arguments[3]:null; | |
if (!window.getDateFromFormat) { | |
alert("calendar.select: To use this method you must also include 'date.js' for date formatting"); | |
return; | |
} | |
if (this.displayType!="date"&&this.displayType!="week-end") { | |
alert("calendar.select: This function can only be used with displayType 'date' or 'week-end'"); | |
return; | |
} | |
if (inputobj.type!="text" && inputobj.type!="hidden" && inputobj.type!="textarea") { | |
alert("calendar.select: Input object passed is not a valid form input object"); | |
window.CP_targetInput=null; | |
return; | |
} | |
if (inputobj.disabled) { return; } // Can't use calendar input on disabled form input! | |
window.CP_targetInput = inputobj; | |
window.CP_calendarObject = this; | |
this.currentDate=null; | |
var time=0; | |
if (selectedDate!=null) { | |
time = getDateFromFormat(selectedDate,format); | |
} | |
else if (inputobj.value!="") { | |
time = getDateFromFormat(inputobj.value,format); | |
} | |
if (selectedDate!=null || inputobj.value!="") { | |
if (time==0) { this.currentDate=null; } | |
else { this.currentDate=new Date(time); } | |
} | |
window.CP_dateFormat = format; | |
this.showCalendar(linkname); | |
} | |
// Get style block needed to display the calendar correctly | |
function getCalendarStyles() { | |
var result = ""; | |
var p = ""; | |
if (this!=null && typeof(this.cssPrefix)!="undefined" && this.cssPrefix!=null && this.cssPrefix!="") { p=this.cssPrefix; } | |
result += "<STYLE>\n"; | |
result += "."+p+"cpYearNavigation,."+p+"cpMonthNavigation { background-color:#C0C0C0; text-align:center; vertical-align:middle; text-decoration:none; color:#000000; font-weight:bold; }\n"; | |
result += "."+p+"cpDayColumnHeader, ."+p+"cpYearNavigation,."+p+"cpMonthNavigation,."+p+"cpCurrentMonthDate,."+p+"cpCurrentMonthDateDisabled,."+p+"cpOtherMonthDate,."+p+"cpOtherMonthDateDisabled,."+p+"cpCurrentDate,."+p+"cpCurrentDateDisabled,."+p+"cpTodayText,."+p+"cpTodayTextDisabled,."+p+"cpText { font-family:arial; font-size:8pt; }\n"; | |
result += "TD."+p+"cpDayColumnHeader { text-align:right; border:solid thin #C0C0C0;border-width:0px 0px 1px 0px; }\n"; | |
result += "."+p+"cpCurrentMonthDate, ."+p+"cpOtherMonthDate, ."+p+"cpCurrentDate { text-align:right; text-decoration:none; }\n"; | |
result += "."+p+"cpCurrentMonthDateDisabled, ."+p+"cpOtherMonthDateDisabled, ."+p+"cpCurrentDateDisabled { color:#D0D0D0; text-align:right; text-decoration:line-through; }\n"; | |
result += "."+p+"cpCurrentMonthDate, ."+p+"cpCurrentDate { color:#000000; }\n"; | |
result += "."+p+"cpOtherMonthDate { color:#808080; }\n"; | |
result += "TD."+p+"cpCurrentDate { color:white; background-color: #C0C0C0; border-width:1px; border:solid thin #800000; }\n"; | |
result += "TD."+p+"cpCurrentDateDisabled { border-width:1px; border:solid thin #FFAAAA; }\n"; | |
result += "TD."+p+"cpTodayText, TD."+p+"cpTodayTextDisabled { border:solid thin #C0C0C0; border-width:1px 0px 0px 0px;}\n"; | |
result += "A."+p+"cpTodayText, SPAN."+p+"cpTodayTextDisabled { height:20px; }\n"; | |
result += "A."+p+"cpTodayText { color:black; }\n"; | |
result += "."+p+"cpTodayTextDisabled { color:#D0D0D0; }\n"; | |
result += "."+p+"cpBorder { border:solid thin #808080; }\n"; | |
result += "</STYLE>\n"; | |
return result; | |
} | |
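CP_getCalendar below sizes February with the Gregorian leap-year rule (divisible by 4, except century years not divisible by 400) applied to a 1-based days-in-month table. A minimal standalone sketch of that logic (isLeapYear and daysInMonth are hypothetical helper names, not part of this library):

```javascript
// Sketch of the days-in-month computation from CP_getCalendar:
// month is 1-based; February gets 29 days in Gregorian leap years.
function isLeapYear(year) {
  return (year % 4 === 0 && year % 100 !== 0) || (year % 400 === 0);
}
function daysInMonth(month, year) {
  var days = [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31];
  return (month === 2 && isLeapYear(year)) ? 29 : days[month];
}
```

So 2000 is a leap year (divisible by 400) while 1900 is not (a century not divisible by 400).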
// Return a string containing all the calendar code to be displayed | |
function CP_getCalendar() { | |
var now = new Date(); | |
// Reference to window | |
if (this.type == "WINDOW") { var windowref = "window.opener."; } | |
else { var windowref = ""; } | |
var result = ""; | |
// If POPUP, write entire HTML document | |
if (this.type == "WINDOW") { | |
result += "<HTML><HEAD><TITLE>Calendar</TITLE>"+this.getStyles()+"</HEAD><BODY MARGINWIDTH=0 MARGINHEIGHT=0 TOPMARGIN=0 RIGHTMARGIN=0 LEFTMARGIN=0>\n"; | |
result += '<CENTER><TABLE WIDTH=100% BORDER=0 BORDERWIDTH=0 CELLSPACING=0 CELLPADDING=0>\n'; | |
} | |
else { | |
result += '<TABLE CLASS="'+this.cssPrefix+'cpBorder" WIDTH=144 BORDER=1 BORDERWIDTH=1 CELLSPACING=0 CELLPADDING=1>\n'; | |
result += '<TR><TD ALIGN=CENTER>\n'; | |
result += '<CENTER>\n'; | |
} | |
// Code for DATE display (default) | |
// ------------------------------- | |
if (this.displayType=="date" || this.displayType=="week-end") { | |
if (this.currentDate==null) { this.currentDate = now; } | |
if (arguments.length > 0) { var month = arguments[0]; } | |
else { var month = this.currentDate.getMonth()+1; } | |
if (arguments.length > 1 && arguments[1]>0 && arguments[1]-0==arguments[1]) { var year = arguments[1]; } | |
else { var year = this.currentDate.getFullYear(); } | |
var daysinmonth= new Array(0,31,28,31,30,31,30,31,31,30,31,30,31); | |
if ( ( (year%4 == 0)&&(year%100 != 0) ) || (year%400 == 0) ) { | |
daysinmonth[2] = 29; | |
} | |
var current_month = new Date(year,month-1,1); | |
var display_year = year; | |
var display_month = month; | |
var display_date = 1; | |
var weekday= current_month.getDay(); | |
var offset = 0; | |
offset = (weekday >= this.weekStartDay) ? weekday-this.weekStartDay : 7-this.weekStartDay+weekday ; | |
if (offset > 0) { | |
display_month--; | |
if (display_month < 1) { display_month = 12; display_year--; } | |
display_date = daysinmonth[display_month]-offset+1; | |
} | |
var next_month = month+1; | |
var next_month_year = year; | |
if (next_month > 12) { next_month=1; next_month_year++; } | |
var last_month = month-1; | |
var last_month_year = year; | |
if (last_month < 1) { last_month=12; last_month_year--; } | |
var date_class; | |
if (this.type!="WINDOW") { | |
result += "<TABLE WIDTH=144 BORDER=0 BORDERWIDTH=0 CELLSPACING=0 CELLPADDING=0>"; | |
} | |
result += '<TR>\n'; | |
var refresh = windowref+'CP_refreshCalendar'; | |
var refreshLink = 'javascript:' + refresh; | |
if (this.isShowNavigationDropdowns) { | |
result += '<TD CLASS="'+this.cssPrefix+'cpMonthNavigation" WIDTH="78" COLSPAN="3"><select CLASS="'+this.cssPrefix+'cpMonthNavigation" name="cpMonth" onmouseup="CP_stop(event)" onChange="'+refresh+'('+this.index+',this.options[this.selectedIndex].value-0,'+(year-0)+');">'; | |
for( var monthCounter=1; monthCounter<=12; monthCounter++ ) { | |
var selected = (monthCounter==month) ? 'SELECTED' : ''; | |
result += '<option value="'+monthCounter+'" '+selected+'>'+this.monthNames[monthCounter-1]+'</option>'; | |
} | |
result += '</select></TD>'; | |
result += '<TD CLASS="'+this.cssPrefix+'cpMonthNavigation" WIDTH="10"> </TD>'; | |
result += '<TD CLASS="'+this.cssPrefix+'cpYearNavigation" WIDTH="56" COLSPAN="3"><select CLASS="'+this.cssPrefix+'cpYearNavigation" name="cpYear" onmouseup="CP_stop(event)" onChange="'+refresh+'('+this.index+','+month+',this.options[this.selectedIndex].value-0);">'; | |
for( var yearCounter=year-this.yearSelectStartOffset; yearCounter<=year+this.yearSelectStartOffset; yearCounter++ ) { | |
var selected = (yearCounter==year) ? 'SELECTED' : ''; | |
result += '<option value="'+yearCounter+'" '+selected+'>'+yearCounter+'</option>'; | |
} | |
result += '</select></TD>'; | |
} | |
else { | |
if (this.isShowYearNavigation) { | |
result += '<TD CLASS="'+this.cssPrefix+'cpMonthNavigation" WIDTH="10"><A CLASS="'+this.cssPrefix+'cpMonthNavigation" HREF="'+refreshLink+'('+this.index+','+last_month+','+last_month_year+');"><</A></TD>'; | |
result += '<TD CLASS="'+this.cssPrefix+'cpMonthNavigation" WIDTH="58"><SPAN CLASS="'+this.cssPrefix+'cpMonthNavigation">'+this.monthNames[month-1]+'</SPAN></TD>'; | |
result += '<TD CLASS="'+this.cssPrefix+'cpMonthNavigation" WIDTH="10"><A CLASS="'+this.cssPrefix+'cpMonthNavigation" HREF="'+refreshLink+'('+this.index+','+next_month+','+next_month_year+');">></A></TD>'; | |
result += '<TD CLASS="'+this.cssPrefix+'cpMonthNavigation" WIDTH="10"> </TD>'; | |
result += '<TD CLASS="'+this.cssPrefix+'cpYearNavigation" WIDTH="10"><A CLASS="'+this.cssPrefix+'cpYearNavigation" HREF="'+refreshLink+'('+this.index+','+month+','+(year-1)+');"><</A></TD>'; | |
if (this.isShowYearNavigationInput) { | |
result += '<TD CLASS="'+this.cssPrefix+'cpYearNavigation" WIDTH="36"><INPUT NAME="cpYear" CLASS="'+this.cssPrefix+'cpYearNavigation" SIZE="4" MAXLENGTH="4" VALUE="'+year+'" onBlur="'+refresh+'('+this.index+','+month+',this.value-0);"></TD>'; | |
} | |
else { | |
result += '<TD CLASS="'+this.cssPrefix+'cpYearNavigation" WIDTH="36"><SPAN CLASS="'+this.cssPrefix+'cpYearNavigation">'+year+'</SPAN></TD>'; | |
} | |
result += '<TD CLASS="'+this.cssPrefix+'cpYearNavigation" WIDTH="10"><A CLASS="'+this.cssPrefix+'cpYearNavigation" HREF="'+refreshLink+'('+this.index+','+month+','+(year+1)+');">></A></TD>'; | |
} | |
else { | |
result += '<TD CLASS="'+this.cssPrefix+'cpMonthNavigation" WIDTH="22"><A CLASS="'+this.cssPrefix+'cpMonthNavigation" HREF="'+refreshLink+'('+this.index+','+last_month+','+last_month_year+');"><<</A></TD>\n'; | |
result += '<TD CLASS="'+this.cssPrefix+'cpMonthNavigation" WIDTH="100"><SPAN CLASS="'+this.cssPrefix+'cpMonthNavigation">'+this.monthNames[month-1]+' '+year+'</SPAN></TD>\n'; | |
result += '<TD CLASS="'+this.cssPrefix+'cpMonthNavigation" WIDTH="22"><A CLASS="'+this.cssPrefix+'cpMonthNavigation" HREF="'+refreshLink+'('+this.index+','+next_month+','+next_month_year+');">>></A></TD>\n'; | |
} | |
} | |
result += '</TR></TABLE>\n'; | |
result += '<TABLE WIDTH=120 BORDER=0 CELLSPACING=0 CELLPADDING=1 ALIGN=CENTER>\n'; | |
result += '<TR>\n'; | |
for (var j=0; j<7; j++) { | |
result += '<TD CLASS="'+this.cssPrefix+'cpDayColumnHeader" WIDTH="14%"><SPAN CLASS="'+this.cssPrefix+'cpDayColumnHeader">'+this.dayHeaders[(this.weekStartDay+j)%7]+'</TD>\n'; | |
} | |
result += '</TR>\n'; | |
for (var row=1; row<=6; row++) { | |
result += '<TR>\n'; | |
for (var col=1; col<=7; col++) { | |
var disabled=false; | |
if (this.disabledDatesExpression!="") { | |
var ds=""+display_year+LZ(display_month)+LZ(display_date); | |
eval("disabled=("+this.disabledDatesExpression+")"); | |
} | |
var dateClass = ""; | |
if ((display_month == this.currentDate.getMonth()+1) && (display_date==this.currentDate.getDate()) && (display_year==this.currentDate.getFullYear())) { | |
dateClass = "cpCurrentDate"; | |
} | |
else if (display_month == month) { | |
dateClass = "cpCurrentMonthDate"; | |
} | |
else { | |
dateClass = "cpOtherMonthDate"; | |
} | |
if (disabled || this.disabledWeekDays[col-1]) { | |
result += ' <TD CLASS="'+this.cssPrefix+dateClass+'"><SPAN CLASS="'+this.cssPrefix+dateClass+'Disabled">'+display_date+'</SPAN></TD>\n'; | |
} | |
else { | |
var selected_date = display_date; | |
var selected_month = display_month; | |
var selected_year = display_year; | |
if (this.displayType=="week-end") { | |
var d = new Date(selected_year,selected_month-1,selected_date,0,0,0,0); | |
d.setDate(d.getDate() + (7-col)); | |
selected_year = d.getYear(); | |
if (selected_year < 1000) { selected_year += 1900; } | |
selected_month = d.getMonth()+1; | |
selected_date = d.getDate(); | |
} | |
result += ' <TD CLASS="'+this.cssPrefix+dateClass+'"><A HREF="javascript:'+windowref+this.returnFunction+'('+selected_year+','+selected_month+','+selected_date+');'+windowref+'CP_hideCalendar(\''+this.index+'\');" CLASS="'+this.cssPrefix+dateClass+'">'+display_date+'</A></TD>\n'; | |
} | |
display_date++; | |
if (display_date > daysinmonth[display_month]) { | |
display_date=1; | |
display_month++; | |
} | |
if (display_month > 12) { | |
display_month=1; | |
display_year++; | |
} | |
} | |
result += '</TR>'; | |
} | |
var current_weekday = now.getDay() - this.weekStartDay; | |
if (current_weekday < 0) { | |
current_weekday += 7; | |
} | |
result += '<TR>\n'; | |
result += ' <TD COLSPAN=7 ALIGN=CENTER CLASS="'+this.cssPrefix+'cpTodayText">\n'; | |
if (this.disabledDatesExpression!="") { | |
var ds=""+now.getFullYear()+LZ(now.getMonth()+1)+LZ(now.getDate()); | |
eval("disabled=("+this.disabledDatesExpression+")"); | |
} | |
if (disabled || this.disabledWeekDays[current_weekday]) {
result += ' <SPAN CLASS="'+this.cssPrefix+'cpTodayTextDisabled">'+this.todayText+'</SPAN>\n'; | |
} | |
else { | |
result += ' <A CLASS="'+this.cssPrefix+'cpTodayText" HREF="javascript:'+windowref+this.returnFunction+'(\''+now.getFullYear()+'\',\''+(now.getMonth()+1)+'\',\''+now.getDate()+'\');'+windowref+'CP_hideCalendar(\''+this.index+'\');">'+this.todayText+'</A>\n'; | |
} | |
result += ' <BR>\n'; | |
result += ' </TD></TR></TABLE></CENTER></TD></TR></TABLE>\n'; | |
} | |
// Code common for MONTH, QUARTER, YEAR | |
// ------------------------------------ | |
if (this.displayType=="month" || this.displayType=="quarter" || this.displayType=="year") { | |
if (arguments.length > 0) { var year = arguments[0]; } | |
else { | |
if (this.displayType=="year") { var year = now.getFullYear()-this.yearSelectStartOffset; } | |
else { var year = now.getFullYear(); } | |
} | |
if (this.displayType!="year" && this.isShowYearNavigation) { | |
result += "<TABLE WIDTH=144 BORDER=0 BORDERWIDTH=0 CELLSPACING=0 CELLPADDING=0>"; | |
result += '<TR>\n'; | |
result += ' <TD CLASS="'+this.cssPrefix+'cpYearNavigation" WIDTH="22"><A CLASS="'+this.cssPrefix+'cpYearNavigation" HREF="javascript:'+windowref+'CP_refreshCalendar('+this.index+','+(year-1)+');"><<</A></TD>\n'; | |
result += ' <TD CLASS="'+this.cssPrefix+'cpYearNavigation" WIDTH="100">'+year+'</TD>\n'; | |
result += ' <TD CLASS="'+this.cssPrefix+'cpYearNavigation" WIDTH="22"><A CLASS="'+this.cssPrefix+'cpYearNavigation" HREF="javascript:'+windowref+'CP_refreshCalendar('+this.index+','+(year+1)+');">>></A></TD>\n'; | |
result += '</TR></TABLE>\n'; | |
} | |
} | |
// Code for MONTH display | |
// ---------------------- | |
if (this.displayType=="month") { | |
// If POPUP, write entire HTML document | |
result += '<TABLE WIDTH=120 BORDER=0 CELLSPACING=1 CELLPADDING=0 ALIGN=CENTER>\n'; | |
for (var i=0; i<4; i++) { | |
result += '<TR>'; | |
for (var j=0; j<3; j++) { | |
var monthindex = ((i*3)+j); | |
result += '<TD WIDTH=33% ALIGN=CENTER><A CLASS="'+this.cssPrefix+'cpText" HREF="javascript:'+windowref+this.returnMonthFunction+'('+year+','+(monthindex+1)+');'+windowref+'CP_hideCalendar(\''+this.index+'\');" CLASS="'+date_class+'">'+this.monthAbbreviations[monthindex]+'</A></TD>'; | |
} | |
result += '</TR>'; | |
} | |
result += '</TABLE></CENTER></TD></TR></TABLE>\n'; | |
} | |
// Code for QUARTER display | |
// ------------------------ | |
if (this.displayType=="quarter") { | |
result += '<BR><TABLE WIDTH=120 BORDER=1 CELLSPACING=0 CELLPADDING=0 ALIGN=CENTER>\n'; | |
for (var i=0; i<2; i++) { | |
result += '<TR>'; | |
for (var j=0; j<2; j++) { | |
var quarter = ((i*2)+j+1); | |
result += '<TD WIDTH=50% ALIGN=CENTER><BR><A CLASS="'+this.cssPrefix+'cpText" HREF="javascript:'+windowref+this.returnQuarterFunction+'('+year+','+quarter+');'+windowref+'CP_hideCalendar(\''+this.index+'\');" CLASS="'+date_class+'">Q'+quarter+'</A><BR><BR></TD>'; | |
} | |
result += '</TR>'; | |
} | |
result += '</TABLE></CENTER></TD></TR></TABLE>\n'; | |
} | |
// Code for YEAR display | |
// --------------------- | |
if (this.displayType=="year") { | |
var yearColumnSize = 4; | |
result += "<TABLE WIDTH=144 BORDER=0 BORDERWIDTH=0 CELLSPACING=0 CELLPADDING=0>"; | |
result += '<TR>\n'; | |
result += ' <TD CLASS="'+this.cssPrefix+'cpYearNavigation" WIDTH="50%"><A CLASS="'+this.cssPrefix+'cpYearNavigation" HREF="javascript:'+windowref+'CP_refreshCalendar('+this.index+','+(year-(yearColumnSize*2))+');"><<</A></TD>\n'; | |
result += ' <TD CLASS="'+this.cssPrefix+'cpYearNavigation" WIDTH="50%"><A CLASS="'+this.cssPrefix+'cpYearNavigation" HREF="javascript:'+windowref+'CP_refreshCalendar('+this.index+','+(year+(yearColumnSize*2))+');">>></A></TD>\n'; | |
result += '</TR></TABLE>\n'; | |
result += '<TABLE WIDTH=120 BORDER=0 CELLSPACING=1 CELLPADDING=0 ALIGN=CENTER>\n'; | |
for (var i=0; i<yearColumnSize; i++) { | |
for (var j=0; j<2; j++) { | |
var currentyear = year+(j*yearColumnSize)+i; | |
result += '<TD WIDTH=50% ALIGN=CENTER><A CLASS="'+this.cssPrefix+'cpText" HREF="javascript:'+windowref+this.returnYearFunction+'('+currentyear+');'+windowref+'CP_hideCalendar(\''+this.index+'\');" CLASS="'+date_class+'">'+currentyear+'</A></TD>'; | |
} | |
result += '</TR>'; | |
} | |
result += '</TABLE></CENTER></TD></TR></TABLE>\n'; | |
} | |
// Common | |
if (this.type == "WINDOW") { | |
result += "</BODY></HTML>\n"; | |
} | |
return result; | |
} | |
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" | |
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> | |
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:v="urn:schemas-microsoft-com:vml"> | |
<head> | |
<meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7"/> | |
<meta http-equiv="content-type" content="text/html; charset=utf-8"/> | |
<title>[agency]</title> | |
<link href="file/style.css" rel="stylesheet" type="text/css" /> | |
<style type="text/css"> | |
v\:* { | |
behavior:url(#default#VML); | |
} | |
</style> | |
<script src="http://[host]/maps?file=api&v=2&key=[key]" type="text/javascript"></script> | |
<script src="/file/labeled_marker.js" type="text/javascript"></script> | |
<script src="/file/calendarpopup.js" type="text/javascript"></script> | |
<script language="VBScript" src="/file/svgcheck.vbs"></script> | |
<script type="text/javascript"> | |
//<![CDATA[ | |
var map; | |
// Set to true when debugging for log statements about HTTP requests. | |
var log = false; | |
var twelveHourTime = false; // set to true to see AM/PM | |
var selectedRoute = null; | |
var forbid_editing = [forbid_editing]; | |
function load() { | |
if (GBrowserIsCompatible()) { | |
sizeRouteList(); | |
var map_dom = document.getElementById("map"); | |
map = new GMap2(map_dom); | |
map.addControl(new GLargeMapControl()); | |
map.addControl(new GMapTypeControl()); | |
map.addControl(new GOverviewMapControl()); | |
map.enableScrollWheelZoom(); | |
var bb = new GLatLngBounds(new GLatLng([min_lat], [min_lon]),new GLatLng([max_lat], [max_lon])); | |
map.setCenter(bb.getCenter(), map.getBoundsZoomLevel(bb)); | |
map.enableDoubleClickZoom(); | |
initIcons(); | |
GEvent.addListener(map, "moveend", callbackMoveEnd); | |
GEvent.addListener(map, "zoomend", callbackZoomEnd); | |
callbackMoveEnd(); // Pretend we just moved to current center | |
fetchRoutes(); | |
} | |
} | |
function callbackZoomEnd() { | |
} | |
function callbackMoveEnd() { | |
// Map moved, search for stops near the center | |
fetchStopsInBounds(map.getBounds()); | |
} | |
/** | |
* Fetch a sample of stops in the bounding box. | |
*/ | |
function fetchStopsInBounds(bounds) { | |
url = "/json/boundboxstops?n=" + bounds.getNorthEast().lat() | |
+ "&e=" + bounds.getNorthEast().lng() | |
+ "&s=" + bounds.getSouthWest().lat() | |
+ "&w=" + bounds.getSouthWest().lng() | |
+ "&limit=50"; | |
if (log) | |
GLog.writeUrl(url); | |
GDownloadUrl(url, callbackDisplayStopsBackground); | |
} | |
/** | |
* Displays stops returned by the server on the map. Expected to be called | |
* when GDownloadUrl finishes. | |
* | |
* @param {String} data JSON encoded list of list, each | |
* containing a row of stops.txt | |
* @param {Number} responseCode Response code from server | |
*/ | |
function callbackDisplayStops(data, responseCode) { | |
if (responseCode != 200) { | |
return; | |
} | |
clearMap(); | |
var stops = eval(data); | |
if (stops.length == 1) { | |
var marker = addStopMarkerFromList(stops[0], true); | |
fetchStopInfoWindow(marker); | |
} else { | |
for (var i=0; i<stops.length; ++i) { | |
addStopMarkerFromList(stops[i], true); | |
} | |
} | |
} | |
function stopTextSearchSubmit() { | |
var text = document.getElementById("stopTextSearchInput").value; | |
var url = "/json/stopsearch?q=" + text; // TODO URI escape | |
if (log) | |
GLog.writeUrl(url); | |
GDownloadUrl(url, callbackDisplayStops); | |
} | |
function tripTextSearchSubmit() { | |
var text = document.getElementById("tripTextSearchInput").value; | |
selectTrip(text); | |
} | |
/** | |
* Add stops markers to the map and remove stops no longer in the | |
* background. | |
*/ | |
function callbackDisplayStopsBackground(data, responseCode) { | |
if (responseCode != 200) { | |
return; | |
} | |
var stops = eval(data); | |
// Make a list of all background markers | |
var oldStopMarkers = {}; | |
for (var stopId in stopMarkersBackground) { | |
oldStopMarkers[stopId] = 1; | |
} | |
// Add new markers to the map and remove from oldStopMarkers | |
for (var i=0; i<stops.length; ++i) { | |
var marker = addStopMarkerFromList(stops[i], false); | |
if (oldStopMarkers[marker.stopId]) { | |
delete oldStopMarkers[marker.stopId]; | |
} | |
} | |
// Delete all markers that remain in oldStopMarkers | |
for (var stopId in oldStopMarkers) { | |
GEvent.removeListener(stopMarkersBackground[stopId].clickListener); | |
map.removeOverlay(stopMarkersBackground[stopId]); | |
delete stopMarkersBackground[stopId];
} | |
} | |
/** | |
* Remove all overlays from the map | |
*/ | |
function clearMap() { | |
boundsOfPolyLine = null; | |
for (var stopId in stopMarkersSelected) { | |
GEvent.removeListener(stopMarkersSelected[stopId].clickListener); | |
} | |
for (var stopId in stopMarkersBackground) { | |
GEvent.removeListener(stopMarkersBackground[stopId].clickListener); | |
} | |
stopMarkersSelected = {}; | |
stopMarkersBackground = {}; | |
map.clearOverlays(); | |
} | |
/** | |
* Return a new GIcon used for stops | |
*/ | |
function makeStopIcon() { | |
var icon = new GIcon(); | |
icon.iconSize = new GSize(12, 20); | |
icon.shadowSize = new GSize(22, 20); | |
icon.iconAnchor = new GPoint(6, 20); | |
icon.infoWindowAnchor = new GPoint(5, 1); | |
return icon; | |
} | |
/** | |
* Initialize icons. Call once during load. | |
*/ | |
function initIcons() { | |
iconSelected = makeStopIcon(); | |
iconSelected.image = "/file/mm_20_yellow.png"; | |
iconSelected.shadow = "/file/mm_20_shadow.png"; | |
iconBackground = makeStopIcon(); | |
iconBackground.image = "/file/mm_20_blue_trans.png"; | |
iconBackground.shadow = "/file/mm_20_shadow_trans.png"; | |
iconBackgroundStation = makeStopIcon(); | |
iconBackgroundStation.image = "/file/mm_20_red_trans.png"; | |
iconBackgroundStation.shadow = "/file/mm_20_shadow_trans.png"; | |
} | |
var iconSelected; | |
var iconBackground; | |
var iconBackgroundStation; | |
// Map from stopId to GMarker object for stops selected because they are | |
// part of a trip, etc | |
var stopMarkersSelected = {}; | |
// Map from stopId to GMarker object for stops found by the background | |
// passive search | |
var stopMarkersBackground = {}; | |
/** | |
* Add a stop to the map, given a row from stops.txt. | |
*/ | |
function addStopMarkerFromList(list, selected, text) { | |
return addStopMarker(list[0], list[1], list[2], list[3], list[4], selected, text); | |
} | |
/** | |
* Add a stop to the map, returning the new marker | |
*/ | |
function addStopMarker(stopId, stopName, stopLat, stopLon, locationType, selected, text) { | |
if (stopMarkersSelected[stopId]) { | |
// stop was selected | |
var marker = stopMarkersSelected[stopId]; | |
if (text) { | |
var oldText = marker.getText();
if (oldText) { | |
oldText = oldText + "<br>"; | |
} | |
marker.setText(oldText + text); | |
} | |
return marker; | |
} | |
if (stopMarkersBackground[stopId]) { | |
// Stop was in the background. Either delete it from the background or | |
// leave it where it is. | |
if (selected) { | |
map.removeOverlay(stopMarkersBackground[stopId]); | |
delete stopMarkersBackground[stopId]; | |
} else { | |
return stopMarkersBackground[stopId]; | |
} | |
} | |
var icon; | |
if (selected) { | |
icon = iconSelected; | |
} else if (locationType == 1) { | |
icon = iconBackgroundStation;
} else { | |
icon = iconBackground; | |
} | |
var ll = new GLatLng(stopLat,stopLon); | |
var marker; | |
if (selected || text) { | |
if (!text) { | |
text = ""; // Make sure every selected icon has a text box, even if empty | |
} | |
var markerOpts = new Object(); | |
markerOpts.icon = icon; | |
markerOpts.labelText = text; | |
markerOpts.labelClass = "tooltip"; | |
markerOpts.labelOffset = new GSize(6, -20); | |
marker = new LabeledMarker(ll, markerOpts); | |
} else { | |
marker = new GMarker(ll, {icon: icon, draggable: !forbid_editing}); | |
} | |
marker.stopName = stopName; | |
marker.stopId = stopId; | |
if (selected) { | |
stopMarkersSelected[stopId] = marker; | |
} else { | |
stopMarkersBackground[stopId] = marker; | |
} | |
map.addOverlay(marker); | |
marker.clickListener = GEvent.addListener(marker, "click", function() {fetchStopInfoWindow(marker);}); | |
GEvent.addListener(marker, "dragend", function() { | |
document.getElementById("edit").style.visibility = "visible"; | |
document.getElementById("edit_status").innerHTML = "updating...";
changeStopLocation(marker); | |
}); | |
return marker; | |
} | |
/** | |
* Sends new location of a stop to server. | |
*/ | |
function changeStopLocation(marker) { | |
var url = "/json/setstoplocation?id=" + | |
encodeURIComponent(marker.stopId) + | |
"&lat=" + encodeURIComponent(marker.getLatLng().lat()) + | |
"&lng=" + encodeURIComponent(marker.getLatLng().lng()); | |
GDownloadUrl(url, function(data, responseCode) { | |
document.getElementById("edit_status").innerHTML = unescape(data); | |
} ); | |
if (log) | |
GLog.writeUrl(url); | |
} | |
/** | |
* Saves the current state of the data file opened at server side to file. | |
*/ | |
function saveData() { | |
var url = "/json/savedata"; | |
GDownloadUrl(url, function(data, responseCode) { | |
document.getElementById("edit_status").innerHTML = data;} ); | |
if (log) | |
GLog.writeUrl(url); | |
} | |
/** | |
* Fetch the next departing trips from the stop for display in an info | |
* window. | |
*/ | |
function fetchStopInfoWindow(marker) { | |
var url = "/json/stoptrips?stop=" + encodeURIComponent(marker.stopId) + "&time=" + parseTimeInput() + "&date=" + parseDateInput(); | |
GDownloadUrl(url, function(data, responseCode) { | |
callbackDisplayStopInfoWindow(marker, data, responseCode); } ); | |
if (log) | |
GLog.writeUrl(url); | |
} | |
function callbackDisplayStopInfoWindow(marker, data, responseCode) { | |
if (responseCode != 200) { | |
return; | |
} | |
var timeTrips = eval(data); | |
var html = "<b>" + marker.stopName + "</b> (" + marker.stopId + ")<br>"; | |
var latLng = marker.getLatLng(); | |
html = html + "(" + latLng.lat() + ", " + latLng.lng() + ")<br>"; | |
html = html + "<table><tr><th>service_id<th>time<th>name</tr>"; | |
for (var i=0; i < timeTrips.length; ++i) { | |
var time = timeTrips[i][0]; | |
var tripid = timeTrips[i][1][0]; | |
var tripname = timeTrips[i][1][1]; | |
var service_id = timeTrips[i][1][2]; | |
var timepoint = timeTrips[i][2]; | |
html = html + "<tr onClick='map.closeInfoWindow();selectTrip(\"" + | |
tripid + "\")'>" + | |
"<td>" + service_id + | |
"<td align='right'>" + (timepoint ? "" : "~") + | |
formatTime(time) + "<td>" + tripname + "</tr>"; | |
} | |
html = html + "</table>"; | |
marker.openInfoWindowHtml(html); | |
} | |
function leadingZero(digit) { | |
if (digit < 10) | |
return "0" + digit; | |
else | |
return "" + digit; | |
} | |
function formatTime(secSinceMidnight) { | |
var hours = Math.floor(secSinceMidnight / 3600); | |
var suffix = ""; | |
if (twelveHourTime) { | |
suffix = (hours >= 12) ? "p" : "a"; | |
suffix += (hours >= 24) ? " next day" : ""; | |
hours = hours % 12; | |
if (hours == 0) | |
hours = 12; | |
} | |
var minutes = Math.floor(secSinceMidnight / 60) % 60; | |
var seconds = secSinceMidnight % 60; | |
if (seconds == 0) { | |
return hours + ":" + leadingZero(minutes) + suffix; | |
} else { | |
return hours + ":" + leadingZero(minutes) + ":" + leadingZero(seconds) + suffix; | |
} | |
} | |
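// For example, with twelveHourTime left false as set above:
//   formatTime(3660)  -> "1:01"
//   formatTime(3661)  -> "1:01:01"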
function parseTimeInput() { | |
var text = document.getElementById("timeInput").value; | |
var m = text.match(/([012]?\d):([012345]?\d)(:([012345]?\d))?/); | |
if (m) { | |
var seconds = parseInt(m[1], 10) * 3600; | |
seconds += parseInt(m[2], 10) * 60; | |
if (m[4]) { | |
seconds += parseInt(m[4], 10);
} | |
return seconds; | |
} else { | |
if (log) | |
GLog.write("Couldn't match " + text + " to time"); | |
return ""; | |
} | |
} | |
function parseDateInput() { | |
var text = document.getElementById("startDateInput").value; | |
var m = text.match(/(19|20)\d\d(0[1-9]|1[012])(0[1-9]|[12][0-9]|3[01])/);
if (m) { | |
return text; | |
} else { | |
if (log) | |
GLog.write("Couldn't match " + text + " to date"); | |
return ""; | |
} | |
} | |
/** | |
* Create a string of dots that gets longer with the log of count. | |
*/ | |
function countToRepeatedDots(count) { | |
// Find ln_2(count) + 1 | |
var logCount = Math.ceil(Math.log(count) / 0.693148) + 1; | |
return new Array(logCount + 1).join("."); | |
} | |
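// For example: countToRepeatedDots(1) -> ".", countToRepeatedDots(2) -> "..",
// countToRepeatedDots(8) -> "....".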
function fetchRoutes() { | |
url = "/json/routes"; | |
if (log) | |
GLog.writeUrl(url); | |
GDownloadUrl(url, callbackDisplayRoutes); | |
} | |
function callbackDisplayRoutes(data, responseCode) { | |
if (responseCode != 200) { | |
return;
} | |
var routes = eval(data); | |
var routesList = document.getElementById("routeList"); | |
while (routesList.hasChildNodes()) { | |
routesList.removeChild(routesList.firstChild); | |
} | |
for (i = 0; i < routes.length; ++i) { | |
var routeId = routes[i][0]; | |
var shortName = document.createElement("span"); | |
shortName.className = "shortName"; | |
shortName.appendChild(document.createTextNode(routes[i][1] + " ")); | |
var routeName = routes[i][2]; | |
var elem = document.createElement("div"); | |
elem.appendChild(shortName); | |
elem.appendChild(document.createTextNode(routeName)); | |
elem.id = "route_" + routeId; | |
elem.className = "routeChoice"; | |
elem.title = routeName; | |
GEvent.addDomListener(elem, "click", makeClosure(selectRoute, routeId)); | |
var routeContainer = document.createElement("div"); | |
routeContainer.id = "route_container_" + routeId; | |
routeContainer.className = "routeContainer"; | |
routeContainer.appendChild(elem); | |
routesList.appendChild(routeContainer); | |
} | |
} | |
function selectRoute(routeId) { | |
var routesList = document.getElementById("routeList"); | |
var routeSpans = routesList.getElementsByTagName("div");
for (var i = 0; i < routeSpans.length; ++i) { | |
if (routeSpans[i].className == "routeChoiceSelected") { | |
routeSpans[i].className = "routeChoice"; | |
} | |
} | |
// remove any previously-expanded route | |
var tripInfo = document.getElementById("tripInfo"); | |
if (tripInfo) | |
tripInfo.parentNode.removeChild(tripInfo); | |
selectedRoute = routeId; | |
var span = document.getElementById("route_" + routeId); | |
span.className = "routeChoiceSelected"; | |
fetchPatterns(routeId); | |
} | |
function fetchPatterns(routeId) { | |
url = "/json/routepatterns?route=" + encodeURIComponent(routeId) + "&time=" + parseTimeInput() + "&date=" + parseDateInput(); | |
if (log) | |
GLog.writeUrl(url); | |
GDownloadUrl(url, callbackDisplayPatterns); | |
} | |
function callbackDisplayPatterns(data, responseCode) { | |
if (responseCode != 200) { | |
return; | |
} | |
var div = document.createElement("div"); | |
div.className = "tripSection"; | |
div.id = "tripInfo"; | |
var firstTrip = null; | |
var patterns = eval(data); | |
clearMap(); | |
for (i = 0; i < patterns.length; ++i) { | |
patternDiv = document.createElement("div");
patternDiv.className = 'patternSection';
div.appendChild(patternDiv);
var pat = patterns[i]; // [patName, patId, len(early trips), trips, len(later trips), has_non_zero_trip_type]
if (pat[5] == '1') {
patternDiv.className += " unusualPattern";
} | |
patternDiv.appendChild(document.createTextNode(pat[0])); | |
patternDiv.appendChild(document.createTextNode(", " + (pat[2] + pat[3].length + pat[4]) + " trips: ")); | |
if (pat[2] > 0) { | |
patternDiv.appendChild(document.createTextNode(countToRepeatedDots(pat[2]) + " ")); | |
} | |
for (j = 0; j < pat[3].length; ++j) { | |
var trip = pat[3][j]; | |
var tripId = trip[1]; | |
if ((i == 0) && (j == 0)) | |
firstTrip = tripId; | |
patternDiv.appendChild(document.createTextNode(" ")); | |
var span = document.createElement("span"); | |
span.appendChild(document.createTextNode(formatTime(trip[0]))); | |
span.id = "trip_" + tripId; | |
GEvent.addDomListener(span, "click", makeClosure(selectTrip, tripId)); | |
patternDiv.appendChild(span);
span.className = "tripChoice"; | |
} | |
if (pat[4] > 0) { | |
patternDiv.appendChild(document.createTextNode(" " + countToRepeatedDots(pat[4]))); | |
} | |
patternDiv.appendChild(document.createElement("br")); | |
} | |
route = document.getElementById("route_container_" + selectedRoute); | |
route.appendChild(div); | |
if (firstTrip != null)
selectTrip(firstTrip); | |
} | |
// Needed to get around limitation in javascript scope rules. | |
// See http://calculist.blogspot.com/2005/12/gotcha-gotcha.html | |
function makeClosure(f, a, b, c) { | |
return function() { f(a, b, c); }; | |
} | |
function make1ArgClosure(f, a, b, c) { | |
return function(x) { f(x, a, b, c); }; | |
} | |
function make2ArgClosure(f, a, b, c) { | |
return function(x, y) { f(x, y, a, b, c); }; | |
} | |
function selectTrip(tripId) { | |
var tripInfo = document.getElementById("tripInfo"); | |
if (tripInfo) { | |
var tripSpans = tripInfo.getElementsByTagName('span');
for (var i = 0; i < tripSpans.length; ++i) { | |
tripSpans[i].className = 'tripChoice'; | |
} | |
} | |
var span = document.getElementById("trip_" + tripId); | |
// Won't find the span if a different route is selected | |
if (span) { | |
span.className = 'tripChoiceSelected'; | |
} | |
clearMap(); | |
url = "/json/tripstoptimes?trip=" + encodeURIComponent(tripId); | |
if (log) | |
GLog.writeUrl(url); | |
GDownloadUrl(url, callbackDisplayTripStopTimes); | |
fetchTripPolyLine(tripId); | |
fetchTripRows(tripId); | |
} | |
function callbackDisplayTripStopTimes(data, responseCode) { | |
if (responseCode != 200) { | |
return; | |
} | |
var stopsTimes = eval(data); | |
if (!stopsTimes) return; | |
displayTripStopTimes(stopsTimes[0], stopsTimes[1]); | |
} | |
function fetchTripPolyLine(tripId) { | |
url = "/json/tripshape?trip=" + encodeURIComponent(tripId); | |
if (log) | |
GLog.writeUrl(url); | |
GDownloadUrl(url, callbackDisplayTripPolyLine); | |
} | |
function callbackDisplayTripPolyLine(data, responseCode) { | |
if (responseCode != 200) { | |
return; | |
} | |
var points = eval(data); | |
if (!points) return; | |
displayPolyLine(points); | |
} | |
var boundsOfPolyLine = null; | |
function expandBoundingBox(latLng) { | |
if (boundsOfPolyLine == null) { | |
boundsOfPolyLine = new GLatLngBounds(latLng, latLng); | |
} else { | |
boundsOfPolyLine.extend(latLng); | |
} | |
} | |
/** | |
* Display a line given a list of points | |
* | |
* @param {Array} List of lat,lng pairs | |
*/ | |
function displayPolyLine(points) { | |
var linePoints = Array(); | |
for (i = 0; i < points.length; ++i) { | |
var ll = new GLatLng(points[i][0], points[i][1]); | |
expandBoundingBox(ll); | |
linePoints[linePoints.length] = ll; | |
} | |
var polyline = new GPolyline(linePoints, "#FF0000", 4); | |
map.addOverlay(polyline); | |
map.setCenter(boundsOfPolyLine.getCenter(), map.getBoundsZoomLevel(boundsOfPolyLine)); | |
} | |
function displayTripStopTimes(stops, times) { | |
for (i = 0; i < stops.length; ++i) { | |
var marker; | |
if (times && times[i] != null) { | |
marker = addStopMarkerFromList(stops[i], true, formatTime(times[i])); | |
} else { | |
marker = addStopMarkerFromList(stops[i], true); | |
} | |
expandBoundingBox(marker.getPoint()); | |
} | |
map.setCenter(boundsOfPolyLine.getCenter(), map.getBoundsZoomLevel(boundsOfPolyLine)); | |
} | |
function fetchTripRows(tripId) { | |
url = "/json/triprows?trip=" + encodeURIComponent(tripId); | |
if (log) | |
GLog.writeUrl(url); | |
GDownloadUrl(url, make2ArgClosure(callbackDisplayTripRows, tripId)); | |
} | |
function callbackDisplayTripRows(data, responseCode, tripId) { | |
if (responseCode != 200) { | |
return; | |
} | |
var rows = eval(data); | |
if (!rows) return; | |
var html = ""; | |
for (var i = 0; i < rows.length; ++i) { | |
var filename = rows[i][0]; | |
var row = rows[i][1]; | |
html += "<b>" + filename + "</b>: " + formatDictionary(row) + "<br>"; | |
} | |
html += svgTag("/ttablegraph?height=100&trip=" + tripId, "height='115' width='100%'"); | |
var bottombarDiv = document.getElementById("bottombar"); | |
bottombarDiv.style.display = "block"; | |
bottombarDiv.style.height = "175px"; | |
bottombarDiv.innerHTML = html; | |
sizeRouteList(); | |
} | |
/** | |
* Return HTML to embed a SVG object in this page. src is the location of | |
* the SVG and attributes is inserted directly into the object or embed | |
* tag. | |
*/ | |
function svgTag(src, attributes) { | |
if (navigator.userAgent.toLowerCase().indexOf("msie") != -1) { | |
if (isSVGControlInstalled()) { | |
return "<embed pluginspage='http://www.adobe.com/svg/viewer/install/' src='" + src + "' " + attributes +"></embed>"; | |
} else { | |
return "<p>Please install the <a href='http://www.adobe.com/svg/viewer/install/'>Adobe SVG Viewer</a> to get SVG support in IE</p>"; | |
} | |
} else { | |
return "<object data='" + src + "' type='image/svg+xml' " + attributes + "><p>No SVG support in your browser. Try Firefox 1.5 or newer or install the <a href='http://www.adobe.com/svg/viewer/install/'>Adobe SVG Viewer</a></p></object>"; | |
} | |
} | |
/** | |
* Format an Array object containing key-value pairs into a human readable | |
* string. | |
*/ | |
function formatDictionary(d) { | |
var output = ""; | |
var first = 1; | |
for (var k in d) { | |
if (first) { | |
first = 0; | |
} else { | |
output += " "; | |
} | |
output += "<b>" + k + "</b>=" + d[k]; | |
} | |
return output; | |
} | |
function windowHeight() { | |
// Standard browsers (Mozilla, Safari, etc.) | |
if (self.innerHeight) | |
return self.innerHeight; | |
// IE 6 | |
if (document.documentElement && document.documentElement.clientHeight) | |
return document.documentElement.clientHeight; | |
// IE 5 | |
if (document.body) | |
return document.body.clientHeight; | |
// Just in case. | |
return 0; | |
} | |
function sizeRouteList() { | |
var bottombarHeight = 0; | |
var bottombarDiv = document.getElementById('bottombar'); | |
if (bottombarDiv.style.display != 'none') { | |
bottombarHeight = document.getElementById('bottombar').offsetHeight
+ (parseInt(document.getElementById('bottombar').style.marginTop, 10) || 0);
} | |
var height = windowHeight() - document.getElementById('topbar').offsetHeight - 15 - bottombarHeight; | |
document.getElementById('content').style.height = height + 'px'; | |
if (map) { | |
// Without this displayPolyLine does not use the correct map size | |
map.checkResize(); | |
} | |
} | |
var calStartDate = new CalendarPopup(); | |
calStartDate.setReturnFunction("setStartDate"); | |
function maybeAddLeadingZero(number) { | |
if (number >= 10)
{ | |
return number; | |
} | |
return '0' + number; | |
} | |
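// For example: maybeAddLeadingZero(3) -> "03", maybeAddLeadingZero(12) -> 12,
// so setStartDate(2010, 3, 5) yields "20100305" for the yyyyMMdd date input.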
function setStartDate(y,m,d) { | |
document.getElementById('startDateInput').value = y + maybeAddLeadingZero(m) + maybeAddLeadingZero(d); | |
} | |
//]]> | |
</script> | |
</head> | |
<body class='sidebar-left' onload="load();" onunload="GUnload()" onresize="sizeRouteList()"> | |
<div id='topbar'> | |
<div id="edit"> | |
<span id="edit_status">...</span> | |
<form onSubmit="saveData(); return false;"><input value="Save" type="submit"></form>
</div> | |
<div id="agencyHeader">[agency]</div> | |
</div> | |
<div id='content'> | |
<div id='sidebar-wrapper'><div id='sidebar'> | |
Time: <input type="text" value="8:00" size="9" id="timeInput"><br>
Date: <input type="text" value="" size="8" id="startDateInput" name="startDateInput"> <a href="#" onclick="calStartDate.select(document.getElementById('startDateInput'),'startDateInput','yyyyMMdd'); return false;">select</a><br> | |
<form onSubmit="stopTextSearchSubmit(); return false;"> | |
Find Station: <input type="text" id="stopTextSearchInput"><input value="Search" type="submit"></form><br> | |
<form onSubmit="tripTextSearchSubmit(); return false;"> | |
Find Trip ID: <input type="text" id="tripTextSearchInput"><input value="Search" type="submit"></form><br> | |
<div id="routeList">routelist</div> | |
</div></div> | |
<div id='map-wrapper'> <div id='map'></div> </div> | |
</div> | |
<div id='bottombar'>bottom bar</div> | |
</body> | |
</html> | |
/* | |
* LabeledMarker Class | |
* | |
* Copyright 2007 Mike Purvis (http://uwmike.com) | |
* | |
* Licensed under the Apache License, Version 2.0 (the "License"); | |
* you may not use this file except in compliance with the License. | |
* You may obtain a copy of the License at | |
* | |
* http://www.apache.org/licenses/LICENSE-2.0 | |
* | |
* Unless required by applicable law or agreed to in writing, software | |
* distributed under the License is distributed on an "AS IS" BASIS, | |
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
* See the License for the specific language governing permissions and | |
* limitations under the License. | |
* | |
* This class extends the Maps API's standard GMarker class with the ability | |
* to support markers with textual labels. Please see articles here: | |
* | |
* http://googlemapsbook.com/2007/01/22/extending-gmarker/ | |
* http://googlemapsbook.com/2007/03/06/clickable-labeledmarker/ | |
*/ | |
/** | |
* Constructor for LabeledMarker, which picks up on strings from the GMarker | |
* options array, and then calls the GMarker constructor. | |
* | |
* @param {GLatLng} latlng | |
* @param {GMarkerOptions} Named optional arguments: | |
* opt_opts.labelText {String} text to place in the overlay div. | |
* opt_opts.labelClass {String} class to use for the overlay div. | |
* (default "markerLabel") | |
* opt_opts.labelOffset {GSize} label offset, the x- and y-distance between | |
* the marker's latlng and the upper-left corner of the text div. | |
*/ | |
function LabeledMarker(latlng, opt_opts){ | |
this.latlng_ = latlng; | |
this.opts_ = opt_opts; | |
this.initText_ = opt_opts.labelText || ""; | |
this.labelClass_ = opt_opts.labelClass || "markerLabel"; | |
this.labelOffset_ = opt_opts.labelOffset || new GSize(0, 0); | |
this.clickable_ = (opt_opts.clickable === undefined) ? true : opt_opts.clickable; // "|| true" would discard an explicit clickable:false
if (opt_opts.draggable) { | |
// This version of LabeledMarker doesn't support dragging. | |
opt_opts.draggable = false; | |
} | |
GMarker.apply(this, arguments); | |
} | |
// It's a limitation of JavaScript inheritance that we can't conveniently | |
// inherit from GMarker without having to run its constructor. In order for | |
// the constructor to run, it requires some dummy GLatLng. | |
LabeledMarker.prototype = new GMarker(new GLatLng(0, 0)); | |
/** | |
* Is called by GMap2's addOverlay method. Creates the text div and adds it | |
* to the relevant parent div. | |
* | |
* @param {GMap2} map the map that has had this labeledmarker added to it. | |
*/ | |
LabeledMarker.prototype.initialize = function(map) { | |
// Do the GMarker constructor first. | |
GMarker.prototype.initialize.apply(this, arguments); | |
this.map_ = map; | |
this.setText(this.initText_); | |
} | |
/** | |
* Create a new div for this label. | |
*/ | |
LabeledMarker.prototype.makeDiv_ = function() { | |
if (this.div_) { | |
return; | |
} | |
this.div_ = document.createElement("div"); | |
this.div_.className = this.labelClass_; | |
this.div_.style.position = "absolute"; | |
this.div_.style.cursor = "pointer"; | |
this.map_.getPane(G_MAP_MARKER_PANE).appendChild(this.div_); | |
if (this.clickable_) { | |
/** | |
* Creates a closure for passing events through to the source marker | |
* This is located in here to avoid cluttering the global namespace. | |
* The downside is that the local variables from initialize() continue | |
* to occupy space on the stack. | |
* | |
* @param {Object} object to receive event trigger. | |
* @param {GEventListener} event to be triggered. | |
*/ | |
function newEventPassthru(obj, event) { | |
return function() { | |
GEvent.trigger(obj, event); | |
}; | |
} | |
// Pass through events fired on the text div to the marker. | |
var eventPassthrus = ['click', 'dblclick', 'mousedown', 'mouseup', 'mouseover', 'mouseout']; | |
for(var i = 0; i < eventPassthrus.length; i++) { | |
var name = eventPassthrus[i]; | |
GEvent.addDomListener(this.div_, name, newEventPassthru(this, name)); | |
} | |
} | |
} | |
/** | |
* Return the html in the div of this label, or "" if none is set | |
*/ | |
LabeledMarker.prototype.getText = function() { | |
if (this.div_) { | |
return this.div_.innerHTML; | |
} else { | |
return ""; | |
} | |
} | |
/** | |
* Set the html in the div of this label to text. If text is "" or null remove | |
* the div. | |
*/ | |
LabeledMarker.prototype.setText = function(text) { | |
if (this.div_) { | |
if (text) { | |
this.div_.innerHTML = text; | |
} else { | |
// remove div | |
GEvent.clearInstanceListeners(this.div_); | |
this.div_.parentNode.removeChild(this.div_); | |
this.div_ = null; | |
} | |
} else { | |
if (text) { | |
this.makeDiv_(); | |
this.div_.innerHTML = text; | |
this.redraw(); | |
} | |
} | |
} | |
/** | |
* Move the text div based on current projection and zoom level, call the redraw() | |
* handler in GMarker. | |
* | |
* @param {Boolean} force will be true when pixel coordinates need to be recomputed. | |
*/ | |
LabeledMarker.prototype.redraw = function(force) { | |
GMarker.prototype.redraw.apply(this, arguments); | |
if (this.div_) { | |
// Calculate the DIV coordinates of two opposite corners of our bounds to | |
// get the size and position of our rectangle | |
var p = this.map_.fromLatLngToDivPixel(this.latlng_); | |
var z = GOverlay.getZIndex(this.latlng_.lat()); | |
// Now position our div based on the div coordinates of our bounds | |
this.div_.style.left = (p.x + this.labelOffset_.width) + "px"; | |
this.div_.style.top = (p.y + this.labelOffset_.height) + "px"; | |
this.div_.style.zIndex = z; // in front of the marker | |
} | |
} | |
/** | |
 * Removes the text div from the map pane, destroys event passthrus, and calls the | |
* default remove() handler in GMarker. | |
*/ | |
LabeledMarker.prototype.remove = function() { | |
this.setText(null); | |
GMarker.prototype.remove.apply(this, arguments); | |
} | |
/** | |
* Return a copy of this overlay, for the parent Map to duplicate itself in full. This | |
* is part of the Overlay interface and is used, for example, to copy everything in the | |
* main view into the mini-map. | |
*/ | |
LabeledMarker.prototype.copy = function() { | |
  return new LabeledMarker(this.latlng_, this.opts_); | |
} | |
Binary files /dev/null and b/origin-src/transitfeed-1.2.6/gtfsscheduleviewer/files/mm_20_blue.png differ
Binary files /dev/null and b/origin-src/transitfeed-1.2.6/gtfsscheduleviewer/files/mm_20_blue_trans.png differ
Binary files /dev/null and b/origin-src/transitfeed-1.2.6/gtfsscheduleviewer/files/mm_20_red_trans.png differ
Binary files /dev/null and b/origin-src/transitfeed-1.2.6/gtfsscheduleviewer/files/mm_20_shadow.png differ
Binary files /dev/null and b/origin-src/transitfeed-1.2.6/gtfsscheduleviewer/files/mm_20_shadow_trans.png differ
Binary files /dev/null and b/origin-src/transitfeed-1.2.6/gtfsscheduleviewer/files/mm_20_yellow.png differ
html { overflow: hidden; } | |
html, body { | |
margin: 0; | |
padding: 0; | |
height: 100%; | |
} | |
body { margin: 5px; } | |
#content { | |
position: relative; | |
margin-top: 5px; | |
} | |
#map-wrapper { | |
position: relative; | |
height: 100%; | |
width: auto; | |
left: 0; | |
top: 0; | |
z-index: 100; | |
} | |
#map { | |
position: relative; | |
height: 100%; | |
width: auto; | |
border: 1px solid #aaa; | |
} | |
#sidebar-wrapper { | |
position: absolute; | |
height: 100%; | |
width: 220px; | |
top: 0; | |
border: 1px solid #aaa; | |
overflow: auto; | |
z-index: 300; | |
} | |
#sidebar { | |
position: relative; | |
width: auto; | |
padding: 4px; | |
overflow: hidden; | |
} | |
#topbar { | |
position: relative; | |
padding: 2px; | |
border: 1px solid #aaa; | |
margin: 0; | |
} | |
#topbar h1 { | |
white-space: nowrap; | |
overflow: hidden; | |
font-size: 14pt; | |
font-weight: bold; | |
margin: 0; | |
} | |
body.sidebar-right #map-wrapper { margin-right: 229px; } | |
body.sidebar-right #sidebar-wrapper { right: 0; } | |
body.sidebar-left #map { margin-left: 229px; } | |
body.sidebar-left #sidebar { left: 0; } | |
body.nosidebar #map { margin: 0; } | |
body.nosidebar #sidebar { display: none; } | |
#bottombar { | |
position: relative; | |
padding: 2px; | |
border: 1px solid #aaa; | |
margin-top: 5px; | |
display: none; | |
} | |
/* holly hack for IE to get position:bottom right | |
see: http://www.positioniseverything.net/abs_relbugs.html | |
\*/ | |
* html #topbar { height: 1px; } | |
/* */ | |
body { | |
  font-family: helvetica, arial, sans-serif; | |
} | |
h1 { | |
margin-top: 0.5em; | |
margin-bottom: 0.5em; | |
} | |
h2 { | |
margin-top: 0.2em; | |
margin-bottom: 0.2em; | |
} | |
h3 { | |
margin-top: 0.2em; | |
margin-bottom: 0.2em; | |
} | |
.tooltip { | |
white-space: nowrap; | |
padding: 2px; | |
color: black; | |
font-size: 12px; | |
background-color: white; | |
border: 1px solid black; | |
cursor: pointer; | |
filter:alpha(opacity=60); | |
-moz-opacity: 0.6; | |
opacity: 0.6; | |
} | |
#routeList { | |
border: 1px solid black; | |
overflow: auto; | |
} | |
.shortName { | |
  font-size: larger; | |
font-weight: bold; | |
} | |
.routeChoice,.tripChoice,.routeChoiceSelected,.tripChoiceSelected { | |
white-space: nowrap; | |
cursor: pointer; | |
padding: 0px 2px; | |
color: black; | |
line-height: 1.4em; | |
font-size: smaller; | |
overflow: hidden; | |
} | |
.tripChoice { | |
color: blue; | |
} | |
.routeChoiceSelected,.tripChoiceSelected { | |
background-color: blue; | |
color: white; | |
} | |
.tripSection { | |
padding-left: 0px; | |
font-size: 10pt; | |
background-color: lightblue; | |
} | |
.patternSection { | |
margin-left: 8px; | |
padding-left: 2px; | |
border-bottom: 1px solid grey; | |
} | |
.unusualPattern { | |
background-color: #aaa; | |
color: #444; | |
} | |
/* Following styles are used by location_editor.py */ | |
#edit { | |
visibility: hidden; | |
float: right; | |
font-size: 80%; | |
} | |
#edit form { | |
display: inline; | |
} |
' Copyright 1999-2000 Adobe Systems Inc. All rights reserved. Permission to redistribute | |
' granted provided that this file is not modified in any way. This file is provided with | |
' absolutely no warranties of any kind. | |
Function isSVGControlInstalled() | |
on error resume next | |
isSVGControlInstalled = IsObject(CreateObject("Adobe.SVGCtl")) | |
end Function | |
#!/usr/bin/python2.5 | |
# | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""Output svg/xml data for a marey graph | |
Marey graphs are a visualization form typically used for timetables. Time | |
is on the x-axis and position on the y-axis. This module reads data from a | |
transitfeed.Schedule and creates a marey graph in svg/xml format. The graph | |
shows the speed between stops for each trip of a route. | |
TODO: This module was taken from an internal Google tool. It works but is not | |
well intergrated into transitfeed and schedule_viewer. Also, it has lots of | |
ugly hacks to compensate set canvas size and so on which could be cleaned up. | |
For a little more information see (I didn't make this URL ;-) | |
http://transliteracies.english.ucsb.edu/post/research-project/research-clearinghouse-individual/research-reports/the-indexical-imagination-marey%e2%80%99s-graphic-method-and-the-technological-transformation-of-writing-in-the-nineteenth-century | |
MareyGraph: Class, keeps cache of graph data and graph properties | |
and draws marey graphs in svg/xml format on request. | |
""" | |
import itertools | |
import transitfeed | |
class MareyGraph: | |
"""Produces and caches marey graph from transit feed data.""" | |
_MAX_ZOOM = 5.0 # change docstring of ChangeScaleFactor if this changes | |
_DUMMY_SEPARATOR = 10 #pixel | |
def __init__(self): | |
# Timetable-related state | |
self._cache = str() | |
self._stoplist = [] | |
self._tlist = [] | |
self._stations = [] | |
self._decorators = [] | |
# TODO: Initialize default values via constructor parameters | |
# or via a class constants | |
# Graph properties | |
self._tspan = 30 # number of hours to display | |
self._offset = 0 # starting hour | |
self._hour_grid = 60 # number of pixels for an hour | |
self._min_grid = 5 # number of pixels between subhour lines | |
# Canvas properties | |
self._zoomfactor = 0.9 # svg Scaling factor | |
self._xoffset = 0 # move graph horizontally | |
self._yoffset = 0 # move graph vertically | |
self._bgcolor = "lightgrey" | |
# height/width of graph canvas before transform | |
self._gwidth = self._tspan * self._hour_grid | |
def Draw(self, stoplist=None, triplist=None, height=520): | |
"""Main interface for drawing the marey graph. | |
If called without arguments, the data generated in the previous call | |
will be used. New decorators can be added between calls. | |
Args: | |
# Class Stop is defined in transitfeed.py | |
stoplist: [Stop, Stop, ...] | |
# Class Trip is defined in transitfeed.py | |
triplist: [Trip, Trip, ...] | |
Returns: | |
# A string containing an svg/xml web page with a marey graph. | |
" <svg width="1440" height="520" version="1.1" ... " | |
""" | |
output = str() | |
if not triplist: | |
triplist = [] | |
if not stoplist: | |
stoplist = [] | |
if not self._cache or triplist or stoplist: | |
self._gheight = height | |
self._tlist=triplist | |
self._slist=stoplist | |
self._decorators = [] | |
self._stations = self._BuildStations(stoplist) | |
self._cache = "%s %s %s %s" % (self._DrawBox(), | |
self._DrawHours(), | |
self._DrawStations(), | |
self._DrawTrips(triplist)) | |
output = "%s %s %s %s" % (self._DrawHeader(), | |
self._cache, | |
self._DrawDecorators(), | |
self._DrawFooter()) | |
return output | |
def _DrawHeader(self): | |
svg_header = """ | |
<svg width="%s" height="%s" version="1.1" | |
xmlns="http://www.w3.org/2000/svg"> | |
<script type="text/ecmascript"><![CDATA[ | |
function init(evt) { | |
if ( window.svgDocument == null ) | |
svgDocument = evt.target.ownerDocument; | |
} | |
var oldLine = 0; | |
var oldStroke = 0; | |
var hoffset= %s; // Data from python | |
function parseLinePoints(pointnode){ | |
var wordlist = pointnode.split(" "); | |
var xlist = new Array(); | |
var h; | |
var m; | |
// TODO: add linebreaks as appropriate | |
var xstr = " Stop Times :"; | |
for (i=0;i<wordlist.length;i=i+2){ | |
var coord = wordlist[i].split(","); | |
h = Math.floor(parseInt((coord[0])-20)/60); | |
m = parseInt((coord[0]-20))%%60; | |
xstr = xstr +" "+ (hoffset+h) +":"+m; | |
} | |
return xstr; | |
} | |
function LineClick(tripid, x) { | |
var line = document.getElementById(tripid); | |
if (oldLine) | |
oldLine.setAttribute("stroke",oldStroke); | |
oldLine = line; | |
oldStroke = line.getAttribute("stroke"); | |
line.setAttribute("stroke","#fff"); | |
var dynTxt = document.getElementById("dynamicText"); | |
var tripIdTxt = document.createTextNode(x); | |
while (dynTxt.hasChildNodes()){ | |
dynTxt.removeChild(dynTxt.firstChild); | |
} | |
dynTxt.appendChild(tripIdTxt); | |
} | |
]]> </script> | |
<style type="text/css"><![CDATA[ | |
.T { fill:none; stroke-width:1.5 } | |
.TB { fill:none; stroke:#e20; stroke-width:2 } | |
.Station { fill:none; stroke-width:1 } | |
.Dec { fill:none; stroke-width:1.5 } | |
.FullHour { fill:none; stroke:#eee; stroke-width:1 } | |
.SubHour { fill:none; stroke:#ddd; stroke-width:1 } | |
.Label { fill:#aaa; font-family:Helvetica,Arial,sans; | |
text-anchor:middle } | |
.Info { fill:#111; font-family:Helvetica,Arial,sans; | |
text-anchor:start; } | |
]]></style> | |
<text class="Info" id="dynamicText" x="0" y="%d"></text> | |
<g id="mcanvas" transform="translate(%s,%s)"> | |
<g id="zcanvas" transform="scale(%s)"> | |
""" % (self._gwidth + self._xoffset + 20, self._gheight + 15, | |
self._offset, self._gheight + 10, | |
self._xoffset, self._yoffset, self._zoomfactor) | |
return svg_header | |
def _DrawFooter(self): | |
return "</g></g></svg>" | |
def _DrawDecorators(self): | |
"""Used to draw fancy overlays on trip graphs.""" | |
return " ".join(self._decorators) | |
def _DrawBox(self): | |
tmpstr = """<rect x="%s" y="%s" width="%s" height="%s" | |
fill="lightgrey" stroke="%s" stroke-width="2" /> | |
""" % (0, 0, self._gwidth + 20, self._gheight, self._bgcolor) | |
return tmpstr | |
def _BuildStations(self, stoplist): | |
"""Dispatches the best algorithm for calculating station line position. | |
Args: | |
# Class Stop is defined in transitfeed.py | |
stoplist: [Stop, Stop, ...] | |
Returns: | |
# One integer y-coordinate for each station normalized between | |
# 0 and X, where X is the height of the graph in pixels | |
[0, 33, 140, ... , X] | |
""" | |
stations = [] | |
dists = self._EuclidianDistances(stoplist) | |
stations = self._CalculateYLines(dists) | |
return stations | |
def _EuclidianDistances(self,slist): | |
"""Calculate Euclidean distances between stops. | |
Uses the stops' longitudes/latitudes to approximate distances | |
between stations and build a list with y-coordinates for the | |
horizontal lines in the graph. | |
Args: | |
# Class Stop is defined in transitfeed.py | |
slist: [Stop, Stop, ...] | |
Returns: | |
# One integer for each pair of stations | |
# indicating the approximate distance | |
[0,33,140, ... ,X] | |
""" | |
e_dists2 = [transitfeed.ApproximateDistanceBetweenStops(stop, tail) for | |
(stop,tail) in itertools.izip(slist, slist[1:])] | |
return e_dists2 | |
def _CalculateYLines(self, dists): | |
"""Builds a list with y-coordinates for the horizontal lines in the graph. | |
Args: | |
# One integer for each pair of stations | |
# indicating the approximate distance | |
dists: [0,33,140, ... ,X] | |
Returns: | |
# One integer y-coordinate for each station normalized between | |
# 0 and X, where X is the height of the graph in pixels | |
[0, 33, 140, ... , X] | |
""" | |
tot_dist = sum(dists) | |
if tot_dist > 0: | |
pixel_dist = [float(d * (self._gheight-20))/tot_dist for d in dists] | |
pixel_grid = [0]+[int(pd + sum(pixel_dist[0:i])) for i,pd in | |
enumerate(pixel_dist)] | |
else: | |
pixel_grid = [] | |
return pixel_grid | |
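The normalization in `_CalculateYLines` can be sketched standalone: each inter-station distance becomes a share of the drawable height, and the cumulative sums become the station y-offsets.

```python
def calculate_y_lines(dists, gheight):
    # Normalize cumulative inter-station distances into pixel
    # y-offsets within the drawable height (gheight minus a 20px
    # margin), mirroring _CalculateYLines above.
    tot_dist = sum(dists)
    if tot_dist <= 0:
        return []
    pixel_dist = [float(d * (gheight - 20)) / tot_dist for d in dists]
    return [0] + [int(pd + sum(pixel_dist[0:i]))
                  for i, pd in enumerate(pixel_dist)]
```

For distances `[100, 300, 100]` on a 520px-high canvas, the 500px of drawable space splits proportionally into station lines at `[0, 100, 400, 500]`.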
def _TravelTimes(self,triplist,index=0): | |
""" Calculate distances and plot stops. | |
Uses a timetable to approximate distances | |
between stations | |
Args: | |
# Class Trip is defined in transitfeed.py | |
triplist: [Trip, Trip, ...] | |
# (Optional) Index of the trip in triplist preferred for timetable calculation | |
index: 3 | |
Returns: | |
# One integer for each pair of stations | |
# indicating the approximate distance | |
[0,33,140, ... ,X] | |
""" | |
def DistanceInTravelTime(dep_secs, arr_secs): | |
t_dist = arr_secs-dep_secs | |
if t_dist<0: | |
t_dist = self._DUMMY_SEPARATOR # min separation | |
return t_dist | |
if not triplist: | |
return [] | |
if 0 < index < len(triplist): | |
trip = triplist[index] | |
else: | |
trip = triplist[0] | |
t_dists2 = [DistanceInTravelTime(stop[3],tail[2]) for (stop,tail) | |
in itertools.izip(trip.GetTimeStops(),trip.GetTimeStops()[1:])] | |
return t_dists2 | |
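The pairwise travel-time computation above can be sketched with a simplified stand-in for `Trip.GetTimeStops()` (here reduced to hypothetical `(arrival_secs, departure_secs)` pairs; the real tuples carry more fields):

```python
DUMMY_SEPARATOR = 10  # pixel separation used when times are unusable

def travel_time_dists(time_stops):
    # time_stops is a simplified stand-in for Trip.GetTimeStops():
    # a list of (arrival_secs, departure_secs) pairs, one per stop.
    dists = []
    for cur, nxt in zip(time_stops, time_stops[1:]):
        t = nxt[0] - cur[1]  # next arrival minus current departure
        if t < 0:
            t = DUMMY_SEPARATOR  # enforce a minimum separation
        dists.append(t)
    return dists
```

A decreasing or missing time yields the dummy separator rather than a negative station spacing.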
def _AddWarning(self, msg): | |
print msg | |
def _DrawTrips(self,triplist,colpar=""): | |
"""Generates svg polylines for each transit trip. | |
Args: | |
# Class Trip is defined in transitfeed.py | |
[Trip, Trip, ...] | |
Returns: | |
# A string containing a polyline tag for each trip | |
' <polyline class="T" stroke="#336633" points="433,0 ...' | |
""" | |
stations = [] | |
if not self._stations and triplist: | |
self._stations = self._CalculateYLines(self._TravelTimes(triplist)) | |
if not self._stations: | |
self._AddWarning("Failed to use traveltimes for graph") | |
self._stations = self._CalculateYLines(self._Uniform(triplist)) | |
if not self._stations: | |
self._AddWarning("Failed to calculate station distances") | |
return | |
stations = self._stations | |
tmpstrs = [] | |
servlist = [] | |
for t in triplist: | |
if not colpar: | |
if t.service_id not in servlist: | |
servlist.append(t.service_id) | |
shade = int(servlist.index(t.service_id) * (200/len(servlist))+55) | |
color = "#00%s00" % hex(shade)[2:4] | |
else: | |
color=colpar | |
start_offsets = [0] | |
first_stop = t.GetTimeStops()[0] | |
for j,freq_offset in enumerate(start_offsets): | |
if j>0 and not colpar: | |
color="purple" | |
scriptcall = 'onmouseover="LineClick(\'%s\',\'Trip %s starting %s\')"' % (t.trip_id, | |
t.trip_id, transitfeed.FormatSecondsSinceMidnight(t.GetStartTime())) | |
tmpstrhead = '<polyline class="T" id="%s" stroke="%s" %s points="' % \ | |
(str(t.trip_id),color, scriptcall) | |
tmpstrs.append(tmpstrhead) | |
for i, s in enumerate(t.GetTimeStops()): | |
arr_t = s[0] | |
dep_t = s[1] | |
if arr_t is None or dep_t is None: | |
continue | |
arr_x = int(arr_t/3600.0 * self._hour_grid) - self._hour_grid * self._offset | |
dep_x = int(dep_t/3600.0 * self._hour_grid) - self._hour_grid * self._offset | |
tmpstrs.append("%s,%s " % (int(arr_x+20), int(stations[i]+20))) | |
tmpstrs.append("%s,%s " % (int(dep_x+20), int(stations[i]+20))) | |
tmpstrs.append('" />') | |
return "".join(tmpstrs) | |
def _Uniform(self, triplist): | |
"""Fallback to assuming uniform distance between stations""" | |
# This should not be necessary, but we are in fallback mode | |
longest = max([len(t.GetTimeStops()) for t in triplist]) | |
return [100] * longest | |
def _DrawStations(self, color="#aaa"): | |
"""Generates svg with a horizontal line for each station/stop. | |
Args: | |
# An optional string with an html color code | |
color: "#aaa" | |
Returns: | |
# A string containing a polyline tag for each stop | |
" <polyline class="Station" stroke="#336633" points="20,0 ..." | |
""" | |
stations=self._stations | |
tmpstrs = [] | |
for y in stations: | |
tmpstrs.append(' <polyline class="Station" stroke="%s" \ | |
points="%s,%s, %s,%s" />' %(color,20,20+y+.5,self._gwidth+20,20+y+.5)) | |
return "".join(tmpstrs) | |
def _DrawHours(self): | |
"""Generates svg to show a vertical hour and sub-hour grid | |
Returns: | |
# A string containing a polyline tag for each grid line | |
" <polyline class="FullHour" points="20,0 ..." | |
""" | |
tmpstrs = [] | |
for i in range(0, self._gwidth, self._min_grid): | |
if i % self._hour_grid == 0: | |
tmpstrs.append('<polyline class="FullHour" points="%d,%d, %d,%d" />' \ | |
% (i + .5 + 20, 20, i + .5 + 20, self._gheight)) | |
tmpstrs.append('<text class="Label" x="%d" y="%d">%d</text>' | |
% (i + 20, 20, | |
(i / self._hour_grid + self._offset) % 24)) | |
else: | |
tmpstrs.append('<polyline class="SubHour" points="%d,%d,%d,%d" />' \ | |
% (i + .5 + 20, 20, i + .5 + 20, self._gheight)) | |
return "".join(tmpstrs) | |
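The grid-line placement in `_DrawHours` steps through the canvas in sub-hour increments and promotes every multiple of the hour width to a full-hour line. A minimal sketch of that split (before the fixed 20px margin is added):

```python
def hour_grid_positions(gwidth, hour_grid=60, min_grid=5):
    # Split vertical grid-line x-offsets into full-hour and sub-hour
    # positions, mirroring the loop in _DrawHours.
    full, sub = [], []
    for i in range(0, gwidth, min_grid):
        (full if i % hour_grid == 0 else sub).append(i)
    return full, sub
```

A two-hour-wide canvas (120px at the default 60px/hour) yields full-hour lines at 0 and 60 and 22 sub-hour lines between them.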
def AddStationDecoration(self, index, color="#f00"): | |
"""Adds a decorator highlighting the given station-line. | |
Args: | |
# Integer, index of stop to be highlighted. | |
index: 4 | |
# An optional string with a html color code | |
color: "#fff" | |
""" | |
tmpstr = str() | |
num_stations = len(self._stations) | |
ind = int(index) | |
if self._stations: | |
if 0 <= ind < num_stations: | |
y = self._stations[ind] | |
tmpstr = '<polyline class="Dec" stroke="%s" points="%s,%s,%s,%s" />' \ | |
% (color, 20, 20+y+.5, self._gwidth+20, 20+y+.5) | |
self._decorators.append(tmpstr) | |
def AddTripDecoration(self, triplist, color="#f00"): | |
"""Adds a decorator highlighting the given trips. | |
Args: | |
# Class Trip is defined in transitfeed.py | |
triplist: [Trip, Trip, ...] | |
# An optional string with a html color code | |
color: "#fff" | |
""" | |
tmpstr = self._DrawTrips(triplist,color) | |
self._decorators.append(tmpstr) | |
def ChangeScaleFactor(self, newfactor): | |
"""Changes the zoom of the graph manually. | |
1.0 is the original canvas size. | |
Args: | |
# float value between 0.0 and 5.0 | |
newfactor: 0.7 | |
""" | |
if float(newfactor) > 0 and float(newfactor) < self._MAX_ZOOM: | |
self._zoomfactor = newfactor | |
def ScaleLarger(self): | |
"""Increases the zoom of the graph one step (0.1 units).""" | |
newfactor = self._zoomfactor + 0.1 | |
if float(newfactor) > 0 and float(newfactor) < self._MAX_ZOOM: | |
self._zoomfactor = newfactor | |
def ScaleSmaller(self): | |
"""Decreases the zoom of the graph one step (0.1 units).""" | |
newfactor = self._zoomfactor - 0.1 | |
if float(newfactor) > 0 and float(newfactor) < self._MAX_ZOOM: | |
self._zoomfactor = newfactor | |
def ClearDecorators(self): | |
"""Removes all the current decorators. | |
""" | |
self._decorators = [] | |
def AddTextStripDecoration(self,txtstr): | |
tmpstr = '<text class="Info" x="%d" y="%d">%s</text>' % (0, | |
20 + self._gheight, txtstr) | |
self._decorators.append(tmpstr) | |
def SetSpan(self, first_arr, last_arr, mint=5, maxt=30): | |
s_hour = (first_arr / 3600) - 1 | |
e_hour = (last_arr / 3600) + 1 | |
self._offset = max(min(s_hour, 23), 0) | |
self._tspan = max(min(e_hour - s_hour, maxt), mint) | |
self._gwidth = self._tspan * self._hour_grid | |
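`SetSpan` starts the displayed window an hour before the first arrival and clamps its width to a sane range. The arithmetic, extracted as a standalone sketch (`//` matches the integer division the Python 2 original relies on):

```python
def set_span(first_arr, last_arr, mint=5, maxt=30):
    # Start the window an hour before the first arrival (clamped to
    # the 0..23 hour range) and clamp its width to [mint, maxt] hours.
    s_hour = (first_arr // 3600) - 1
    e_hour = (last_arr // 3600) + 1
    offset = max(min(s_hour, 23), 0)
    tspan = max(min(e_hour - s_hour, maxt), mint)
    return offset, tspan
```

A trip from 06:30 (23400s) to 09:00 (32400s) gives a window starting at hour 5 with the minimum 5-hour span.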
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
""" | |
This package provides implementation of a converter from a kml | |
file format into Google transit feed format. | |
The KmlParser class is the main class implementing the parser. | |
Currently only information about stops is extracted from a kml file. | |
The extractor expects the stops to be represented as placemarks with | |
a single point. | |
""" | |
import re | |
import string | |
import sys | |
import transitfeed | |
from transitfeed import util | |
import xml.dom.minidom as minidom | |
import zipfile | |
class Placemark(object): | |
def __init__(self): | |
self.name = "" | |
self.coordinates = [] | |
def IsPoint(self): | |
return len(self.coordinates) == 1 | |
def IsLine(self): | |
return len(self.coordinates) > 1 | |
class KmlParser(object): | |
def __init__(self, stopNameRe = '(.*)'): | |
""" | |
Args: | |
stopNameRe - a regular expression to extract a stop name from a | |
placemark name | |
""" | |
self.stopNameRe = re.compile(stopNameRe) | |
def Parse(self, filename, feed): | |
""" | |
Reads the kml file, parses it, and updates the Google transit feed | |
object with the extracted information. | |
Args: | |
filename - kml file name | |
feed - an instance of Schedule class to be updated | |
""" | |
dom = minidom.parse(filename) | |
self.ParseDom(dom, feed) | |
def ParseDom(self, dom, feed): | |
""" | |
Parses the given kml dom tree and updates the Google transit feed object. | |
Args: | |
dom - kml dom tree | |
feed - an instance of Schedule class to be updated | |
""" | |
shape_num = 0 | |
for node in dom.getElementsByTagName('Placemark'): | |
p = self.ParsePlacemark(node) | |
if p.IsPoint(): | |
(lon, lat) = p.coordinates[0] | |
m = self.stopNameRe.search(p.name) | |
feed.AddStop(lat, lon, m.group(1)) | |
elif p.IsLine(): | |
shape_num = shape_num + 1 | |
shape = transitfeed.Shape("kml_shape_" + str(shape_num)) | |
for (lon, lat) in p.coordinates: | |
shape.AddPoint(lat, lon) | |
feed.AddShapeObject(shape) | |
def ParsePlacemark(self, node): | |
ret = Placemark() | |
for child in node.childNodes: | |
if child.nodeName == 'name': | |
ret.name = self.ExtractText(child) | |
if child.nodeName == 'Point' or child.nodeName == 'LineString': | |
ret.coordinates = self.ExtractCoordinates(child) | |
return ret | |
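`ParsePlacemark` walks a placemark's child nodes looking for `name` and geometry elements. A standalone sketch of the same DOM traversal with `xml.dom.minidom` on a tiny point placemark (the sample KML string is hypothetical):

```python
from xml.dom import minidom

KML = ('<kml><Placemark><name>Stop A</name>'
       '<Point><coordinates>144.9,-37.8,0</coordinates></Point>'
       '</Placemark></kml>')

dom = minidom.parseString(KML)
placemarks = dom.getElementsByTagName('Placemark')
# Each <name> holds a text node; wholeText yields its unicode string,
# as ExtractText does above.
names = [p.getElementsByTagName('name')[0].firstChild.wholeText
         for p in placemarks]
```

This is the same structure `KmlParser.ParseDom` iterates over when calling `ParsePlacemark` per node.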
def ExtractText(self, node): | |
for child in node.childNodes: | |
if child.nodeType == child.TEXT_NODE: | |
return child.wholeText # is a unicode string | |
return "" | |
def ExtractCoordinates(self, node): | |
coordinatesText = "" | |
for child in node.childNodes: | |
if child.nodeName == 'coordinates': | |
coordinatesText = self.ExtractText(child) | |
break | |
ret = [] | |
for point in coordinatesText.split(): | |
coords = point.split(',') | |
ret.append((float(coords[0]), float(coords[1]))) | |
return ret | |
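The coordinate string format that `ExtractCoordinates` parses is whitespace-separated `lon,lat[,alt]` tuples; only longitude and latitude are kept. A self-contained version of that parsing:

```python
def extract_coordinates(text):
    # A KML <coordinates> element holds whitespace-separated
    # "lon,lat[,alt]" tuples; keep only (lon, lat) floats, as
    # ExtractCoordinates does above.
    points = []
    for point in text.split():
        parts = point.split(',')
        points.append((float(parts[0]), float(parts[1])))
    return points
```

Note KML puts longitude first, which is why the caller in `ParseDom` unpacks `(lon, lat)` and swaps the order for `feed.AddStop(lat, lon, ...)`.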
def main(): | |
usage = \ | |
"""%prog <input.kml> <output GTFS.zip> | |
Reads KML file <input.kml> and creates GTFS file <output GTFS.zip> with | |
placemarks in the KML represented as stops. | |
""" | |
parser = util.OptionParserLongError( | |
usage=usage, version='%prog '+transitfeed.__version__) | |
(options, args) = parser.parse_args() | |
if len(args) != 2: | |
parser.error('You did not provide all required command line arguments.') | |
if args[0] == 'IWantMyCrash': | |
raise Exception('For testCrashHandler') | |
parser = KmlParser() | |
feed = transitfeed.Schedule() | |
feed.save_all_stops = True | |
parser.Parse(args[0], feed) | |
feed.WriteGoogleTransitFeed(args[1]) | |
print "Done." | |
if __name__ == '__main__': | |
util.RunWithCrashHandler(main) | |
#!/usr/bin/python2.5 | |
# | |
# Copyright 2008 Google Inc. All Rights Reserved. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""A module for writing GTFS feeds out into Google Earth KML format. | |
For usage information run kmlwriter.py --help | |
If no output filename is specified, the output file will be given the same | |
name as the feed file (with ".kml" appended) and will be placed in the same | |
directory as the input feed. | |
The resulting KML file has a folder hierarchy which looks like this: | |
- Stops | |
* stop1 | |
* stop2 | |
- Routes | |
- route1 | |
- Shapes | |
* shape1 | |
* shape2 | |
- Patterns | |
- pattern1 | |
- pattern2 | |
- Trips | |
* trip1 | |
* trip2 | |
- Shapes | |
* shape1 | |
- Shape Points | |
* shape_point1 | |
* shape_point2 | |
* shape2 | |
- Shape Points | |
* shape_point1 | |
* shape_point2 | |
where the hyphens represent folders and the asterisks represent placemarks. | |
In a trip, a vehicle visits stops in a certain sequence. Such a sequence of | |
stops is called a pattern. A pattern is represented by a linestring connecting | |
the stops. The "Shapes" subfolder of a route folder contains placemarks for | |
each shape used by a trip in the route. The "Patterns" subfolder contains a | |
placemark for each unique pattern used by a trip in the route. The "Trips" | |
subfolder contains a placemark for each trip in the route. | |
Since there can be many trips and trips for the same route are usually similar, | |
they are not exported unless the --showtrips option is used. There is also | |
another option --splitroutes that groups the routes by vehicle type resulting | |
in a folder hierarchy which looks like this at the top level: | |
- Stops | |
- Routes - Bus | |
- Routes - Tram | |
- Routes - Rail | |
- Shapes | |
""" | |
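The grouping of trips into patterns described above can be sketched in a few lines; the `(trip_id, [stop_id, ...])` pair structure here is a hypothetical simplification of the real `transitfeed.Trip` objects:

```python
def group_trips_by_pattern(trips):
    # A "pattern" is the ordered sequence of stops a trip visits;
    # trips sharing that sequence are grouped under one placemark.
    # trips: iterable of (trip_id, [stop_id, ...]) pairs (simplified).
    patterns = {}
    for trip_id, stop_ids in trips:
        patterns.setdefault(tuple(stop_ids), []).append(trip_id)
    return patterns
```

Two trips visiting stops a, b in that order fall into one pattern; a trip visiting a, c forms a second.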
try: | |
import xml.etree.ElementTree as ET # python 2.5 | |
except ImportError, e: | |
import elementtree.ElementTree as ET # older pythons | |
import optparse | |
import os.path | |
import sys | |
import transitfeed | |
from transitfeed import util | |
class KMLWriter(object): | |
"""This class knows how to write out a transit feed as KML. | |
Sample usage: | |
KMLWriter().Write(<transitfeed.Schedule object>, <output filename>) | |
Attributes: | |
show_trips: True if the individual trips should be included in the routes. | |
split_routes: True if the routes should be split by type. | |
shape_points: True if individual shape points should be plotted. | |
""" | |
def __init__(self): | |
"""Initialise.""" | |
self.show_trips = False | |
self.split_routes = False | |
self.shape_points = False | |
self.altitude_per_sec = 0.0 | |
self.date_filter = None | |
def _SetIndentation(self, elem, level=0): | |
"""Indent the ElementTree DOM in place. | |
This is the recommended way to cause an ElementTree DOM to be | |
prettyprinted on output, as per: http://effbot.org/zone/element-lib.htm | |
Run this on the root element before outputting the tree. | |
Args: | |
elem: The element to start indenting from, usually the document root. | |
level: Current indentation level for recursion. | |
""" | |
i = "\n" + level*" " | |
if len(elem): | |
if not elem.text or not elem.text.strip(): | |
elem.text = i + " " | |
for elem in elem: | |
self._SetIndentation(elem, level+1) | |
if not elem.tail or not elem.tail.strip(): | |
elem.tail = i | |
else: | |
if level and (not elem.tail or not elem.tail.strip()): | |
elem.tail = i | |
def _CreateFolder(self, parent, name, visible=True, description=None): | |
"""Create a KML Folder element. | |
Args: | |
parent: The parent ElementTree.Element instance. | |
name: The folder name as a string. | |
visible: Whether the folder is initially visible or not. | |
description: A description string or None. | |
Returns: | |
The folder ElementTree.Element instance. | |
""" | |
folder = ET.SubElement(parent, 'Folder') | |
name_tag = ET.SubElement(folder, 'name') | |
name_tag.text = name | |
if description is not None: | |
desc_tag = ET.SubElement(folder, 'description') | |
desc_tag.text = description | |
if not visible: | |
visibility = ET.SubElement(folder, 'visibility') | |
visibility.text = '0' | |
return folder | |
def _CreateStyleForRoute(self, doc, route): | |
"""Create a KML Style element for the route. | |
The style sets the line colour if the route colour is specified. The | |
line thickness is set depending on the vehicle type. | |
Args: | |
doc: The KML Document ElementTree.Element instance. | |
route: The transitfeed.Route to create the style for. | |
Returns: | |
The id of the style as a string. | |
""" | |
style_id = 'route_%s' % route.route_id | |
style = ET.SubElement(doc, 'Style', {'id': style_id}) | |
linestyle = ET.SubElement(style, 'LineStyle') | |
width = ET.SubElement(linestyle, 'width') | |
type_to_width = {0: '3', # Tram | |
1: '3', # Subway | |
2: '5', # Rail | |
3: '1'} # Bus | |
width.text = type_to_width.get(route.route_type, '1') | |
if route.route_color: | |
color = ET.SubElement(linestyle, 'color') | |
red = route.route_color[0:2].lower() | |
green = route.route_color[2:4].lower() | |
blue = route.route_color[4:6].lower() | |
# GTFS colours are RRGGBB; KML colours are aabbggrr.
color.text = 'ff%s%s%s' % (blue, green, red)
return style_id | |
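The colour byte swap above is easy to get wrong: GTFS stores RRGGBB while KML wants aabbggrr. A small hypothetical helper (gtfs_color_to_kml is not part of transitfeed) isolates the conversion:

```python
def gtfs_color_to_kml(route_color, alpha='ff'):
    # GTFS route_color is RRGGBB; KML colours are aabbggrr
    # (alpha, then blue, green, red), all lowercase hex.
    red = route_color[0:2].lower()
    green = route_color[2:4].lower()
    blue = route_color[4:6].lower()
    return '%s%s%s%s' % (alpha, blue, green, red)

print(gtfs_color_to_kml('FF8800'))  # -> ff0088ff
```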
def _CreatePlacemark(self, parent, name, style_id=None, visible=True, | |
description=None): | |
"""Create a KML Placemark element. | |
Args: | |
parent: The parent ElementTree.Element instance. | |
name: The placemark name as a string. | |
style_id: If not None, the id of a style to use for the placemark. | |
visible: Whether the placemark is initially visible or not. | |
description: A description string or None. | |
Returns: | |
The placemark ElementTree.Element instance. | |
""" | |
placemark = ET.SubElement(parent, 'Placemark') | |
placemark_name = ET.SubElement(placemark, 'name') | |
placemark_name.text = name | |
if description is not None: | |
desc_tag = ET.SubElement(placemark, 'description') | |
desc_tag.text = description | |
if style_id is not None: | |
styleurl = ET.SubElement(placemark, 'styleUrl') | |
styleurl.text = '#%s' % style_id | |
if not visible: | |
visibility = ET.SubElement(placemark, 'visibility') | |
visibility.text = '0' | |
return placemark | |
def _CreateLineString(self, parent, coordinate_list): | |
"""Create a KML LineString element. | |
The points of the string are given in coordinate_list. Every element of | |
coordinate_list should be one of a tuple (longitude, latitude) or a tuple | |
(longitude, latitude, altitude). | |
Args: | |
parent: The parent ElementTree.Element instance. | |
coordinate_list: The list of coordinates. | |
Returns: | |
The LineString ElementTree.Element instance or None if coordinate_list is | |
empty. | |
""" | |
if not coordinate_list: | |
return None | |
linestring = ET.SubElement(parent, 'LineString') | |
tessellate = ET.SubElement(linestring, 'tessellate') | |
tessellate.text = '1' | |
if len(coordinate_list[0]) == 3: | |
altitude_mode = ET.SubElement(linestring, 'altitudeMode') | |
altitude_mode.text = 'absolute' | |
coordinates = ET.SubElement(linestring, 'coordinates') | |
if len(coordinate_list[0]) == 3: | |
coordinate_str_list = ['%f,%f,%f' % t for t in coordinate_list] | |
else: | |
coordinate_str_list = ['%f,%f' % t for t in coordinate_list] | |
coordinates.text = ' '.join(coordinate_str_list) | |
return linestring | |
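The method above switches on the tuple arity: 2-tuples produce a ground-clamped line, 3-tuples add altitudeMode 'absolute'. A standalone sketch of the same logic (create_line_string is a hypothetical function mirroring the method, not the transitfeed API):

```python
import xml.etree.ElementTree as ET

def create_line_string(parent, coordinate_list):
    # Mirrors KMLWriter._CreateLineString: returns None for an empty list;
    # 3-tuples (lon, lat, alt) get an absolute altitudeMode.
    if not coordinate_list:
        return None
    linestring = ET.SubElement(parent, 'LineString')
    ET.SubElement(linestring, 'tessellate').text = '1'
    if len(coordinate_list[0]) == 3:
        ET.SubElement(linestring, 'altitudeMode').text = 'absolute'
        fmt = '%f,%f,%f'
    else:
        fmt = '%f,%f'
    ET.SubElement(linestring, 'coordinates').text = ' '.join(
        fmt % t for t in coordinate_list)
    return linestring

placemark = ET.Element('Placemark')
create_line_string(placemark, [(18.42, -33.92), (18.47, -33.96)])
print(ET.tostring(placemark).decode('ascii'))
```

Note that KML coordinates are longitude first, which is why the callers below build (stop_lon, stop_lat) tuples.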
def _CreateLineStringForShape(self, parent, shape): | |
"""Create a KML LineString using coordinates from a shape. | |
Args: | |
parent: The parent ElementTree.Element instance. | |
shape: The transitfeed.Shape instance. | |
Returns: | |
The LineString ElementTree.Element instance or None if the shape has no
points.
""" | |
coordinate_list = [(longitude, latitude) for | |
(latitude, longitude, distance) in shape.points] | |
return self._CreateLineString(parent, coordinate_list) | |
def _CreateStopsFolder(self, schedule, doc): | |
"""Create a KML Folder containing placemarks for each stop in the schedule. | |
If there are no stops in the schedule then no folder is created. | |
Args: | |
schedule: The transitfeed.Schedule instance. | |
doc: The KML Document ElementTree.Element instance. | |
Returns: | |
The Folder ElementTree.Element instance or None if there are no stops. | |
""" | |
if not schedule.GetStopList(): | |
return None | |
stop_folder = self._CreateFolder(doc, 'Stops') | |
stops = list(schedule.GetStopList()) | |
stops.sort(key=lambda x: x.stop_name) | |
for stop in stops: | |
desc_items = [] | |
if stop.stop_desc: | |
desc_items.append(stop.stop_desc) | |
if stop.stop_url: | |
desc_items.append('Stop info page: <a href="%s">%s</a>' % ( | |
stop.stop_url, stop.stop_url)) | |
description = '<br/>'.join(desc_items) or None | |
placemark = self._CreatePlacemark(stop_folder, stop.stop_name, | |
description=description) | |
point = ET.SubElement(placemark, 'Point') | |
coordinates = ET.SubElement(point, 'coordinates') | |
coordinates.text = '%.6f,%.6f' % (stop.stop_lon, stop.stop_lat) | |
return stop_folder | |
def _CreateRoutePatternsFolder(self, parent, route, | |
style_id=None, visible=True): | |
"""Create a KML Folder containing placemarks for each pattern in the route. | |
A pattern is a sequence of stops used by one of the trips in the route. | |
If there are no patterns for the route then no folder is created and None
is returned.
Args: | |
parent: The parent ElementTree.Element instance. | |
route: The transitfeed.Route instance. | |
style_id: The id of a style to use if not None. | |
visible: Whether the folder is initially visible or not. | |
Returns: | |
The Folder ElementTree.Element instance or None if there are no patterns. | |
""" | |
pattern_id_to_trips = route.GetPatternIdTripDict() | |
if not pattern_id_to_trips: | |
return None | |
# sort by number of trips using the pattern | |
pattern_trips = list(pattern_id_to_trips.values())
pattern_trips.sort(key=len, reverse=True)
folder = self._CreateFolder(parent, 'Patterns', visible) | |
for n, trips in enumerate(pattern_trips): | |
trip_ids = [trip.trip_id for trip in trips] | |
name = 'Pattern %d (trips: %d)' % (n+1, len(trips)) | |
description = 'Trips using this pattern (%d in total): %s' % ( | |
len(trips), ', '.join(trip_ids)) | |
placemark = self._CreatePlacemark(folder, name, style_id, visible, | |
description) | |
coordinates = [(stop.stop_lon, stop.stop_lat) | |
for stop in trips[0].GetPattern()] | |
self._CreateLineString(placemark, coordinates) | |
return folder | |
def _CreateRouteShapesFolder(self, schedule, parent, route, | |
style_id=None, visible=True): | |
"""Create a KML Folder for the shapes of a route. | |
The folder contains a placemark for each shape referenced by a trip in the | |
route. If there are no such shapes, no folder is created and None is | |
returned. | |
Args: | |
schedule: The transitfeed.Schedule instance. | |
parent: The parent ElementTree.Element instance. | |
route: The transitfeed.Route instance. | |
style_id: The id of a style to use if not None. | |
visible: Whether the placemark is initially visible or not. | |
Returns: | |
The Folder ElementTree.Element instance or None. | |
""" | |
shape_id_to_trips = {} | |
for trip in route.trips: | |
if trip.shape_id: | |
shape_id_to_trips.setdefault(trip.shape_id, []).append(trip) | |
if not shape_id_to_trips: | |
return None | |
# sort by the number of trips using the shape | |
shape_id_to_trips_items = list(shape_id_to_trips.items())
shape_id_to_trips_items.sort(key=lambda item: len(item[1]), reverse=True)
folder = self._CreateFolder(parent, 'Shapes', visible) | |
for shape_id, trips in shape_id_to_trips_items: | |
trip_ids = [trip.trip_id for trip in trips] | |
name = '%s (trips: %d)' % (shape_id, len(trips)) | |
description = 'Trips using this shape (%d in total): %s' % ( | |
len(trips), ', '.join(trip_ids)) | |
placemark = self._CreatePlacemark(folder, name, style_id, visible, | |
description) | |
self._CreateLineStringForShape(placemark, schedule.GetShape(shape_id)) | |
return folder | |
def _CreateRouteTripsFolder(self, parent, route, style_id=None, schedule=None): | |
"""Create a KML Folder containing all the trips in the route. | |
The folder contains a placemark for each of these trips. If there are no | |
trips in the route, no folder is created and None is returned. | |
Args: | |
parent: The parent ElementTree.Element instance. | |
route: The transitfeed.Route instance. | |
style_id: A style id string for the placemarks or None.
schedule: The transitfeed.Schedule instance, or None. It is currently
unused but accepted for consistency with the other folder builders.
Returns: | |
The Folder ElementTree.Element instance or None. | |
""" | |
if not route.trips: | |
return None | |
trips = list(route.trips) | |
trips.sort(key=lambda x: x.trip_id) | |
trips_folder = self._CreateFolder(parent, 'Trips', visible=False) | |
for trip in trips: | |
if (self.date_filter and | |
not trip.service_period.IsActiveOn(self.date_filter)): | |
continue | |
if trip.trip_headsign: | |
description = 'Headsign: %s' % trip.trip_headsign | |
else: | |
description = None | |
coordinate_list = [] | |
for secs, stoptime, tp in trip.GetTimeInterpolatedStops(): | |
if self.altitude_per_sec > 0: | |
# Subtract four hours so that typical trips start near ground level.
coordinate_list.append((stoptime.stop.stop_lon, stoptime.stop.stop_lat,
(secs - 3600 * 4) * self.altitude_per_sec))
else: | |
coordinate_list.append((stoptime.stop.stop_lon, | |
stoptime.stop.stop_lat)) | |
placemark = self._CreatePlacemark(trips_folder, | |
trip.trip_id, | |
style_id=style_id, | |
visible=False, | |
description=description) | |
self._CreateLineString(placemark, coordinate_list) | |
return trips_folder | |
def _CreateRoutesFolder(self, schedule, doc, route_type=None): | |
"""Create a KML Folder containing routes in a schedule. | |
The folder contains a subfolder for each route in the schedule of type | |
route_type. If route_type is None, then all routes are selected. Each | |
subfolder contains a flattened graph placemark, a route shapes placemark | |
and, if show_trips is True, a subfolder containing placemarks for each of | |
the trips in the route. | |
If there are no routes in the schedule then no folder is created and None | |
is returned. | |
Args: | |
schedule: The transitfeed.Schedule instance. | |
doc: The KML Document ElementTree.Element instance. | |
route_type: The route type integer or None. | |
Returns: | |
The Folder ElementTree.Element instance or None. | |
""" | |
def GetRouteName(route): | |
"""Return a placemark name for the route. | |
Args: | |
route: The transitfeed.Route instance. | |
Returns: | |
The name as a string. | |
""" | |
name_parts = [] | |
if route.route_short_name: | |
name_parts.append('<b>%s</b>' % route.route_short_name) | |
if route.route_long_name: | |
name_parts.append(route.route_long_name) | |
return ' - '.join(name_parts) or route.route_id | |
def GetRouteDescription(route): | |
"""Return a placemark description for the route. | |
Args: | |
route: The transitfeed.Route instance. | |
Returns: | |
The description as a string. | |
""" | |
desc_items = [] | |
if route.route_desc: | |
desc_items.append(route.route_desc) | |
if route.route_url: | |
desc_items.append('Route info page: <a href="%s">%s</a>' % ( | |
route.route_url, route.route_url)) | |
description = '<br/>'.join(desc_items) | |
return description or None | |
routes = [route for route in schedule.GetRouteList() | |
if route_type is None or route.route_type == route_type] | |
if not routes: | |
return None | |
routes.sort(key=lambda x: GetRouteName(x)) | |
if route_type is not None: | |
route_type_names = {0: 'Tram, Streetcar or Light rail', | |
1: 'Subway or Metro', | |
2: 'Rail', | |
3: 'Bus', | |
4: 'Ferry', | |
5: 'Cable car', | |
6: 'Gondola or suspended cable car', | |
7: 'Funicular'} | |
type_name = route_type_names.get(route_type, str(route_type)) | |
folder_name = 'Routes - %s' % type_name | |
else: | |
folder_name = 'Routes' | |
routes_folder = self._CreateFolder(doc, folder_name, visible=False) | |
for route in routes: | |
style_id = self._CreateStyleForRoute(doc, route) | |
route_folder = self._CreateFolder(routes_folder, | |
GetRouteName(route), | |
description=GetRouteDescription(route)) | |
self._CreateRouteShapesFolder(schedule, route_folder, route, | |
style_id, False) | |
self._CreateRoutePatternsFolder(route_folder, route, style_id, False) | |
if self.show_trips: | |
self._CreateRouteTripsFolder(route_folder, route, style_id, schedule) | |
return routes_folder | |
def _CreateShapesFolder(self, schedule, doc): | |
"""Create a KML Folder containing all the shapes in a schedule. | |
The folder contains a placemark for each shape. If there are no shapes in | |
the schedule then the folder is not created and None is returned. | |
Args: | |
schedule: The transitfeed.Schedule instance. | |
doc: The KML Document ElementTree.Element instance. | |
Returns: | |
The Folder ElementTree.Element instance or None. | |
""" | |
if not schedule.GetShapeList(): | |
return None | |
shapes_folder = self._CreateFolder(doc, 'Shapes') | |
shapes = list(schedule.GetShapeList()) | |
shapes.sort(key=lambda x: x.shape_id) | |
for shape in shapes: | |
placemark = self._CreatePlacemark(shapes_folder, shape.shape_id) | |
self._CreateLineStringForShape(placemark, shape) | |
if self.shape_points: | |
self._CreateShapePointFolder(shapes_folder, shape) | |
return shapes_folder | |
def _CreateShapePointFolder(self, shapes_folder, shape): | |
"""Create a KML Folder containing all the shape points in a shape. | |
The folder contains a placemark for each shape point.
Args: | |
shapes_folder: A KML Shape Folder ElementTree.Element instance | |
shape: The shape to plot. | |
Returns: | |
The Folder ElementTree.Element instance or None. | |
""" | |
folder_name = shape.shape_id + ' Shape Points' | |
folder = self._CreateFolder(shapes_folder, folder_name, visible=False) | |
for (index, (lat, lon, dist)) in enumerate(shape.points): | |
placemark = self._CreatePlacemark(folder, str(index+1)) | |
point = ET.SubElement(placemark, 'Point') | |
coordinates = ET.SubElement(point, 'coordinates') | |
coordinates.text = '%.6f,%.6f' % (lon, lat) | |
return folder | |
def Write(self, schedule, output_file): | |
"""Writes out a feed as KML. | |
Args: | |
schedule: A transitfeed.Schedule object containing the feed to write. | |
output_file: The name of the output KML file, or file object to use. | |
""" | |
# Generate the DOM to write | |
root = ET.Element('kml') | |
root.attrib['xmlns'] = 'http://earth.google.com/kml/2.1' | |
doc = ET.SubElement(root, 'Document') | |
open_tag = ET.SubElement(doc, 'open') | |
open_tag.text = '1' | |
self._CreateStopsFolder(schedule, doc) | |
if self.split_routes: | |
route_types = set() | |
for route in schedule.GetRouteList(): | |
route_types.add(route.route_type) | |
route_types = list(route_types) | |
route_types.sort() | |
for route_type in route_types: | |
self._CreateRoutesFolder(schedule, doc, route_type) | |
else: | |
self._CreateRoutesFolder(schedule, doc) | |
self._CreateShapesFolder(schedule, doc) | |
# Make sure we pretty-print | |
self._SetIndentation(root) | |
# Now write the output | |
if isinstance(output_file, file): | |
output = output_file | |
else: | |
output = open(output_file, 'w') | |
output.write("""<?xml version="1.0" encoding="UTF-8"?>\n""") | |
ET.ElementTree(root).write(output, 'utf-8') | |
def main(): | |
usage = \ | |
'''%prog [options] <input GTFS.zip> [<output.kml>] | |
Reads GTFS file or directory <input GTFS.zip> and creates a KML file | |
<output.kml> that contains the geographical features of the input. If | |
<output.kml> is omitted a default filename is picked based on | |
<input GTFS.zip>. By default the KML contains all stops and shapes. | |
For more information see | |
http://code.google.com/p/googletransitdatafeed/wiki/KMLWriter | |
''' | |
parser = util.OptionParserLongError( | |
usage=usage, version='%prog '+transitfeed.__version__) | |
parser.add_option('-t', '--showtrips', action='store_true', | |
dest='show_trips', | |
help='include the individual trips for each route') | |
parser.add_option('-a', '--altitude_per_sec', action='store', type='float', | |
dest='altitude_per_sec', | |
help='if greater than 0 trips are drawn with time axis ' | |
'set to this many meters high for each second of time') | |
parser.add_option('-s', '--splitroutes', action='store_true', | |
dest='split_routes', | |
help='split the routes by type') | |
parser.add_option('-d', '--date_filter', action='store', type='string', | |
dest='date_filter', | |
help='Restrict to trips active on date YYYYMMDD') | |
parser.add_option('-p', '--display_shape_points', action='store_true', | |
dest='shape_points', | |
help='shows the actual points along shapes') | |
parser.set_defaults(altitude_per_sec=1.0) | |
options, args = parser.parse_args() | |
if len(args) < 1: | |
parser.error('You must provide the path of an input GTFS file.') | |
if args[0] == 'IWantMyCrash': | |
raise Exception('For testCrashHandler') | |
input_path = args[0] | |
if len(args) >= 2: | |
output_path = args[1] | |
else: | |
path = os.path.normpath(input_path) | |
(feed_dir, feed) = os.path.split(path) | |
if '.' in feed: | |
feed = feed.rsplit('.', 1)[0] # strip extension | |
output_filename = '%s.kml' % feed | |
output_path = os.path.join(feed_dir, output_filename) | |
loader = transitfeed.Loader(input_path, | |
problems=transitfeed.ProblemReporter()) | |
feed = loader.Load() | |
print "Writing %s" % output_path | |
writer = KMLWriter() | |
writer.show_trips = options.show_trips | |
writer.altitude_per_sec = options.altitude_per_sec | |
writer.split_routes = options.split_routes | |
writer.date_filter = options.date_filter | |
writer.shape_points = options.shape_points | |
writer.Write(feed, output_path) | |
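The default output-filename fallback in main() can be sketched as a standalone helper (default_kml_path is a hypothetical name, not part of transitfeed):

```python
import os

def default_kml_path(input_path):
    # Mirrors the fallback in main(): strip the feed's extension, if any,
    # and place '<feed>.kml' next to the input.
    path = os.path.normpath(input_path)
    feed_dir, feed = os.path.split(path)
    if '.' in feed:
        feed = feed.rsplit('.', 1)[0]  # strip extension
    return os.path.join(feed_dir, '%s.kml' % feed)

print(default_kml_path('feeds/google_transit.zip'))
```

So `kmlwriter.py google_transit.zip` writes google_transit.kml in the same directory unless an explicit output path is given.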
if __name__ == '__main__': | |
util.RunWithCrashHandler(main) | |
#!/usr/bin/python2.5 | |
# | |
# Copyright 2007 Google Inc. All Rights Reserved. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""A tool for merging two Google Transit feeds. | |
Given two Google Transit feeds intended to cover two disjoint calendar
intervals, this tool attempts to produce a single feed by merging as much
of the two feeds together as possible.
For example, most stops remain the same throughout the year. Therefore, many | |
of the stops given in stops.txt for the first feed represent the same stops | |
given in the second feed. This tool will try to merge these stops so they | |
only appear once in the resultant feed. | |
A note on terminology: The first schedule is referred to as the "old" schedule; | |
the second as the "new" schedule. The resultant schedule is referred to as | |
the "merged" schedule. Names of things in the old schedule are variations of | |
the letter "a" while names of things from the new schedule are variations of | |
"b". The objects that represents routes, agencies and so on are called | |
"entities". | |
usage: merge.py [options] old_feed_path new_feed_path merged_feed_path | |
Run merge.py --help for a list of the possible options. | |
""" | |
__author__ = 'timothy.stranex@gmail.com (Timothy Stranex)' | |
import datetime | |
import optparse | |
import os | |
import re | |
import sys | |
import time | |
import transitfeed | |
from transitfeed import util | |
import webbrowser | |
# TODO: | |
# 1. write unit tests that use actual data | |
# 2. write a proper trip and stop_times merger | |
# 3. add a serialised access method for stop_times and shapes to transitfeed | |
# 4. add support for merging schedules which have some service period overlap | |
def ApproximateDistanceBetweenPoints(pa, pb): | |
"""Finds the distance between two points on the Earth's surface. | |
This is an approximate distance based on assuming that the Earth is a sphere. | |
The points are specified by their latitude and longitude.
Args: | |
pa: the first (lat, lon) point tuple | |
pb: the second (lat, lon) point tuple | |
Returns: | |
The distance as a float in metres. | |
""" | |
alat, alon = pa | |
blat, blon = pb | |
sa = transitfeed.Stop(lat=alat, lng=alon) | |
sb = transitfeed.Stop(lat=blat, lng=blon) | |
return transitfeed.ApproximateDistanceBetweenStops(sa, sb) | |
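transitfeed.ApproximateDistanceBetweenStops is not shown here; as a sketch of the same kind of spherical-Earth approximation, a standard haversine formula gives comparable results (the function name and radius constant are assumptions, not transitfeed's exact implementation):

```python
import math

EARTH_RADIUS_METRES = 6371000.0  # mean Earth radius, spherical model

def approximate_distance(pa, pb):
    # Haversine great-circle distance between (lat, lon) tuples, in metres.
    alat, alon = (math.radians(d) for d in pa)
    blat, blon = (math.radians(d) for d in pb)
    sin_dlat = math.sin((blat - alat) / 2)
    sin_dlon = math.sin((blon - alon) / 2)
    h = sin_dlat ** 2 + math.cos(alat) * math.cos(blat) * sin_dlon ** 2
    return 2 * EARTH_RADIUS_METRES * math.asin(math.sqrt(h))

# One degree of latitude is roughly 111 km.
print(round(approximate_distance((0.0, 0.0), (1.0, 0.0))))
```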
class Error(Exception): | |
"""The base exception class for this module.""" | |
class MergeError(Error): | |
"""An error produced when two entities could not be merged.""" | |
class MergeProblemWithContext(transitfeed.ExceptionWithContext): | |
"""The base exception class for problem reporting in the merge module. | |
Attributes: | |
dataset_merger: The DataSetMerger that generated this problem. | |
entity_type_name: The entity type of the dataset_merger. This is just | |
dataset_merger.ENTITY_TYPE_NAME. | |
ERROR_TEXT: The text used for generating the problem message. | |
""" | |
def __init__(self, dataset_merger, problem_type=transitfeed.TYPE_WARNING, | |
**kwargs): | |
"""Initialise the exception object. | |
Args: | |
dataset_merger: The DataSetMerger instance that generated this problem. | |
problem_type: The problem severity. This should be set to one of the | |
corresponding constants in transitfeed. | |
kwargs: Keyword arguments to be saved as instance attributes. | |
""" | |
kwargs['type'] = problem_type | |
kwargs['entity_type_name'] = dataset_merger.ENTITY_TYPE_NAME | |
transitfeed.ExceptionWithContext.__init__(self, None, None, **kwargs) | |
self.dataset_merger = dataset_merger | |
def FormatContext(self): | |
return "In files '%s'" % self.dataset_merger.FILE_NAME | |
class SameIdButNotMerged(MergeProblemWithContext): | |
ERROR_TEXT = ("There is a %(entity_type_name)s in the old feed with id " | |
"'%(id)s' and one from the new feed with the same id but " | |
"they could not be merged:") | |
class CalendarsNotDisjoint(MergeProblemWithContext): | |
ERROR_TEXT = ("The service periods could not be merged since they are not " | |
"disjoint.") | |
class MergeNotImplemented(MergeProblemWithContext): | |
ERROR_TEXT = ("The feed merger does not currently support merging in this " | |
"file. The entries have been duplicated instead.") | |
class FareRulesBroken(MergeProblemWithContext): | |
ERROR_TEXT = ("The feed merger is currently unable to handle fare rules " | |
"properly.") | |
class MergeProblemReporter(transitfeed.ProblemReporter): | |
"""The base problem reporter class for the merge module.""" | |
def __init__(self, accumulator): | |
transitfeed.ProblemReporter.__init__(self, accumulator) | |
def SameIdButNotMerged(self, dataset, entity_id, reason): | |
self.AddToAccumulator( | |
SameIdButNotMerged(dataset, id=entity_id, reason=reason)) | |
def CalendarsNotDisjoint(self, dataset): | |
self.AddToAccumulator( | |
CalendarsNotDisjoint(dataset, problem_type=transitfeed.TYPE_ERROR)) | |
def MergeNotImplemented(self, dataset): | |
self.AddToAccumulator(MergeNotImplemented(dataset)) | |
def FareRulesBroken(self, dataset): | |
self.AddToAccumulator(FareRulesBroken(dataset)) | |
class HTMLProblemAccumulator(transitfeed.ProblemAccumulatorInterface): | |
"""A problem reporter which generates HTML output.""" | |
def __init__(self): | |
"""Initialise.""" | |
self._dataset_warnings = {} # a map from DataSetMergers to their warnings | |
self._dataset_errors = {} | |
self._warning_count = 0 | |
self._error_count = 0 | |
def _Report(self, merge_problem): | |
if merge_problem.IsWarning(): | |
dataset_problems = self._dataset_warnings | |
self._warning_count += 1 | |
else: | |
dataset_problems = self._dataset_errors | |
self._error_count += 1 | |
problem_html = '<li>%s</li>' % ( | |
merge_problem.FormatProblem().replace('\n', '<br>')) | |
dataset_problems.setdefault(merge_problem.dataset_merger, []).append( | |
problem_html) | |
def _GenerateStatsTable(self, feed_merger): | |
"""Generate an HTML table of merge statistics. | |
Args: | |
feed_merger: The FeedMerger instance. | |
Returns: | |
The generated HTML as a string. | |
""" | |
rows = [] | |
rows.append('<tr><th class="header"/><th class="header">Merged</th>' | |
'<th class="header">Copied from old feed</th>' | |
'<th class="header">Copied from new feed</th></tr>') | |
for merger in feed_merger.GetMergerList(): | |
stats = merger.GetMergeStats() | |
if stats is None: | |
continue | |
merged, not_merged_a, not_merged_b = stats | |
rows.append('<tr><th class="header">%s</th>' | |
'<td class="header">%d</td>' | |
'<td class="header">%d</td>' | |
'<td class="header">%d</td></tr>' % | |
(merger.DATASET_NAME, merged, not_merged_a, not_merged_b)) | |
return '<table>%s</table>' % '\n'.join(rows) | |
def _GenerateSection(self, problem_type): | |
"""Generate a listing of the given type of problems. | |
Args: | |
problem_type: The type of problem. This is one of the problem type | |
constants from transitfeed. | |
Returns: | |
The generated HTML as a string. | |
""" | |
if problem_type == transitfeed.TYPE_WARNING: | |
dataset_problems = self._dataset_warnings | |
heading = 'Warnings' | |
else: | |
dataset_problems = self._dataset_errors | |
heading = 'Errors' | |
if not dataset_problems: | |
return '' | |
prefix = '<h2 class="issueHeader">%s:</h2>' % heading | |
dataset_sections = [] | |
for dataset_merger, problems in dataset_problems.items(): | |
dataset_sections.append('<h3>%s</h3><ol>%s</ol>' % ( | |
dataset_merger.FILE_NAME, '\n'.join(problems))) | |
body = '\n'.join(dataset_sections) | |
return prefix + body | |
def _GenerateSummary(self): | |
"""Generate a summary of the warnings and errors. | |
Returns: | |
The generated HTML as a string. | |
""" | |
items = [] | |
if self._dataset_errors: | |
items.append('errors: %d' % self._error_count) | |
if self._dataset_warnings: | |
items.append('warnings: %d' % self._warning_count) | |
if items: | |
return '<p><span class="fail">%s</span></p>' % '<br>'.join(items) | |
else: | |
return '<p><span class="pass">feeds merged successfully</span></p>' | |
def WriteOutput(self, output_file, feed_merger, | |
old_feed_path, new_feed_path, merged_feed_path): | |
"""Write the HTML output to a file. | |
Args: | |
output_file: The file object that the HTML output will be written to. | |
feed_merger: The FeedMerger instance. | |
old_feed_path: The path to the old feed file as a string. | |
new_feed_path: The path to the new feed file as a string.
merged_feed_path: The path to the merged feed file as a string. This | |
may be None if no merged feed was written. | |
""" | |
if merged_feed_path is None: | |
html_merged_feed_path = '' | |
else: | |
html_merged_feed_path = '<p>Merged feed created: <code>%s</code></p>' % ( | |
merged_feed_path) | |
html_header = """<html> | |
<head> | |
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/> | |
<title>Feed Merger Results</title> | |
<style> | |
body {font-family: Georgia, serif; background-color: white} | |
.path {color: gray} | |
div.problem {max-width: 500px} | |
td,th {background-color: khaki; padding: 2px; font-family:monospace} | |
td.problem,th.problem {background-color: #dc143c; color: white; padding: 2px;
font-family:monospace} | |
table {border-spacing: 5px 0px; margin-top: 3px} | |
h3.issueHeader {padding-left: 1em} | |
span.pass {background-color: lightgreen} | |
span.fail {background-color: yellow} | |
.pass, .fail {font-size: 16pt; padding: 3px} | |
ol,.unused {padding-left: 40pt} | |
.header {background-color: white; font-family: Georgia, serif; padding: 0px} | |
th.header {text-align: right; font-weight: normal; color: gray} | |
.footer {font-size: 10pt} | |
</style> | |
</head> | |
<body> | |
<h1>Feed merger results</h1> | |
<p>Old feed: <code>%(old_feed_path)s</code></p> | |
<p>New feed: <code>%(new_feed_path)s</code></p> | |
%(html_merged_feed_path)s""" % locals() | |
html_stats = self._GenerateStatsTable(feed_merger) | |
html_summary = self._GenerateSummary() | |
html_errors = self._GenerateSection(transitfeed.TYPE_ERROR) | |
html_warnings = self._GenerateSection(transitfeed.TYPE_WARNING) | |
html_footer = """ | |
<div class="footer"> | |
Generated using transitfeed version %s on %s. | |
</div> | |
</body> | |
</html>""" % (transitfeed.__version__, | |
time.strftime('%B %d, %Y at %I:%M %p %Z')) | |
output_file.write(transitfeed.EncodeUnicode(html_header)) | |
output_file.write(transitfeed.EncodeUnicode(html_stats)) | |
output_file.write(transitfeed.EncodeUnicode(html_summary)) | |
output_file.write(transitfeed.EncodeUnicode(html_errors)) | |
output_file.write(transitfeed.EncodeUnicode(html_warnings)) | |
output_file.write(transitfeed.EncodeUnicode(html_footer)) | |
def LoadWithoutErrors(path, memory_db): | |
""""Return a Schedule object loaded from path; sys.exit for any error.""" | |
accumulator = transitfeed.ExceptionProblemAccumulator() | |
loading_problem_handler = MergeProblemReporter(accumulator) | |
try: | |
schedule = transitfeed.Loader(path, | |
memory_db=memory_db, | |
problems=loading_problem_handler).Load() | |
except transitfeed.ExceptionWithContext, e: | |
print >>sys.stderr, ( | |
"\n\nFeeds to merge must load without any errors.\n" | |
"While loading %s the following error was found:\n%s\n%s\n" % | |
(path, e.FormatContext(), transitfeed.EncodeUnicode(e.FormatProblem()))) | |
sys.exit(1) | |
return schedule | |
class DataSetMerger(object): | |
"""A DataSetMerger is in charge of merging a set of entities. | |
This is an abstract class and should be subclassed for each different entity | |
type. | |
Attributes: | |
ENTITY_TYPE_NAME: The name of the entity type like 'agency' or 'stop'. | |
FILE_NAME: The name of the file containing this data set like 'agency.txt'. | |
DATASET_NAME: A name for the dataset like 'Agencies' or 'Stops'. | |
""" | |
def __init__(self, feed_merger): | |
"""Initialise. | |
Args: | |
feed_merger: The FeedMerger. | |
""" | |
self.feed_merger = feed_merger | |
self._num_merged = 0 | |
self._num_not_merged_a = 0 | |
self._num_not_merged_b = 0 | |
def _MergeIdentical(self, a, b): | |
"""Tries to merge two values. The values are required to be identical. | |
Args: | |
a: The first value. | |
b: The second value. | |
Returns: | |
The trivially merged value. | |
Raises: | |
MergeError: The values were not identical. | |
""" | |
if a != b: | |
raise MergeError("values must be identical ('%s' vs '%s')" % | |
(transitfeed.EncodeUnicode(a), | |
transitfeed.EncodeUnicode(b))) | |
return b | |
def _MergeIdenticalCaseInsensitive(self, a, b): | |
"""Tries to merge two strings. | |
The strings are required to be the same ignoring case. The second string is
always used as the merged value. | |
Args: | |
a: The first string. | |
b: The second string. | |
Returns: | |
The merged string. This is equal to the second string. | |
Raises: | |
MergeError: The strings were not the same ignoring case. | |
""" | |
if a.lower() != b.lower(): | |
raise MergeError("values must be the same (case insensitive) " | |
"('%s' vs '%s')" % (transitfeed.EncodeUnicode(a), | |
transitfeed.EncodeUnicode(b))) | |
return b | |
def _MergeOptional(self, a, b): | |
"""Tries to merge two values which may be None. | |
If both values are set, they are required to be the same and the merge is
trivial. If only one of the values is set, the merge results in that one.
If neither is set, the merge results in None.
Args: | |
a: The first value. | |
b: The second value. | |
Returns: | |
The merged value. | |
Raises: | |
MergeError: If both values are not None and are not the same. | |
""" | |
if a and b: | |
if a != b: | |
raise MergeError("values must be identical if both specified " | |
"('%s' vs '%s')" % (transitfeed.EncodeUnicode(a), | |
transitfeed.EncodeUnicode(b))) | |
return a or b | |
def _MergeSameAgency(self, a_agency_id, b_agency_id): | |
"""Merge agency ids to the corresponding agency id in the merged schedule. | |
Args: | |
a_agency_id: an agency id from the old schedule | |
b_agency_id: an agency id from the new schedule | |
Returns: | |
The agency id of the corresponding merged agency. | |
Raises: | |
MergeError: If a_agency_id and b_agency_id do not correspond to the same | |
merged agency. | |
KeyError: Either a_agency_id or b_agency_id is not a valid agency id.
""" | |
a_agency_id = (a_agency_id or | |
self.feed_merger.a_schedule.GetDefaultAgency().agency_id) | |
b_agency_id = (b_agency_id or | |
self.feed_merger.b_schedule.GetDefaultAgency().agency_id) | |
a_agency = self.feed_merger.a_schedule.GetAgency( | |
a_agency_id)._migrated_entity | |
b_agency = self.feed_merger.b_schedule.GetAgency( | |
b_agency_id)._migrated_entity | |
if a_agency != b_agency: | |
raise MergeError('agency must be the same') | |
return a_agency.agency_id | |
def _SchemedMerge(self, scheme, a, b): | |
"""Tries to merge two entities according to a merge scheme. | |
A scheme is specified by a map where the keys are entity attributes and the | |
values are merge functions like DataSetMerger._MergeIdentical or
DataSetMerger._MergeOptional. The entity is first migrated to the merged
schedule. Then the attributes are individually merged as specified by the
scheme.
Then the attributes are individually merged as specified by the scheme. | |
Args: | |
scheme: The merge scheme, a map from entity attributes to merge | |
functions. | |
a: The entity from the old schedule. | |
b: The entity from the new schedule. | |
Returns: | |
The migrated and merged entity. | |
Raises: | |
MergeError: One of the attributes was not able to be merged. | |
""" | |
migrated = self._Migrate(b, self.feed_merger.b_schedule, False) | |
for attr, merger in scheme.items(): | |
a_attr = getattr(a, attr, None) | |
b_attr = getattr(b, attr, None) | |
try: | |
merged_attr = merger(a_attr, b_attr) | |
except MergeError, merge_error: | |
raise MergeError("Attribute '%s' could not be merged: %s." % ( | |
attr, merge_error)) | |
setattr(migrated, attr, merged_attr) | |
return migrated | |
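The scheme-driven merge amounts to applying a per-attribute merge function and copying the result onto a target object. A minimal self-contained sketch (the `Record` helper and merge functions are illustrative stand-ins, not transitfeed classes):

```python
class MergeError(Exception):
    pass

class Record(object):
    """A bare attribute holder standing in for a transitfeed entity."""
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

def identical(x, y):
    if x != y:
        raise MergeError("values differ (%r vs %r)" % (x, y))
    return y

def optional(x, y):
    if x and y and x != y:
        raise MergeError("conflict (%r vs %r)" % (x, y))
    return x or y

def scheme_merge(scheme, a, b, target):
    # Apply each attribute's merge function; any failure aborts the merge
    # with a message naming the offending attribute.
    for attr, merger in scheme.items():
        try:
            merged = merger(getattr(a, attr, None), getattr(b, attr, None))
        except MergeError as err:
            raise MergeError("Attribute %r could not be merged: %s" % (attr, err))
        setattr(target, attr, merged)
    return target
```

A route-like merge then reads as `scheme_merge({'route_id': identical, 'route_url': optional}, a, b, Record())`.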
def _MergeSameId(self): | |
"""Tries to merge entities based on their ids. | |
This tries to merge only the entities from the old and new schedules which | |
have the same id. These are added into the merged schedule. Entities which | |
do not merge or do not have the same id as another entity in the other | |
schedule are simply migrated into the merged schedule. | |
This method is less flexible than _MergeDifferentId since it only tries | |
to merge entities which have the same id while _MergeDifferentId tries to | |
merge everything. However, it is faster and so should be used whenever | |
possible. | |
This method makes use of various methods like _Merge and _Migrate which | |
are not implemented in the abstract DataSetMerger class. These methods
should be overwritten in a subclass to allow _MergeSameId to work with | |
different entity types. | |
Returns: | |
The number of merged entities. | |
""" | |
a_not_merged = [] | |
b_not_merged = [] | |
for a in self._GetIter(self.feed_merger.a_schedule): | |
try: | |
b = self._GetById(self.feed_merger.b_schedule, self._GetId(a)) | |
except KeyError: | |
# there was no entity in B with the same id as a | |
a_not_merged.append(a) | |
continue | |
try: | |
self._Add(a, b, self._MergeEntities(a, b)) | |
self._num_merged += 1 | |
except MergeError, merge_error: | |
a_not_merged.append(a) | |
b_not_merged.append(b) | |
self._ReportSameIdButNotMerged(self._GetId(a), merge_error) | |
for b in self._GetIter(self.feed_merger.b_schedule): | |
try: | |
a = self._GetById(self.feed_merger.a_schedule, self._GetId(b)) | |
except KeyError: | |
# there was no entity in A with the same id as b | |
b_not_merged.append(b) | |
# migrate the remaining entities | |
for a in a_not_merged: | |
newid = self._HasId(self.feed_merger.b_schedule, self._GetId(a)) | |
self._Add(a, None, self._Migrate(a, self.feed_merger.a_schedule, newid)) | |
for b in b_not_merged: | |
newid = self._HasId(self.feed_merger.a_schedule, self._GetId(b)) | |
self._Add(None, b, self._Migrate(b, self.feed_merger.b_schedule, newid)) | |
self._num_not_merged_a = len(a_not_merged) | |
self._num_not_merged_b = len(b_not_merged) | |
return self._num_merged | |
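The control flow of the same-id merge can be illustrated over plain dicts keyed by entity id; here "mergeable" is simplified to value equality, which stands in for _MergeEntities succeeding:

```python
def merge_same_id(a_entities, b_entities):
    merged, a_left, b_left = {}, [], []
    for eid, a_val in a_entities.items():
        if eid not in b_entities:
            a_left.append(eid)      # no counterpart in B: migrate as-is
            continue
        b_val = b_entities[eid]
        if a_val == b_val:
            merged[eid] = b_val     # mergeable: one copy in the output
        else:
            a_left.append(eid)      # same id but unmergeable data: both
            b_left.append(eid)      # survive (renamed during migration)
    for eid in b_entities:
        if eid not in a_entities:
            b_left.append(eid)      # no counterpart in A: migrate as-is
    return merged, a_left, b_left
```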
def _MergeByIdKeepNew(self): | |
"""Migrate all entities, discarding duplicates from the old/a schedule. | |
This method migrates all entities from the new/b schedule. It then migrates | |
entities in the old schedule where there isn't already an entity with the | |
same ID. | |
Unlike _MergeSameId this method migrates entities to the merged schedule | |
before comparing their IDs. This allows transfers to be compared when they | |
refer to stops that had their ID updated by migration. | |
This method makes use of various methods like _Migrate and _Add which | |
are not implemented in the abstract DataSetMerger class. These methods | |
should be overwritten in a subclass to allow _MergeByIdKeepNew to work with | |
different entity types. | |
Returns: | |
The number of merged entities. | |
""" | |
# Maps from migrated ID to tuple(original object, migrated object) | |
a_orig_migrated = {} | |
b_orig_migrated = {} | |
for orig in self._GetIter(self.feed_merger.a_schedule): | |
migrated = self._Migrate(orig, self.feed_merger.a_schedule) | |
a_orig_migrated[self._GetId(migrated)] = (orig, migrated) | |
for orig in self._GetIter(self.feed_merger.b_schedule): | |
migrated = self._Migrate(orig, self.feed_merger.b_schedule) | |
b_orig_migrated[self._GetId(migrated)] = (orig, migrated) | |
for migrated_id, (orig, migrated) in b_orig_migrated.items(): | |
self._Add(None, orig, migrated) | |
self._num_not_merged_b += 1 | |
for migrated_id, (orig, migrated) in a_orig_migrated.items(): | |
if migrated_id not in b_orig_migrated: | |
self._Add(orig, None, migrated) | |
self._num_not_merged_a += 1 | |
return self._num_merged | |
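Stripped of the migration machinery, the keep-new policy is: take everything from the new feed, then keep an old entity only when its id is unclaimed. A dict-based sketch:

```python
def merge_keep_new(a_entities, b_entities):
    # The new (b) feed always wins; old (a) entities survive only when
    # their id does not collide with a new entity's id.
    out = dict(b_entities)
    for eid, val in a_entities.items():
        if eid not in b_entities:
            out[eid] = val
    return out
```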
def _MergeDifferentId(self): | |
"""Tries to merge all possible combinations of entities. | |
This tries to merge every entity in the old schedule with every entity in | |
the new schedule. Unlike _MergeSameId, the ids do not need to match. | |
However, _MergeDifferentId is much slower than _MergeSameId. | |
This method makes use of various methods like _Merge and _Migrate which | |
are not implemented in the abstract DataSetMerger class. These methods
should be overwritten in a subclass to allow _MergeDifferentId to work with
different entity types. | |
Returns: | |
The number of merged entities. | |
""" | |
# TODO: The same entity from A could merge with multiple from B. | |
# This should either generate an error or should be prevented from | |
# happening. | |
for a in self._GetIter(self.feed_merger.a_schedule): | |
for b in self._GetIter(self.feed_merger.b_schedule): | |
try: | |
self._Add(a, b, self._MergeEntities(a, b)) | |
self._num_merged += 1 | |
except MergeError: | |
continue | |
for a in self._GetIter(self.feed_merger.a_schedule): | |
if a not in self.feed_merger.a_merge_map: | |
self._num_not_merged_a += 1 | |
newid = self._HasId(self.feed_merger.b_schedule, self._GetId(a)) | |
self._Add(a, None, | |
self._Migrate(a, self.feed_merger.a_schedule, newid)) | |
for b in self._GetIter(self.feed_merger.b_schedule): | |
if b not in self.feed_merger.b_merge_map: | |
self._num_not_merged_b += 1 | |
newid = self._HasId(self.feed_merger.a_schedule, self._GetId(b)) | |
self._Add(None, b, | |
self._Migrate(b, self.feed_merger.b_schedule, newid)) | |
return self._num_merged | |
def _ReportSameIdButNotMerged(self, entity_id, reason): | |
"""Report that two entities have the same id but could not be merged. | |
Args: | |
entity_id: The id of the entities. | |
reason: A string giving a reason why they could not be merged. | |
""" | |
self.feed_merger.problem_reporter.SameIdButNotMerged(self, | |
entity_id, | |
reason) | |
def _GetIter(self, schedule): | |
"""Returns an iterator of entities for this data set in the given schedule. | |
This method usually corresponds to one of the methods from | |
transitfeed.Schedule like GetAgencyList() or GetRouteList(). | |
Note: This method must be overwritten in a subclass if _MergeSameId or | |
_MergeDifferentId are to be used. | |
Args: | |
schedule: Either the old or new schedule from the FeedMerger. | |
Returns: | |
An iterator of entities. | |
""" | |
raise NotImplementedError() | |
def _GetById(self, schedule, entity_id): | |
"""Returns an entity given its id. | |
This method usually corresponds to one of the methods from | |
transitfeed.Schedule like GetAgency() or GetRoute(). | |
Note: This method must be overwritten in a subclass if _MergeSameId or | |
_MergeDifferentId are to be used. | |
Args: | |
schedule: Either the old or new schedule from the FeedMerger. | |
entity_id: The id string of the entity. | |
Returns: | |
The entity with the given id. | |
Raises: | |
KeyError: There is no entity with the given id.
""" | |
raise NotImplementedError() | |
def _HasId(self, schedule, entity_id): | |
"""Check if the schedule has an entity with the given id. | |
Args: | |
schedule: The transitfeed.Schedule instance to look in. | |
entity_id: The id of the entity. | |
Returns: | |
True if the schedule has an entity with the id or False if not. | |
""" | |
try: | |
self._GetById(schedule, entity_id) | |
has = True | |
except KeyError: | |
has = False | |
return has | |
def _MergeEntities(self, a, b): | |
"""Tries to merge the two entities. | |
Note: This method must be overwritten in a subclass if _MergeSameId or | |
_MergeDifferentId are to be used. | |
Args: | |
a: The entity from the old schedule. | |
b: The entity from the new schedule. | |
Returns: | |
The merged migrated entity. | |
Raises: | |
MergeError: The entities were not able to be merged. | |
""" | |
raise NotImplementedError() | |
def _Migrate(self, entity, schedule, newid): | |
"""Migrates the entity to the merge schedule. | |
This involves copying the entity and updating any ids to point to the | |
corresponding entities in the merged schedule. If newid is True then | |
a unique id is generated for the migrated entity using the original id | |
as a prefix. | |
Note: This method must be overwritten in a subclass if _MergeSameId or | |
_MergeDifferentId are to be used. | |
Args: | |
entity: The entity to migrate. | |
schedule: The schedule from the FeedMerger that contains the entity.
newid: Whether to generate a new id (True) or keep the original (False). | |
Returns: | |
The migrated entity. | |
""" | |
raise NotImplementedError() | |
def _Add(self, a, b, migrated): | |
"""Adds the migrated entity to the merged schedule. | |
If a and b are both not None, it means that a and b were merged to create | |
migrated. If one of a or b is None, it means that the other was not merged | |
but has been migrated. This mapping is registered with the FeedMerger. | |
Note: This method must be overwritten in a subclass if _MergeSameId or | |
_MergeDifferentId are to be used. | |
Args: | |
a: The original entity from the old schedule. | |
b: The original entity from the new schedule. | |
migrated: The migrated entity for the merged schedule. | |
""" | |
raise NotImplementedError() | |
def _GetId(self, entity): | |
"""Returns the id of the given entity. | |
Note: This method must be overwritten in a subclass if _MergeSameId or | |
_MergeDifferentId are to be used. | |
Args: | |
entity: The entity. | |
Returns: | |
The id of the entity as a string or None. | |
""" | |
raise NotImplementedError() | |
def MergeDataSets(self): | |
"""Merge the data sets. | |
This method is called in FeedMerger.MergeSchedule(). | |
Note: This method must be overwritten in a subclass. | |
Returns: | |
A boolean which is False if the data set could not be merged and
as a result the entire merge should be aborted. In this case, the problem | |
will have been reported using the FeedMerger's problem reporter. | |
""" | |
raise NotImplementedError() | |
def GetMergeStats(self): | |
"""Returns some merge statistics. | |
These are given as a tuple (merged, not_merged_a, not_merged_b) where | |
"merged" is the number of merged entities, "not_merged_a" is the number of | |
entities from the old schedule that were not merged and "not_merged_b" is | |
the number of entities from the new schedule that were not merged. | |
The return value can also be None. This means that there are no statistics | |
for this entity type. | |
The statistics are only available after MergeDataSets() has been called. | |
Returns: | |
Either the statistics tuple or None. | |
""" | |
return (self._num_merged, self._num_not_merged_a, self._num_not_merged_b) | |
class AgencyMerger(DataSetMerger): | |
"""A DataSetMerger for agencies.""" | |
ENTITY_TYPE_NAME = 'agency' | |
FILE_NAME = 'agency.txt' | |
DATASET_NAME = 'Agencies' | |
def _GetIter(self, schedule): | |
return schedule.GetAgencyList() | |
def _GetById(self, schedule, agency_id): | |
return schedule.GetAgency(agency_id) | |
def _MergeEntities(self, a, b): | |
"""Merges two agencies. | |
To be merged, they are required to have the same id, name, url and | |
timezone. The remaining language attribute is taken from the new agency. | |
Args: | |
a: The first agency. | |
b: The second agency. | |
Returns: | |
The merged agency. | |
Raises: | |
MergeError: The agencies could not be merged. | |
""" | |
def _MergeAgencyId(a_agency_id, b_agency_id): | |
"""Merge two agency ids. | |
The only difference between this and _MergeIdentical() is that the values | |
None and '' are regarded as being the same. | |
Args: | |
a_agency_id: The first agency id. | |
b_agency_id: The second agency id. | |
Returns: | |
The merged agency id. | |
Raises: | |
MergeError: The agency ids could not be merged. | |
""" | |
a_agency_id = a_agency_id or None | |
b_agency_id = b_agency_id or None | |
return self._MergeIdentical(a_agency_id, b_agency_id) | |
scheme = {'agency_id': _MergeAgencyId, | |
'agency_name': self._MergeIdentical, | |
'agency_url': self._MergeIdentical, | |
'agency_timezone': self._MergeIdentical} | |
return self._SchemedMerge(scheme, a, b) | |
def _Migrate(self, entity, schedule, newid): | |
a = transitfeed.Agency(field_dict=entity) | |
if newid: | |
a.agency_id = self.feed_merger.GenerateId(entity.agency_id) | |
return a | |
def _Add(self, a, b, migrated): | |
self.feed_merger.Register(a, b, migrated) | |
self.feed_merger.merged_schedule.AddAgencyObject(migrated) | |
def _GetId(self, entity): | |
return entity.agency_id | |
def MergeDataSets(self): | |
self._MergeSameId() | |
return True | |
class StopMerger(DataSetMerger): | |
"""A DataSetMerger for stops. | |
Attributes: | |
largest_stop_distance: The largest distance, in metres, allowed between
stops for them to be merged.
""" | |
ENTITY_TYPE_NAME = 'stop' | |
FILE_NAME = 'stops.txt' | |
DATASET_NAME = 'Stops' | |
largest_stop_distance = 10.0 | |
def __init__(self, feed_merger): | |
DataSetMerger.__init__(self, feed_merger) | |
self._merged = [] | |
self._a_not_merged = [] | |
self._b_not_merged = [] | |
def SetLargestStopDistance(self, distance): | |
"""Sets largest_stop_distance.""" | |
self.largest_stop_distance = distance | |
def _GetIter(self, schedule): | |
return schedule.GetStopList() | |
def _GetById(self, schedule, stop_id): | |
return schedule.GetStop(stop_id) | |
def _MergeEntities(self, a, b): | |
"""Merges two stops. | |
For the stops to be merged, they must have: | |
- the same stop_id | |
- the same stop_name (case insensitive) | |
- the same zone_id | |
- locations less than largest_stop_distance apart | |
The other attributes can have arbitrary changes. The merged attributes are
taken from the new stop. | |
Args: | |
a: The first stop. | |
b: The second stop. | |
Returns: | |
The merged stop. | |
Raises: | |
MergeError: The stops could not be merged. | |
""" | |
distance = transitfeed.ApproximateDistanceBetweenStops(a, b) | |
if distance > self.largest_stop_distance: | |
raise MergeError("Stops are too far apart: %.1fm " | |
"(largest_stop_distance is %.1fm)." % | |
(distance, self.largest_stop_distance)) | |
scheme = {'stop_id': self._MergeIdentical, | |
'stop_name': self._MergeIdenticalCaseInsensitive, | |
'zone_id': self._MergeIdentical, | |
'location_type': self._MergeIdentical} | |
return self._SchemedMerge(scheme, a, b) | |
def _Migrate(self, entity, schedule, newid): | |
migrated_stop = transitfeed.Stop(field_dict=entity) | |
if newid: | |
migrated_stop.stop_id = self.feed_merger.GenerateId(entity.stop_id) | |
return migrated_stop | |
def _Add(self, a, b, migrated_stop): | |
self.feed_merger.Register(a, b, migrated_stop) | |
# The migrated_stop will be added to feed_merger.merged_schedule later | |
# since adding must be done after the zone_ids have been finalized. | |
if a and b: | |
self._merged.append((a, b, migrated_stop)) | |
elif a: | |
self._a_not_merged.append((a, migrated_stop)) | |
elif b: | |
self._b_not_merged.append((b, migrated_stop)) | |
def _GetId(self, entity): | |
return entity.stop_id | |
def MergeDataSets(self): | |
num_merged = self._MergeSameId() | |
fm = self.feed_merger | |
# now we do all the zone_id and parent_station mapping | |
# the zone_ids for merged stops can be preserved | |
for (a, b, merged_stop) in self._merged: | |
assert a.zone_id == b.zone_id | |
fm.a_zone_map[a.zone_id] = a.zone_id | |
fm.b_zone_map[b.zone_id] = b.zone_id | |
merged_stop.zone_id = a.zone_id | |
if merged_stop.parent_station: | |
# Merged stop has a parent. Update it to be the parent it had in b. | |
parent_in_b = fm.b_schedule.GetStop(b.parent_station) | |
merged_stop.parent_station = fm.b_merge_map[parent_in_b].stop_id | |
fm.merged_schedule.AddStopObject(merged_stop) | |
self._UpdateAndMigrateUnmerged(self._a_not_merged, fm.a_zone_map, | |
fm.a_merge_map, fm.a_schedule) | |
self._UpdateAndMigrateUnmerged(self._b_not_merged, fm.b_zone_map, | |
fm.b_merge_map, fm.b_schedule) | |
print 'Stops merged: %d of %d, %d' % ( | |
num_merged, | |
len(fm.a_schedule.GetStopList()), | |
len(fm.b_schedule.GetStopList())) | |
return True | |
def _UpdateAndMigrateUnmerged(self, not_merged_stops, zone_map, merge_map, | |
schedule): | |
"""Correct references in migrated unmerged stops and add to merged_schedule. | |
For stops migrated from one of the input feeds to the output feed update the | |
parent_station and zone_id references to point to objects in the output | |
feed. Then add the migrated stop to the new schedule. | |
Args: | |
not_merged_stops: list of stops from one input feed that have not been | |
merged | |
zone_map: map from zone_id in the input feed to zone_id in the output feed | |
merge_map: map from Stop objects in the input feed to Stop objects in | |
the output feed | |
schedule: the input Schedule object | |
""" | |
# for the unmerged stops, we use an already mapped zone_id if possible | |
# if not, we generate a new one and add it to the map | |
for stop, migrated_stop in not_merged_stops: | |
if stop.zone_id in zone_map: | |
migrated_stop.zone_id = zone_map[stop.zone_id] | |
else: | |
migrated_stop.zone_id = self.feed_merger.GenerateId(stop.zone_id) | |
zone_map[stop.zone_id] = migrated_stop.zone_id | |
if stop.parent_station: | |
parent_original = schedule.GetStop(stop.parent_station) | |
migrated_stop.parent_station = merge_map[parent_original].stop_id | |
self.feed_merger.merged_schedule.AddStopObject(migrated_stop) | |
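The zone_id remapping for unmerged stops reduces to: reuse the mapping if the zone has been seen before, otherwise mint a fresh id and record it. A standalone sketch; `generate_id` is an illustrative stand-in for FeedMerger.GenerateId:

```python
def remap_zones(stops, zone_map, generate_id):
    # stops is a list of (stop_id, zone_id) pairs from one input feed;
    # zone_map persists across calls so stops sharing a zone stay together.
    out = {}
    for stop_id, zone_id in stops:
        if zone_id not in zone_map:
            zone_map[zone_id] = generate_id(zone_id)
        out[stop_id] = zone_map[zone_id]
    return out

_counter = [0]
def generate_id(prefix):
    # Hypothetical id generator: unique suffix on the original id.
    _counter[0] += 1
    return '%s_migrated_%d' % (prefix, _counter[0])
```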
class RouteMerger(DataSetMerger): | |
"""A DataSetMerger for routes.""" | |
ENTITY_TYPE_NAME = 'route' | |
FILE_NAME = 'routes.txt' | |
DATASET_NAME = 'Routes' | |
def _GetIter(self, schedule): | |
return schedule.GetRouteList() | |
def _GetById(self, schedule, route_id): | |
return schedule.GetRoute(route_id) | |
def _MergeEntities(self, a, b): | |
scheme = {'route_short_name': self._MergeIdentical, | |
'route_long_name': self._MergeIdentical, | |
'agency_id': self._MergeSameAgency, | |
'route_type': self._MergeIdentical, | |
'route_id': self._MergeIdentical, | |
'route_url': self._MergeOptional, | |
'route_color': self._MergeOptional, | |
'route_text_color': self._MergeOptional} | |
return self._SchemedMerge(scheme, a, b) | |
def _Migrate(self, entity, schedule, newid): | |
migrated_route = transitfeed.Route(field_dict=entity) | |
if newid: | |
migrated_route.route_id = self.feed_merger.GenerateId(entity.route_id) | |
if entity.agency_id: | |
original_agency = schedule.GetAgency(entity.agency_id) | |
else: | |
original_agency = schedule.GetDefaultAgency() | |
migrated_route.agency_id = original_agency._migrated_entity.agency_id | |
return migrated_route | |
def _Add(self, a, b, migrated_route): | |
self.feed_merger.Register(a, b, migrated_route) | |
self.feed_merger.merged_schedule.AddRouteObject(migrated_route) | |
def _GetId(self, entity): | |
return entity.route_id | |
def MergeDataSets(self): | |
self._MergeSameId() | |
return True | |
class ServicePeriodMerger(DataSetMerger): | |
"""A DataSetMerger for service periods. | |
Attributes: | |
require_disjoint_calendars: A boolean specifying whether to require | |
disjoint calendars when merging (True) or not (False). | |
""" | |
ENTITY_TYPE_NAME = 'service period' | |
FILE_NAME = 'calendar.txt/calendar_dates.txt' | |
DATASET_NAME = 'Service Periods' | |
def __init__(self, feed_merger): | |
DataSetMerger.__init__(self, feed_merger) | |
self.require_disjoint_calendars = True | |
def _ReportSameIdButNotMerged(self, entity_id, reason): | |
pass | |
def _GetIter(self, schedule): | |
return schedule.GetServicePeriodList() | |
def _GetById(self, schedule, service_id): | |
return schedule.GetServicePeriod(service_id) | |
def _MergeEntities(self, a, b): | |
"""Tries to merge two service periods. | |
Note: Currently this just raises a MergeError since service periods cannot | |
be merged. | |
Args: | |
a: The first service period. | |
b: The second service period. | |
Returns: | |
The merged service period. | |
Raises: | |
MergeError: When the service periods could not be merged. | |
""" | |
raise MergeError('Cannot merge service periods') | |
def _Migrate(self, original_service_period, schedule, newid): | |
migrated_service_period = transitfeed.ServicePeriod() | |
migrated_service_period.day_of_week = list( | |
original_service_period.day_of_week) | |
migrated_service_period.start_date = original_service_period.start_date | |
migrated_service_period.end_date = original_service_period.end_date | |
migrated_service_period.date_exceptions = dict( | |
original_service_period.date_exceptions) | |
if newid: | |
migrated_service_period.service_id = self.feed_merger.GenerateId( | |
original_service_period.service_id) | |
else: | |
migrated_service_period.service_id = original_service_period.service_id | |
return migrated_service_period | |
def _Add(self, a, b, migrated_service_period): | |
self.feed_merger.Register(a, b, migrated_service_period) | |
self.feed_merger.merged_schedule.AddServicePeriodObject( | |
migrated_service_period) | |
def _GetId(self, entity): | |
return entity.service_id | |
def MergeDataSets(self): | |
if self.require_disjoint_calendars and not self.CheckDisjointCalendars(): | |
self.feed_merger.problem_reporter.CalendarsNotDisjoint(self) | |
return False | |
self._MergeSameId() | |
self.feed_merger.problem_reporter.MergeNotImplemented(self) | |
return True | |
def DisjoinCalendars(self, cutoff): | |
"""Forces the old and new calendars to be disjoint about a cutoff date. | |
This truncates the service periods of the old schedule so that service | |
stops one day before the given cutoff date and truncates the new schedule | |
so that service only begins on the cutoff date. | |
Args: | |
cutoff: The cutoff date as a string in YYYYMMDD format. The timezone | |
is the same as used in the calendar.txt file. | |
""" | |
def TruncatePeriod(service_period, start, end): | |
"""Truncate the service period to into the range [start, end]. | |
Args: | |
service_period: The service period to truncate. | |
start: The start date as a string in YYYYMMDD format. | |
end: The end date as a string in YYYYMMDD format. | |
""" | |
service_period.start_date = max(service_period.start_date, start) | |
service_period.end_date = min(service_period.end_date, end) | |
dates_to_delete = [] | |
for k in service_period.date_exceptions: | |
if (k < start) or (k > end): | |
dates_to_delete.append(k) | |
for k in dates_to_delete: | |
del service_period.date_exceptions[k] | |
# find the date one day before cutoff | |
year = int(cutoff[:4]) | |
month = int(cutoff[4:6]) | |
day = int(cutoff[6:8]) | |
cutoff_date = datetime.date(year, month, day) | |
one_day_delta = datetime.timedelta(days=1) | |
before = (cutoff_date - one_day_delta).strftime('%Y%m%d') | |
for a in self.feed_merger.a_schedule.GetServicePeriodList(): | |
TruncatePeriod(a, '0'*8, before)
for b in self.feed_merger.b_schedule.GetServicePeriodList(): | |
TruncatePeriod(b, cutoff, '9'*8) | |
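The cutoff arithmetic above can be checked in isolation: the old feed's service ends the day before the cutoff, the new feed's begins on it, and because YYYYMMDD strings sort lexicographically the clamping works on plain strings:

```python
import datetime

def split_dates(cutoff):
    # Returns (last day of the old feed, first day of the new feed),
    # both as YYYYMMDD strings.
    d = datetime.datetime.strptime(cutoff, '%Y%m%d').date()
    before = (d - datetime.timedelta(days=1)).strftime('%Y%m%d')
    return before, cutoff

def truncate(start, end, lo, hi):
    # Clamp [start, end] into [lo, hi]; YYYYMMDD strings compare
    # correctly as strings, so max/min apply directly.
    return max(start, lo), min(end, hi)
```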
def CheckDisjointCalendars(self): | |
"""Check whether any old service periods intersect with any new ones. | |
This is a rather coarse check based on | |
transitfeed.ServicePeriod.GetDateRange.
Returns: | |
True if the calendars are disjoint or False if not. | |
""" | |
# TODO: Do an exact check here. | |
a_service_periods = self.feed_merger.a_schedule.GetServicePeriodList() | |
b_service_periods = self.feed_merger.b_schedule.GetServicePeriodList() | |
for a_service_period in a_service_periods: | |
a_start, a_end = a_service_period.GetDateRange() | |
for b_service_period in b_service_periods: | |
b_start, b_end = b_service_period.GetDateRange() | |
overlap_start = max(a_start, b_start) | |
overlap_end = min(a_end, b_end) | |
if overlap_end >= overlap_start: | |
return False | |
return True | |
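The disjointness test is pairwise interval overlap: two date ranges intersect iff the larger of the starts is no later than the smaller of the ends. A standalone sketch of that check:

```python
def ranges_overlap(a_range, b_range):
    # Each range is a (start, end) pair of YYYYMMDD strings, inclusive.
    (a_start, a_end), (b_start, b_end) = a_range, b_range
    return min(a_end, b_end) >= max(a_start, b_start)

def calendars_disjoint(a_ranges, b_ranges):
    # Coarse check, as in CheckDisjointCalendars: any overlapping pair
    # makes the calendars non-disjoint.
    return not any(ranges_overlap(a, b) for a in a_ranges for b in b_ranges)
```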
def GetMergeStats(self): | |
return None | |
class FareMerger(DataSetMerger): | |
"""A DataSetMerger for fares.""" | |
ENTITY_TYPE_NAME = 'fare attribute' | |
FILE_NAME = 'fare_attributes.txt' | |
DATASET_NAME = 'Fares' | |
def _GetIter(self, schedule): | |
return schedule.GetFareAttributeList() | |
def _GetById(self, schedule, fare_id): | |
return schedule.GetFareAttribute(fare_id) | |
def _MergeEntities(self, a, b): | |
"""Merges the fares if all the attributes are the same.""" | |
scheme = {'price': self._MergeIdentical, | |
'currency_type': self._MergeIdentical, | |
'payment_method': self._MergeIdentical, | |
'transfers': self._MergeIdentical, | |
'transfer_duration': self._MergeIdentical} | |
return self._SchemedMerge(scheme, a, b) | |
def _Migrate(self, original_fare, schedule, newid): | |
migrated_fare = transitfeed.FareAttribute( | |
field_dict=original_fare) | |
if newid: | |
migrated_fare.fare_id = self.feed_merger.GenerateId( | |
original_fare.fare_id) | |
return migrated_fare | |
def _Add(self, a, b, migrated_fare): | |
self.feed_merger.Register(a, b, migrated_fare) | |
self.feed_merger.merged_schedule.AddFareAttributeObject(migrated_fare) | |
def _GetId(self, fare): | |
return fare.fare_id | |
def MergeDataSets(self): | |
num_merged = self._MergeSameId() | |
print 'Fares merged: %d of %d, %d' % ( | |
num_merged, | |
len(self.feed_merger.a_schedule.GetFareAttributeList()), | |
len(self.feed_merger.b_schedule.GetFareAttributeList())) | |
return True | |
class TransferMerger(DataSetMerger): | |
"""A DataSetMerger for transfers. | |
Copy every transfer from the a/old and b/new schedules into the merged | |
schedule, translating from_stop_id and to_stop_id. Where a transfer ID is | |
found in both source schedules only the one from the b/new schedule is | |
migrated. | |
Only one transfer is processed per ID. Duplicates within a schedule are | |
ignored.""" | |
ENTITY_TYPE_NAME = 'transfer' | |
FILE_NAME = 'transfers.txt' | |
DATASET_NAME = 'Transfers' | |
def _GetIter(self, schedule): | |
return schedule.GetTransferIter() | |
def _GetId(self, transfer): | |
return transfer._ID() | |
def _Migrate(self, original_transfer, schedule): | |
# Make a copy of the original and then fix the stop_id references. | |
migrated_transfer = transitfeed.Transfer(field_dict=original_transfer) | |
if original_transfer.from_stop_id:
migrated_transfer.from_stop_id = schedule.GetStop(
original_transfer.from_stop_id)._migrated_entity.stop_id
if original_transfer.to_stop_id:
migrated_transfer.to_stop_id = schedule.GetStop(
original_transfer.to_stop_id)._migrated_entity.stop_id
return migrated_transfer | |
def _Add(self, a, b, migrated_transfer): | |
self.feed_merger.Register(a, b, migrated_transfer) | |
self.feed_merger.merged_schedule.AddTransferObject(migrated_transfer) | |
def MergeDataSets(self): | |
# If both schedules contain rows with equivalent from_stop_id and | |
# to_stop_id but different transfer_type or min_transfer_time only the | |
# transfer from b will be in the output. | |
self._MergeByIdKeepNew() | |
print 'Transfers merged: %d of %d, %d' % ( | |
self._num_merged, | |
# http://mail.python.org/pipermail/baypiggies/2008-August/003817.html | |
# claims this is a good way to find number of items in an iterable. | |
sum(1 for _ in self.feed_merger.a_schedule.GetTransferIter()), | |
sum(1 for _ in self.feed_merger.b_schedule.GetTransferIter())) | |
return True | |
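The counting idiom cited above generalizes: `sum(1 for _ in iterable)` counts the items of a one-shot iterator without materializing a list. Wrapped as a helper:

```python
def count(iterable):
    # Consumes the iterable; each item contributes 1 to the sum.
    return sum(1 for _ in iterable)
```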
class ShapeMerger(DataSetMerger): | |
"""A DataSetMerger for shapes. | |
In this implementation, merging shapes means just taking the new shape. | |
The only conditions for a merge are that the shape_ids are the same and | |
the endpoints of the old and new shapes are no further than | |
largest_shape_distance apart. | |
Attributes: | |
largest_shape_distance: The largest distance, in metres, allowed between
the endpoints of two shapes for them to be merged.
""" | |
ENTITY_TYPE_NAME = 'shape' | |
FILE_NAME = 'shapes.txt' | |
DATASET_NAME = 'Shapes' | |
largest_shape_distance = 10.0 | |
def SetLargestShapeDistance(self, distance): | |
"""Sets largest_shape_distance.""" | |
self.largest_shape_distance = distance | |
def _GetIter(self, schedule): | |
return schedule.GetShapeList() | |
def _GetById(self, schedule, shape_id): | |
return schedule.GetShape(shape_id) | |
def _MergeEntities(self, a, b): | |
"""Merges the shapes by taking the new shape. | |
Args: | |
a: The first transitfeed.Shape instance. | |
b: The second transitfeed.Shape instance. | |
Returns: | |
The merged shape. | |
Raises: | |
MergeError: If the ids are different or if the endpoints are further | |
than largest_shape_distance apart. | |
""" | |
if a.shape_id != b.shape_id: | |
raise MergeError('shape_id must be the same') | |
distance = max(ApproximateDistanceBetweenPoints(a.points[0][:2], | |
b.points[0][:2]), | |
ApproximateDistanceBetweenPoints(a.points[-1][:2], | |
b.points[-1][:2])) | |
if distance > self.largest_shape_distance: | |
raise MergeError('The shape endpoints are too far away: %.1fm ' | |
'(largest_shape_distance is %.1fm)' % | |
(distance, self.largest_shape_distance)) | |
return self._Migrate(b, self.feed_merger.b_schedule, False) | |
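The endpoint gate compares only the first and last points of each shape. A self-contained sketch using a haversine distance (transitfeed's ApproximateDistanceBetweenPoints uses a comparable spherical approximation; this is not its exact implementation):

```python
import math

def distance_m(p1, p2):
    # Haversine distance in metres between two (lat, lon) pairs in degrees.
    lat1, lon1 = map(math.radians, p1)
    lat2, lon2 = map(math.radians, p2)
    h = (math.sin((lat2 - lat1) / 2) ** 2 +
         math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(h))

def endpoints_close(a_points, b_points, largest_shape_distance=10.0):
    # Points are (lat, lon, dist) tuples; only the first and last point of
    # each shape matter, as in ShapeMerger._MergeEntities.
    d = max(distance_m(a_points[0][:2], b_points[0][:2]),
            distance_m(a_points[-1][:2], b_points[-1][:2]))
    return d <= largest_shape_distance
```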
def _Migrate(self, original_shape, schedule, newid): | |
migrated_shape = transitfeed.Shape(original_shape.shape_id) | |
if newid: | |
migrated_shape.shape_id = self.feed_merger.GenerateId( | |
original_shape.shape_id) | |
for (lat, lon, dist) in original_shape.points: | |
migrated_shape.AddPoint(lat=lat, lon=lon, distance=dist) | |
return migrated_shape | |
def _Add(self, a, b, migrated_shape): | |
self.feed_merger.Register(a, b, migrated_shape) | |
self.feed_merger.merged_schedule.AddShapeObject(migrated_shape) | |
def _GetId(self, shape): | |
return shape.shape_id | |
def MergeDataSets(self): | |
self._MergeSameId() | |
return True | |
class TripMerger(DataSetMerger): | |
"""A DataSetMerger for trips. | |
This implementation makes no attempt to merge trips; it simply migrates
them all to the merged feed.
""" | |
ENTITY_TYPE_NAME = 'trip' | |
FILE_NAME = 'trips.txt' | |
DATASET_NAME = 'Trips' | |
def _ReportSameIdButNotMerged(self, trip_id, reason): | |
pass | |
def _GetIter(self, schedule): | |
return schedule.GetTripList() | |
def _GetById(self, schedule, trip_id): | |
return schedule.GetTrip(trip_id) | |
def _MergeEntities(self, a, b): | |
"""Raises a MergeError because currently trips cannot be merged.""" | |
raise MergeError('Cannot merge trips') | |
def _Migrate(self, original_trip, schedule, newid): | |
migrated_trip = transitfeed.Trip(field_dict=original_trip) | |
# Make new trip_id first. AddTripObject reports a problem if it conflicts | |
# with an existing id. | |
if newid: | |
migrated_trip.trip_id = self.feed_merger.GenerateId( | |
original_trip.trip_id) | |
# Need to add trip to schedule before copying stoptimes | |
self.feed_merger.merged_schedule.AddTripObject(migrated_trip, | |
validate=False) | |
if schedule == self.feed_merger.a_schedule: | |
merge_map = self.feed_merger.a_merge_map | |
else: | |
merge_map = self.feed_merger.b_merge_map | |
original_route = schedule.GetRoute(original_trip.route_id) | |
migrated_trip.route_id = merge_map[original_route].route_id | |
original_service_period = schedule.GetServicePeriod( | |
original_trip.service_id) | |
migrated_trip.service_id = merge_map[original_service_period].service_id | |
if original_trip.block_id: | |
migrated_trip.block_id = '%s_%s' % ( | |
self.feed_merger.GetScheduleName(schedule), | |
original_trip.block_id) | |
if original_trip.shape_id: | |
original_shape = schedule.GetShape(original_trip.shape_id) | |
migrated_trip.shape_id = merge_map[original_shape].shape_id | |
for original_stop_time in original_trip.GetStopTimes(): | |
migrated_stop_time = transitfeed.StopTime( | |
None, | |
merge_map[original_stop_time.stop], | |
original_stop_time.arrival_time, | |
original_stop_time.departure_time, | |
original_stop_time.stop_headsign, | |
original_stop_time.pickup_type, | |
original_stop_time.drop_off_type, | |
original_stop_time.shape_dist_traveled, | |
original_stop_time.arrival_secs, | |
original_stop_time.departure_secs) | |
migrated_trip.AddStopTimeObject(migrated_stop_time) | |
for headway_period in original_trip.GetFrequencyTuples(): | |
migrated_trip.AddFrequency(*headway_period) | |
return migrated_trip | |
def _Add(self, a, b, migrated_trip): | |
# Validate now, since it wasn't done in _Migrate | |
migrated_trip.Validate(self.feed_merger.merged_schedule.problem_reporter) | |
self.feed_merger.Register(a, b, migrated_trip) | |
def _GetId(self, trip): | |
return trip.trip_id | |
def MergeDataSets(self): | |
self._MergeSameId() | |
self.feed_merger.problem_reporter.MergeNotImplemented(self) | |
return True | |
def GetMergeStats(self): | |
return None | |
class FareRuleMerger(DataSetMerger): | |
"""A DataSetMerger for fare rules.""" | |
ENTITY_TYPE_NAME = 'fare rule' | |
FILE_NAME = 'fare_rules.txt' | |
DATASET_NAME = 'Fare Rules' | |
def MergeDataSets(self): | |
"""Merge the fare rule datasets. | |
The fare rules are first migrated. Merging is done by removing any | |
duplicate rules. | |
Returns: | |
True since fare rules can always be merged. | |
""" | |
rules = set() | |
for (schedule, merge_map, zone_map) in ([self.feed_merger.a_schedule, | |
self.feed_merger.a_merge_map, | |
self.feed_merger.a_zone_map], | |
[self.feed_merger.b_schedule, | |
self.feed_merger.b_merge_map, | |
self.feed_merger.b_zone_map]): | |
for fare in schedule.GetFareAttributeList(): | |
for fare_rule in fare.GetFareRuleList(): | |
fare_id = merge_map[ | |
schedule.GetFareAttribute(fare_rule.fare_id)].fare_id | |
route_id = (fare_rule.route_id and | |
merge_map[schedule.GetRoute(fare_rule.route_id)].route_id) | |
origin_id = (fare_rule.origin_id and | |
zone_map[fare_rule.origin_id]) | |
destination_id = (fare_rule.destination_id and | |
zone_map[fare_rule.destination_id]) | |
contains_id = (fare_rule.contains_id and | |
zone_map[fare_rule.contains_id]) | |
rules.add((fare_id, route_id, origin_id, destination_id, | |
contains_id)) | |
for fare_rule_tuple in rules: | |
migrated_fare_rule = transitfeed.FareRule(*fare_rule_tuple) | |
self.feed_merger.merged_schedule.AddFareRuleObject(migrated_fare_rule) | |
if rules: | |
self.feed_merger.problem_reporter.FareRulesBroken(self) | |
print 'Fare Rules: union has %d fare rules' % len(rules) | |
return True | |
def GetMergeStats(self): | |
return None | |
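The dedup logic in FareRuleMerger.MergeDataSets works because each rule is reduced to a hashable tuple of ids, so putting the tuples in a set drops exact duplicates that appear in both feeds. A minimal standalone sketch of that set-based approach (the fare/route/zone ids are made up and not tied to the transitfeed classes):

```python
# Each rule is reduced to a tuple of its identifying fields:
# (fare_id, route_id, origin_id, destination_id, contains_id).
rules_a = [('fare1', 'route1', None, None, None),
           ('fare2', None, 'zone1', 'zone2', None)]
rules_b = [('fare1', 'route1', None, None, None)]  # duplicate of a rule in a

# Adding tuples to a set silently discards exact duplicates.
merged = set()
for rule in rules_a + rules_b:
    merged.add(rule)
```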
class FeedMerger(object): | |
"""A class for merging two whole feeds. | |
This class takes two instances of transitfeed.Schedule and uses | |
DataSetMerger instances to merge the feeds and produce the resultant | |
merged feed. | |
Attributes: | |
a_schedule: The old transitfeed.Schedule instance. | |
b_schedule: The new transitfeed.Schedule instance. | |
problem_reporter: The merge problem reporter. | |
merged_schedule: The merged transitfeed.Schedule instance. | |
a_merge_map: A map from old entities to merged entities. | |
b_merge_map: A map from new entities to merged entities. | |
a_zone_map: A map from old zone ids to merged zone ids. | |
b_zone_map: A map from new zone ids to merged zone ids. | |
""" | |
def __init__(self, a_schedule, b_schedule, merged_schedule, | |
problem_reporter): | |
"""Initialise the merger. | |
Once this initialiser has been called, a_schedule and b_schedule should | |
not be modified. | |
Args: | |
a_schedule: The old schedule, an instance of transitfeed.Schedule. | |
b_schedule: The new schedule, an instance of transitfeed.Schedule. | |
merged_schedule: The merged schedule, an instance of transitfeed.Schedule, | |
which will be updated with the merged data. | |
problem_reporter: The problem reporter, an instance of | |
transitfeed.ProblemReporter. | |
""" | |
self.a_schedule = a_schedule | |
self.b_schedule = b_schedule | |
self.merged_schedule = merged_schedule | |
self.a_merge_map = {} | |
self.b_merge_map = {} | |
self.a_zone_map = {} | |
self.b_zone_map = {} | |
self._mergers = [] | |
self._idnum = max(self._FindLargestIdPostfixNumber(self.a_schedule), | |
self._FindLargestIdPostfixNumber(self.b_schedule)) | |
self.problem_reporter = problem_reporter | |
def _FindLargestIdPostfixNumber(self, schedule): | |
"""Finds the largest integer used as the ending of an id in the schedule. | |
Args: | |
schedule: The schedule to check. | |
Returns: | |
The maximum integer used as an ending for an id. | |
""" | |
postfix_number_re = re.compile(r'(\d+)$') | |
def ExtractPostfixNumber(entity_id): | |
"""Try to extract an integer from the end of entity_id. | |
If entity_id is None or if there is no integer ending the id, zero is | |
returned. | |
Args: | |
entity_id: An id string or None. | |
Returns: | |
An integer ending the entity_id or zero. | |
""" | |
if entity_id is None: | |
return 0 | |
match = postfix_number_re.search(entity_id) | |
if match is not None: | |
return int(match.group(1)) | |
else: | |
return 0 | |
id_data_sets = {'agency_id': schedule.GetAgencyList(), | |
'stop_id': schedule.GetStopList(), | |
'route_id': schedule.GetRouteList(), | |
'trip_id': schedule.GetTripList(), | |
'service_id': schedule.GetServicePeriodList(), | |
'fare_id': schedule.GetFareAttributeList(), | |
'shape_id': schedule.GetShapeList()} | |
max_postfix_number = 0 | |
for id_name, entity_list in id_data_sets.items(): | |
for entity in entity_list: | |
entity_id = getattr(entity, id_name) | |
postfix_number = ExtractPostfixNumber(entity_id) | |
max_postfix_number = max(max_postfix_number, postfix_number) | |
return max_postfix_number | |
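The postfix scan above can be exercised on its own; this sketch mirrors the regex logic outside the FeedMerger class (the sample ids are made up):

```python
import re

# A raw string keeps the \d escape intact; the $ anchors the digits to the
# very end of the id.
postfix_number_re = re.compile(r'(\d+)$')

def extract_postfix_number(entity_id):
    """Return the trailing integer of entity_id, or 0 if absent or None."""
    if entity_id is None:
        return 0
    match = postfix_number_re.search(entity_id)
    return int(match.group(1)) if match else 0

# The merger takes the maximum over every id in both schedules.
ids = ['stop_12', 'route7', 'trip', None, 'shape_005']
largest = max(extract_postfix_number(i) for i in ids)
```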
def GetScheduleName(self, schedule): | |
"""Returns a single letter identifier for the schedule. | |
This only works for the old and new schedules, which return 'a' and 'b' | |
respectively. Such identifiers are used when generating ids. | |
Args: | |
schedule: The transitfeed.Schedule instance. | |
Returns: | |
The schedule identifier. | |
Raises: | |
KeyError: schedule is not the old or new schedule. | |
""" | |
return {self.a_schedule: 'a', self.b_schedule: 'b'}[schedule] | |
def GenerateId(self, entity_id=None): | |
"""Generate a unique id based on the given id. | |
This is done by appending a counter which is then incremented. The | |
counter is initialised at the maximum number used as an ending for | |
any id in the old and new schedules. | |
Args: | |
entity_id: The base id string. This is allowed to be None. | |
Returns: | |
The generated id. | |
""" | |
self._idnum += 1 | |
if entity_id: | |
return '%s_merged_%d' % (entity_id, self._idnum) | |
else: | |
return 'merged_%d' % self._idnum | |
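GenerateId's counter scheme can be sketched in isolation; this hypothetical IdGenerator assumes the counter was seeded with the largest postfix found by _FindLargestIdPostfixNumber, so fresh ids cannot collide with existing numeric endings:

```python
class IdGenerator(object):
    """Sketch of the counter-based id scheme used by FeedMerger.GenerateId."""

    def __init__(self, start):
        # start should be the largest numeric id postfix in either feed.
        self._idnum = start

    def generate(self, entity_id=None):
        # Increment first, so every call hands out a fresh number.
        self._idnum += 1
        if entity_id:
            return '%s_merged_%d' % (entity_id, self._idnum)
        return 'merged_%d' % self._idnum

gen = IdGenerator(12)       # 12 = largest postfix in either feed
a = gen.generate('stop_3')  # based on an existing id
b = gen.generate()          # no base id given
```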
def Register(self, a, b, migrated_entity): | |
"""Registers a merge mapping. | |
If a and b are both not None, this means that entities a and b were merged | |
to produce migrated_entity. If only one of a or b is not None, then the | |
entity was simply migrated rather than merged. | |
The effect of a call to register is to update a_merge_map and b_merge_map | |
according to the merge. Also the private attributes _migrated_entity of a | |
and b are set to migrated_entity. | |
Args: | |
a: The entity from the old feed or None. | |
b: The entity from the new feed or None. | |
migrated_entity: The migrated entity. | |
""" | |
# There are a few places where code needs to find the corresponding | |
# migrated entity of an object without knowing in which original schedule | |
the entity started. In that case a_merge_map and b_merge_map both have to be | |
# checked. Use of the _migrated_entity attribute allows the migrated entity | |
# to be directly found without the schedule. The merge maps also require | |
that all objects be hashable. GenericGTFSObject is hashable at the moment, but | |
# this is a bug. See comment in transitfeed.GenericGTFSObject. | |
if a is not None: | |
self.a_merge_map[a] = migrated_entity | |
a._migrated_entity = migrated_entity | |
if b is not None: | |
self.b_merge_map[b] = migrated_entity | |
b._migrated_entity = migrated_entity | |
def AddMerger(self, merger): | |
"""Add a DataSetMerger to be run by Merge(). | |
Args: | |
merger: The DataSetMerger instance. | |
""" | |
self._mergers.append(merger) | |
def AddDefaultMergers(self): | |
"""Adds the default DataSetMergers defined in this module.""" | |
self.AddMerger(AgencyMerger(self)) | |
self.AddMerger(StopMerger(self)) | |
self.AddMerger(RouteMerger(self)) | |
self.AddMerger(ServicePeriodMerger(self)) | |
self.AddMerger(FareMerger(self)) | |
self.AddMerger(ShapeMerger(self)) | |
self.AddMerger(TripMerger(self)) | |
self.AddMerger(FareRuleMerger(self)) | |
def GetMerger(self, cls): | |
"""Looks for an added DataSetMerger derived from the given class. | |
Args: | |
cls: A class derived from DataSetMerger. | |
Returns: | |
The matching DataSetMerger instance. | |
Raises: | |
LookupError: No matching DataSetMerger has been added. | |
""" | |
for merger in self._mergers: | |
if isinstance(merger, cls): | |
return merger | |
raise LookupError('No matching DataSetMerger found') | |
def GetMergerList(self): | |
"""Returns the list of DataSetMerger instances that have been added.""" | |
return self._mergers | |
def MergeSchedules(self): | |
"""Merge the schedules. | |
This is done by running the DataSetMergers that have been added with | |
AddMerger() in the order that they were added. | |
Returns: | |
True if the merge was successful. | |
""" | |
for merger in self._mergers: | |
if not merger.MergeDataSets(): | |
return False | |
return True | |
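MergeSchedules simply runs each merger in order and aborts on the first failure; a standalone sketch with hypothetical stand-in mergers:

```python
class FakeMerger(object):
    """Hypothetical stand-in for a DataSetMerger."""

    def __init__(self, name, ok, log):
        self.name, self.ok, self.log = name, ok, log

    def MergeDataSets(self):
        # Record that this merger ran, then report success or failure.
        self.log.append(self.name)
        return self.ok

def merge_schedules(mergers):
    # Run each merger in the order added; a False result aborts the merge.
    for merger in mergers:
        if not merger.MergeDataSets():
            return False
    return True

log = []
result = merge_schedules([FakeMerger('agency', True, log),
                          FakeMerger('stop', False, log),
                          FakeMerger('route', True, log)])
```

Note that the third merger never runs because the second one failed.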
def GetMergedSchedule(self): | |
"""Returns the merged schedule. | |
This will be empty before MergeSchedules() is called. | |
Returns: | |
The merged schedule. | |
""" | |
return self.merged_schedule | |
def main(): | |
"""Run the merge driver program.""" | |
usage = \ | |
"""%prog [options] <input GTFS a.zip> <input GTFS b.zip> <output GTFS.zip> | |
Merges <input GTFS a.zip> and <input GTFS b.zip> into a new GTFS file | |
<output GTFS.zip>. | |
For more information see | |
http://code.google.com/p/googletransitdatafeed/wiki/Merge | |
""" | |
parser = util.OptionParserLongError( | |
usage=usage, version='%prog '+transitfeed.__version__) | |
parser.add_option('--cutoff_date', | |
dest='cutoff_date', | |
default=None, | |
help='a transition date from the old feed to the new ' | |
'feed in the format YYYYMMDD') | |
parser.add_option('--largest_stop_distance', | |
dest='largest_stop_distance', | |
default=StopMerger.largest_stop_distance, | |
help='the furthest distance two stops can be apart and ' | |
'still be merged, in metres') | |
parser.add_option('--largest_shape_distance', | |
dest='largest_shape_distance', | |
default=ShapeMerger.largest_shape_distance, | |
help='the furthest distance the endpoints of two shapes ' | |
'can be apart and the shape still be merged, in metres') | |
parser.add_option('--html_output_path', | |
dest='html_output_path', | |
default='merge-results.html', | |
help='write the html output to this file') | |
parser.add_option('--no_browser', | |
dest='no_browser', | |
action='store_true', | |
help='prevents the merge results from being opened in a ' | |
'browser') | |
parser.add_option('-m', '--memory_db', dest='memory_db', action='store_true', | |
help='Use in-memory sqlite db instead of a temporary file. ' | |
'It is faster but uses more RAM.') | |
parser.set_defaults(memory_db=False) | |
(options, args) = parser.parse_args() | |
if len(args) != 3: | |
parser.error('You did not provide all required command line arguments.') | |
old_feed_path = os.path.abspath(args[0]) | |
new_feed_path = os.path.abspath(args[1]) | |
merged_feed_path = os.path.abspath(args[2]) | |
if old_feed_path.find("IWantMyCrash") != -1: | |
# See test/testmerge.py | |
raise Exception('For testing the merge crash handler.') | |
a_schedule = LoadWithoutErrors(old_feed_path, options.memory_db) | |
b_schedule = LoadWithoutErrors(new_feed_path, options.memory_db) | |
merged_schedule = transitfeed.Schedule(memory_db=options.memory_db) | |
accumulator = HTMLProblemAccumulator() | |
problem_reporter = MergeProblemReporter(accumulator) | |
feed_merger = FeedMerger(a_schedule, b_schedule, merged_schedule, | |
problem_reporter) | |
feed_merger.AddDefaultMergers() | |
feed_merger.GetMerger(StopMerger).SetLargestStopDistance(float( | |
options.largest_stop_distance)) | |
feed_merger.GetMerger(ShapeMerger).SetLargestShapeDistance(float( | |
options.largest_shape_distance)) | |
if options.cutoff_date is not None: | |
service_period_merger = feed_merger.GetMerger(ServicePeriodMerger) | |
service_period_merger.DisjoinCalendars(options.cutoff_date) | |
if feed_merger.MergeSchedules(): | |
feed_merger.GetMergedSchedule().WriteGoogleTransitFeed(merged_feed_path) | |
else: | |
merged_feed_path = None | |
output_file = open(options.html_output_path, 'w') | |
accumulator.WriteOutput(output_file, feed_merger, | |
old_feed_path, new_feed_path, merged_feed_path) | |
output_file.close() | |
if not options.no_browser: | |
webbrowser.open('file://%s' % os.path.abspath(options.html_output_path)) | |
if __name__ == '__main__': | |
util.RunWithCrashHandler(main) | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
""" | |
An example application that uses the transitfeed module. | |
You must provide a Google Maps API key. | |
""" | |
import BaseHTTPServer, sys, urlparse | |
import bisect | |
from gtfsscheduleviewer.marey_graph import MareyGraph | |
import gtfsscheduleviewer | |
import mimetypes | |
import os.path | |
import re | |
import signal | |
import simplejson | |
import socket | |
import time | |
import transitfeed | |
from transitfeed import util | |
import urllib | |
# By default Windows kills Python with Ctrl+Break. Instead make Ctrl+Break | |
# raise a KeyboardInterrupt. | |
if hasattr(signal, 'SIGBREAK'): | |
signal.signal(signal.SIGBREAK, signal.default_int_handler) | |
mimetypes.add_type('text/plain', '.vbs') | |
class ResultEncoder(simplejson.JSONEncoder): | |
def default(self, obj): | |
try: | |
iterable = iter(obj) | |
except TypeError: | |
pass | |
else: | |
return list(iterable) | |
return simplejson.JSONEncoder.default(self, obj) | |
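ResultEncoder's default() hook converts any iterable to a list before encoding. The same hook works with the standard library json module, used here instead of simplejson to keep the example self-contained:

```python
import json

class IterableEncoder(json.JSONEncoder):
    def default(self, obj):
        # Try to iterate; non-iterables fall through to the base class,
        # which raises TypeError for genuinely unknown types.
        try:
            iterable = iter(obj)
        except TypeError:
            pass
        else:
            return list(iterable)
        return json.JSONEncoder.default(self, obj)

# An iterator is not JSON-serialisable by default; the hook turns it
# into a list on the fly.
encoded = IterableEncoder().encode({'stops': iter(['s1', 's2'])})
```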
# Code taken from | |
# http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/425210/index_txt | |
# An alternate approach is shown at | |
# http://mail.python.org/pipermail/python-list/2003-July/212751.html | |
# but it requires multiple threads. A sqlite object can only be used from one | |
# thread. | |
class StoppableHTTPServer(BaseHTTPServer.HTTPServer): | |
def server_bind(self): | |
BaseHTTPServer.HTTPServer.server_bind(self) | |
self.socket.settimeout(1) | |
self._run = True | |
def get_request(self): | |
while self._run: | |
try: | |
sock, addr = self.socket.accept() | |
sock.settimeout(None) | |
return (sock, addr) | |
except socket.timeout: | |
pass | |
def stop(self): | |
self._run = False | |
def serve(self): | |
while self._run: | |
self.handle_request() | |
def StopToTuple(stop): | |
"""Return tuple as expected by javascript function addStopMarkerFromList""" | |
return (stop.stop_id, stop.stop_name, float(stop.stop_lat), | |
float(stop.stop_lon), stop.location_type) | |
class ScheduleRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler): | |
def do_GET(self): | |
scheme, host, path, path_params, query, fragment = urlparse.urlparse(self.path) | |
parsed_params = {} | |
for k in query.split('&'): | |
k = urllib.unquote(k) | |
if '=' in k: | |
k, v = k.split('=', 1) | |
parsed_params[k] = unicode(v, 'utf8') | |
else: | |
parsed_params[k] = '' | |
if path == '/': | |
return self.handle_GET_home() | |
m = re.match(r'/json/([a-z]{1,64})', path) | |
if m: | |
handler_name = 'handle_json_GET_%s' % m.group(1) | |
handler = getattr(self, handler_name, None) | |
if callable(handler): | |
return self.handle_json_wrapper_GET(handler, parsed_params) | |
# Restrict allowable file names to prevent relative path attacks etc | |
m = re.match(r'/file/([a-z0-9_-]{1,64}\.?[a-z0-9_-]{1,64})$', path) | |
if m and m.group(1): | |
try: | |
f, mime_type = self.OpenFile(m.group(1)) | |
return self.handle_static_file_GET(f, mime_type) | |
except IOError, e: | |
print "Error: unable to open %s" % m.group(1) | |
# Ignore and treat as 404 | |
m = re.match(r'/([a-z]{1,64})', path) | |
if m: | |
handler_name = 'handle_GET_%s' % m.group(1) | |
handler = getattr(self, handler_name, None) | |
if callable(handler): | |
return handler(parsed_params) | |
return self.handle_GET_default(parsed_params, path) | |
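The /json/ dispatch above maps a URL path to a method named handle_json_GET_&lt;name&gt; via getattr; the pattern in isolation, with a made-up handler:

```python
import re

class Dispatcher(object):
    def handle_json_GET_routes(self, params):
        # Hypothetical handler; the real server builds this from a schedule.
        return ['r1', 'r2']

    def dispatch(self, path, params):
        m = re.match(r'/json/([a-z]{1,64})', path)
        if m:
            # Look up the handler by name; getattr returns None if absent,
            # so unknown endpoints fall through instead of raising.
            handler = getattr(self, 'handle_json_GET_%s' % m.group(1), None)
            if callable(handler):
                return handler(params)
        return None

d = Dispatcher()
```

Restricting the captured name to `[a-z]{1,64}` also keeps requests from reaching arbitrary attributes of the handler object.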
def OpenFile(self, filename): | |
"""Try to open filename in the static files directory of this server. | |
Return a tuple (file object, string mime_type) or raise an exception.""" | |
(mime_type, encoding) = mimetypes.guess_type(filename) | |
assert mime_type | |
# A crude guess of when we should use binary mode. Without it non-unix | |
# platforms may corrupt binary files. | |
if mime_type.startswith('text/'): | |
mode = 'r' | |
else: | |
mode = 'rb' | |
return open(os.path.join(self.server.file_dir, filename), mode), mime_type | |
def handle_GET_default(self, parsed_params, path): | |
self.send_error(404) | |
def handle_static_file_GET(self, fh, mime_type): | |
content = fh.read() | |
self.send_response(200) | |
self.send_header('Content-Type', mime_type) | |
self.send_header('Content-Length', str(len(content))) | |
self.end_headers() | |
self.wfile.write(content) | |
def AllowEditMode(self): | |
return False | |
def handle_GET_home(self): | |
schedule = self.server.schedule | |
(min_lat, min_lon, max_lat, max_lon) = schedule.GetStopBoundingBox() | |
forbid_editing = ('true', 'false')[self.AllowEditMode()] | |
agency = ', '.join(a.agency_name for a in schedule.GetAgencyList()).encode('utf-8') | |
key = self.server.key | |
host = self.server.host | |
# A very simple template system. For a fixed set of values replace [xxx] | |
# with the value of local variable xxx | |
f, _ = self.OpenFile('index.html') | |
content = f.read() | |
for v in ('agency', 'min_lat', 'min_lon', 'max_lat', 'max_lon', 'key', | |
'host', 'forbid_editing'): | |
content = content.replace('[%s]' % v, str(locals()[v])) | |
self.send_response(200) | |
self.send_header('Content-Type', 'text/html') | |
self.send_header('Content-Length', str(len(content))) | |
self.end_headers() | |
self.wfile.write(content) | |
def handle_json_GET_routepatterns(self, params): | |
"""Given a route_id generate a list of patterns of the route. For each | |
pattern include some basic information and a few sample trips.""" | |
schedule = self.server.schedule | |
route = schedule.GetRoute(params.get('route', None)) | |
if not route: | |
self.send_error(404) | |
return | |
time = int(params.get('time', 0)) | |
date = params.get('date', "") | |
sample_size = 3 # For each pattern return the start time for this many trips | |
pattern_id_trip_dict = route.GetPatternIdTripDict() | |
patterns = [] | |
for pattern_id, trips in pattern_id_trip_dict.items(): | |
time_stops = trips[0].GetTimeStops() | |
if not time_stops: | |
continue | |
has_non_zero_trip_type = False | |
# Build a filtered list rather than removing entries from trips while | |
# iterating over it. | |
trips_with_service = [] | |
for trip in trips: | |
service_id = trip.service_id | |
service_period = schedule.GetServicePeriod(service_id) | |
if date and not service_period.IsActiveOn(date): | |
continue | |
trips_with_service.append(trip) | |
if trip['trip_type'] and trip['trip_type'] != '0': | |
has_non_zero_trip_type = True | |
# We're only interested in the trips that do run on the specified date | |
trips = trips_with_service | |
name = u'%s to %s, %d stops' % (time_stops[0][2].stop_name, time_stops[-1][2].stop_name, len(time_stops)) | |
transitfeed.SortListOfTripByTime(trips) | |
num_trips = len(trips) | |
if num_trips <= sample_size: | |
start_sample_index = 0 | |
num_after_sample = 0 | |
else: | |
# Will return sample_size trips that start after the 'time' param. | |
# Linear search because I couldn't find a built-in way to do a binary | |
# search with a custom key. | |
start_sample_index = len(trips) | |
for i, trip in enumerate(trips): | |
if trip.GetStartTime() >= time: | |
start_sample_index = i | |
break | |
num_after_sample = num_trips - (start_sample_index + sample_size) | |
if num_after_sample < 0: | |
# Less than sample_size trips start after 'time' so return all the | |
# last sample_size trips. | |
num_after_sample = 0 | |
start_sample_index = num_trips - sample_size | |
sample = [] | |
for t in trips[start_sample_index:start_sample_index + sample_size]: | |
sample.append( (t.GetStartTime(), t.trip_id) ) | |
patterns.append((name, pattern_id, start_sample_index, sample, | |
num_after_sample, (0,1)[has_non_zero_trip_type])) | |
patterns.sort() | |
return patterns | |
def handle_json_wrapper_GET(self, handler, parsed_params): | |
"""Call handler and output the return value in JSON.""" | |
schedule = self.server.schedule | |
result = handler(parsed_params) | |
content = ResultEncoder().encode(result) | |
self.send_response(200) | |
self.send_header('Content-Type', 'text/plain') | |
self.send_header('Content-Length', str(len(content))) | |
self.end_headers() | |
self.wfile.write(content) | |
def handle_json_GET_routes(self, params): | |
"""Return a list of all routes.""" | |
schedule = self.server.schedule | |
result = [] | |
for r in schedule.GetRouteList(): | |
result.append( (r.route_id, r.route_short_name, r.route_long_name) ) | |
result.sort(key = lambda x: x[1:3]) | |
return result | |
def handle_json_GET_routerow(self, params): | |
schedule = self.server.schedule | |
route = schedule.GetRoute(params.get('route', None)) | |
return [transitfeed.Route._FIELD_NAMES, route.GetFieldValuesTuple()] | |
def handle_json_GET_triprows(self, params): | |
"""Return a list of rows from the feed file that are related to this | |
trip.""" | |
schedule = self.server.schedule | |
try: | |
trip = schedule.GetTrip(params.get('trip', None)) | |
except KeyError: | |
# If a non-existent trip is searched for, return nothing. | |
return | |
route = schedule.GetRoute(trip.route_id) | |
trip_row = dict(trip.iteritems()) | |
route_row = dict(route.iteritems()) | |
return [['trips.txt', trip_row], ['routes.txt', route_row]] | |
def handle_json_GET_tripstoptimes(self, params): | |
schedule = self.server.schedule | |
try: | |
trip = schedule.GetTrip(params.get('trip')) | |
except KeyError: | |
# If a non-existent trip is searched for, return nothing. | |
return | |
time_stops = trip.GetTimeStops() | |
stops = [] | |
times = [] | |
for arr,dep,stop in time_stops: | |
stops.append(StopToTuple(stop)) | |
times.append(arr) | |
return [stops, times] | |
def handle_json_GET_tripshape(self, params): | |
schedule = self.server.schedule | |
try: | |
trip = schedule.GetTrip(params.get('trip')) | |
except KeyError: | |
# If a non-existent trip is searched for, return nothing. | |
return | |
points = [] | |
if trip.shape_id: | |
shape = schedule.GetShape(trip.shape_id) | |
for (lat, lon, dist) in shape.points: | |
points.append((lat, lon)) | |
else: | |
time_stops = trip.GetTimeStops() | |
for arr,dep,stop in time_stops: | |
points.append((stop.stop_lat, stop.stop_lon)) | |
return points | |
def handle_json_GET_neareststops(self, params): | |
"""Return a list of the nearest 'limit' stops to 'lat', 'lon'""" | |
schedule = self.server.schedule | |
lat = float(params.get('lat')) | |
lon = float(params.get('lon')) | |
limit = int(params.get('limit')) | |
stops = schedule.GetNearestStops(lat=lat, lon=lon, n=limit) | |
return [StopToTuple(s) for s in stops] | |
def handle_json_GET_boundboxstops(self, params): | |
"""Return a list of up to 'limit' stops within bounding box with 'n','e' | |
and 's','w' in the NE and SW corners. Does not handle boxes crossing | |
longitude line 180.""" | |
schedule = self.server.schedule | |
n = float(params.get('n')) | |
e = float(params.get('e')) | |
s = float(params.get('s')) | |
w = float(params.get('w')) | |
limit = int(params.get('limit')) | |
stops = schedule.GetStopsInBoundingBox(north=n, east=e, south=s, west=w, n=limit) | |
return [StopToTuple(s) for s in stops] | |
def handle_json_GET_stopsearch(self, params): | |
schedule = self.server.schedule | |
query = params.get('q', '').lower() | |
matches = [] | |
for s in schedule.GetStopList(): | |
if s.stop_id.lower().find(query) != -1 or s.stop_name.lower().find(query) != -1: | |
matches.append(StopToTuple(s)) | |
return matches | |
def handle_json_GET_stoptrips(self, params): | |
"""Given a stop_id and time in seconds since midnight return the next | |
trips to visit the stop.""" | |
schedule = self.server.schedule | |
stop = schedule.GetStop(params.get('stop', None)) | |
time = int(params.get('time', 0)) | |
date = params.get('date', "") | |
time_trips = stop.GetStopTimeTrips(schedule) | |
time_trips.sort() # OPT: use bisect.insort to make this O(N*ln(N)) -> O(N) | |
# Keep the first 5 after param 'time'. | |
# Need to make a tuple to find the correct bisect point. | |
time_trips = time_trips[bisect.bisect_left(time_trips, (time, 0)):] | |
time_trips = time_trips[:5] | |
# TODO: combine times for a route to show next 2 departure times | |
result = [] | |
for time, (trip, index), tp in time_trips: | |
service_id = trip.service_id | |
service_period = schedule.GetServicePeriod(service_id) | |
if date and not service_period.IsActiveOn(date): | |
continue | |
headsign = None | |
# Find the most recent headsign from the StopTime objects | |
for stoptime in trip.GetStopTimes()[index::-1]: | |
if stoptime.stop_headsign: | |
headsign = stoptime.stop_headsign | |
break | |
# If stop_headsign isn't found, look for a trip_headsign | |
if not headsign: | |
headsign = trip.trip_headsign | |
route = schedule.GetRoute(trip.route_id) | |
trip_name = '' | |
if route.route_short_name: | |
trip_name += route.route_short_name | |
if route.route_long_name: | |
if len(trip_name): | |
trip_name += " - " | |
trip_name += route.route_long_name | |
if headsign: | |
trip_name += " (Direction: %s)" % headsign | |
result.append((time, (trip.trip_id, trip_name, trip.service_id), tp)) | |
return result | |
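The bisect call above relies on a sentinel in the search key that sorts before any real entry with the same time; a standalone sketch using simplified (time, trip_id) pairs rather than the server's (time, (trip, index), tp) triples:

```python
import bisect

# Sorted (departure_secs, trip_id) pairs for one stop.
time_trips = [(100, 't1'), (200, 't2'), (200, 't3'), (300, 't4')]

def next_departures(time_trips, cutoff, limit=5):
    """Return up to `limit` departures at or after `cutoff` seconds.

    The empty-string sentinel in the search key sorts before every real
    trip_id, so bisect_left finds the first pair whose time is >= cutoff.
    """
    start = bisect.bisect_left(time_trips, (cutoff, ''))
    return time_trips[start:start + limit]
```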
def handle_GET_ttablegraph(self,params): | |
"""Draw a Marey graph in SVG for a pattern (collection of trips in a route | |
that visit the same sequence of stops).""" | |
schedule = self.server.schedule | |
marey = MareyGraph() | |
trip = schedule.GetTrip(params.get('trip', None)) | |
route = schedule.GetRoute(trip.route_id) | |
height = int(params.get('height', 300)) | |
if not route: | |
print 'no such route' | |
self.send_error(404) | |
return | |
pattern_id_trip_dict = route.GetPatternIdTripDict() | |
pattern_id = trip.pattern_id | |
if pattern_id not in pattern_id_trip_dict: | |
print 'no pattern %s found in %s' % (pattern_id, pattern_id_trip_dict.keys()) | |
self.send_error(404) | |
return | |
triplist = pattern_id_trip_dict[pattern_id] | |
pattern_start_time = min((t.GetStartTime() for t in triplist)) | |
pattern_end_time = max((t.GetEndTime() for t in triplist)) | |
marey.SetSpan(pattern_start_time,pattern_end_time) | |
marey.Draw(triplist[0].GetPattern(), triplist, height) | |
content = marey.Draw() | |
self.send_response(200) | |
self.send_header('Content-Type', 'image/svg+xml') | |
self.send_header('Content-Length', str(len(content))) | |
self.end_headers() | |
self.wfile.write(content) | |
def FindPy2ExeBase(): | |
"""If this is running in py2exe return the install directory else return | |
None""" | |
# py2exe puts gtfsscheduleviewer in library.zip. For py2exe setup.py is | |
# configured to put the data next to library.zip. | |
windows_ending = gtfsscheduleviewer.__file__.find('\\library.zip\\') | |
if windows_ending != -1: | |
return gtfsscheduleviewer.__file__[:windows_ending] | |
else: | |
return None | |
def FindDefaultFileDir(): | |
"""Return the path of the directory containing the static files. By default | |
the directory is called 'files'. The location depends on where setup.py put | |
it.""" | |
base = FindPy2ExeBase() | |
if base: | |
return os.path.join(base, 'schedule_viewer_files') | |
else: | |
# For all other distributions 'files' is in the gtfsscheduleviewer | |
# directory. | |
base = os.path.dirname(gtfsscheduleviewer.__file__) # Strip __init__.py | |
return os.path.join(base, 'files') | |
def GetDefaultKeyFilePath(): | |
"""In py2exe return absolute path of file in the base directory and in all | |
other distributions return relative path 'key.txt'""" | |
windows_base = FindPy2ExeBase() | |
if windows_base: | |
return os.path.join(windows_base, 'key.txt') | |
else: | |
return 'key.txt' | |
def main(RequestHandlerClass = ScheduleRequestHandler): | |
usage = \ | |
'''%prog [options] [<input GTFS.zip>] | |
Runs a webserver that lets you explore <input GTFS.zip> in your browser. | |
If <input GTFS.zip> is omitted the filename is read from the console. Dragging | |
a file into the console may enter the filename. | |
For more information see | |
http://code.google.com/p/googletransitdatafeed/wiki/ScheduleViewer | |
''' | |
parser = util.OptionParserLongError( | |
usage=usage, version='%prog '+transitfeed.__version__) | |
parser.add_option('--feed_filename', '--feed', dest='feed_filename', | |
help='file name of feed to load') | |
parser.add_option('--key', dest='key', | |
help='Google Maps API key or the name ' | |
'of a text file that contains an API key') | |
parser.add_option('--host', dest='host', help='Host name of Google Maps') | |
parser.add_option('--port', dest='port', type='int', | |
help='port on which to listen') | |
parser.add_option('--file_dir', dest='file_dir', | |
help='directory containing static files') | |
parser.add_option('-n', '--noprompt', action='store_false', | |
dest='manual_entry', | |
help='disable interactive prompts') | |
parser.set_defaults(port=8765, | |
host='maps.google.com', | |
file_dir=FindDefaultFileDir(), | |
manual_entry=True) | |
(options, args) = parser.parse_args() | |
if not os.path.isfile(os.path.join(options.file_dir, 'index.html')): | |
print "Can't find index.html with --file_dir=%s" % options.file_dir | |
exit(1) | |
if not options.feed_filename and len(args) == 1: | |
options.feed_filename = args[0] | |
if not options.feed_filename and options.manual_entry: | |
options.feed_filename = raw_input('Enter Feed Location: ').strip('"') | |
default_key_file = GetDefaultKeyFilePath() | |
if not options.key and os.path.isfile(default_key_file): | |
options.key = open(default_key_file).read().strip() | |
if options.key and os.path.isfile(options.key): | |
options.key = open(options.key).read().strip() | |
schedule = transitfeed.Schedule(problem_reporter=transitfeed.ProblemReporter()) | |
print 'Loading data from feed "%s"...' % options.feed_filename | |
print '(this may take a few minutes for larger cities)' | |
schedule.Load(options.feed_filename) | |
server = StoppableHTTPServer(server_address=('', options.port), | |
RequestHandlerClass=RequestHandlerClass) | |
server.key = options.key | |
server.schedule = schedule | |
server.file_dir = options.file_dir | |
server.host = options.host | |
server.feed_path = options.feed_filename | |
print ("To view, point your browser at http://localhost:%d/" % | |
(server.server_port)) | |
server.serve_forever() | |
if __name__ == '__main__': | |
main() | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
""" | |
This script can be used to create a source distribution, a binary distribution
or Windows executables. The output is placed in dist/
See | |
http://code.google.com/p/googletransitdatafeed/wiki/BuildingPythonWindowsExecutables | |
for help on creating Windows executables. | |
""" | |
from distutils.core import setup | |
import glob | |
import os.path | |
from transitfeed import __version__ as VERSION | |
try: | |
import py2exe | |
has_py2exe = True | |
except ImportError:
# Won't be able to generate win32 exe | |
has_py2exe = False | |
# py2exe doesn't automatically include pytz dependency because it is optional | |
options = {'py2exe': {'packages': ['pytz']}} | |
scripts_for_py2exe = ['feedvalidator.py', 'schedule_viewer.py', 'kmlparser.py', | |
'kmlwriter.py', 'merge.py', 'unusual_trip_filter.py'] | |
# On Nov 23, 2009 Tom Brown said: I'm not confident that we can include a | |
# working copy of this script in the py2exe distribution because it depends on | |
# ogr. I do want it included in the source tar.gz. | |
scripts_for_source_only = ['shape_importer.py'] | |
kwargs = {} | |
if has_py2exe: | |
kwargs['console'] = scripts_for_py2exe | |
# py2exe seems to ignore package_data and not add marey_graph. This makes it | |
# work. | |
kwargs['data_files'] = \ | |
[('schedule_viewer_files', | |
glob.glob(os.path.join('gtfsscheduleviewer', 'files', '*')))] | |
options['py2exe'] = {'dist_dir': 'transitfeed-windows-binary-%s' % VERSION} | |
setup( | |
version=VERSION, | |
name='transitfeed', | |
url='http://code.google.com/p/googletransitdatafeed/', | |
download_url='http://googletransitdatafeed.googlecode.com/' | |
'files/transitfeed-%s.tar.gz' % VERSION, | |
maintainer='Tom Brown', | |
maintainer_email='tom.brown.code@gmail.com', | |
description='Google Transit Feed Specification library and tools', | |
    long_description='This module provides a library for reading, writing and '
        'validating Google Transit Feed Specification files. It includes '
        'scripts that validate a feed, display it using the Google Maps API, '
        'and the start of a KML importer and exporter.',
platforms='OS Independent', | |
license='Apache License, Version 2.0', | |
packages=['gtfsscheduleviewer', 'transitfeed'], | |
# Also need to list package_data contents in MANIFEST.in for it to be | |
# included in sdist. See "[Distutils] package_data not used by sdist | |
# command" Feb 2, 2007 | |
package_data={'gtfsscheduleviewer': ['files/*']}, | |
scripts=scripts_for_py2exe + scripts_for_source_only, | |
zip_safe=False, | |
classifiers=[ | |
'Development Status :: 4 - Beta', | |
'Intended Audience :: Developers', | |
'Intended Audience :: Information Technology', | |
'Intended Audience :: Other Audience', | |
'License :: OSI Approved :: Apache Software License', | |
'Operating System :: OS Independent', | |
'Programming Language :: Python', | |
'Topic :: Scientific/Engineering :: GIS', | |
'Topic :: Software Development :: Libraries :: Python Modules' | |
], | |
options=options, | |
**kwargs | |
) | |
if has_py2exe: | |
# Sometime between pytz-2008a and pytz-2008i common_timezones started to | |
# include only names of zones with a corresponding data file in zoneinfo. | |
# pytz installs the zoneinfo directory tree in the same directory | |
# as the pytz/__init__.py file. These data files are loaded using | |
# pkg_resources.resource_stream. py2exe does not copy this to library.zip so | |
# resource_stream can't find the files and common_timezones is empty when | |
# read in the py2exe executable. | |
# This manually copies zoneinfo into the zip. See also | |
# http://code.google.com/p/googletransitdatafeed/issues/detail?id=121 | |
import pytz | |
import zipfile | |
# Make sure the layout of pytz hasn't changed | |
assert (pytz.__file__.endswith('__init__.pyc') or | |
pytz.__file__.endswith('__init__.py')), pytz.__file__ | |
zoneinfo_dir = os.path.join(os.path.dirname(pytz.__file__), 'zoneinfo') | |
# '..\\Lib\\pytz\\__init__.py' -> '..\\Lib' | |
disk_basedir = os.path.dirname(os.path.dirname(pytz.__file__)) | |
zipfile_path = os.path.join(options['py2exe']['dist_dir'], 'library.zip') | |
z = zipfile.ZipFile(zipfile_path, 'a') | |
for absdir, directories, filenames in os.walk(zoneinfo_dir): | |
assert absdir.startswith(disk_basedir), (absdir, disk_basedir) | |
zip_dir = absdir[len(disk_basedir):] | |
for f in filenames: | |
z.write(os.path.join(absdir, f), os.path.join(zip_dir, f)) | |
z.close() | |
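The walk above derives each archive name by slicing the site-packages base directory off the on-disk path, so zoneinfo files land at `pytz/zoneinfo/...` inside library.zip. A minimal sketch of that path arithmetic (the paths used are hypothetical):

```python
import os

def archive_name(disk_basedir, abs_path):
    # Mirror the slicing above: '<base>/pytz/zoneinfo/UTC' -> 'pytz/zoneinfo/UTC'.
    # The assert matches the one in the loop: every walked path must live
    # under the base directory or the archive name would be wrong.
    assert abs_path.startswith(disk_basedir), (abs_path, disk_basedir)
    return abs_path[len(disk_basedir):].lstrip(os.sep)
```

Note the sketch strips the leading separator for readability; the loop above leaves it in place, which `ZipFile.write` tolerates.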
#!/usr/bin/python2.4 | |
# | |
# Copyright 2007 Google Inc. All Rights Reserved. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""A utility program to help add shapes to an existing GTFS feed. | |
Requires the ogr python package. | |
""" | |
__author__ = 'chris.harrelson.code@gmail.com (Chris Harrelson)' | |
import csv | |
import glob | |
import ogr | |
import os | |
import shutil | |
import sys | |
import tempfile | |
import transitfeed | |
from transitfeed import shapelib | |
from transitfeed import util | |
import zipfile | |
class ShapeImporterError(Exception): | |
pass | |
def PrintColumns(shapefile): | |
""" | |
Print the columns of layer 0 of the shapefile to the screen. | |
""" | |
ds = ogr.Open(shapefile) | |
layer = ds.GetLayer(0) | |
if len(layer) == 0: | |
raise ShapeImporterError("Layer 0 has no elements!") | |
feature = layer.GetFeature(0) | |
print "%d features" % feature.GetFieldCount() | |
for j in range(0, feature.GetFieldCount()): | |
print '--' + feature.GetFieldDefnRef(j).GetName() + \ | |
': ' + feature.GetFieldAsString(j) | |
def AddShapefile(shapefile, graph, key_cols): | |
""" | |
Adds shapes found in the given shape filename to the given polyline | |
graph object. | |
""" | |
ds = ogr.Open(shapefile) | |
layer = ds.GetLayer(0) | |
for i in range(0, len(layer)): | |
feature = layer.GetFeature(i) | |
geometry = feature.GetGeometryRef() | |
if key_cols: | |
key_list = [] | |
for col in key_cols: | |
key_list.append(str(feature.GetField(col))) | |
shape_id = '-'.join(key_list) | |
else: | |
shape_id = '%s-%d' % (shapefile, i) | |
poly = shapelib.Poly(name=shape_id) | |
for j in range(0, geometry.GetPointCount()): | |
(lat, lng) = (round(geometry.GetY(j), 15), round(geometry.GetX(j), 15)) | |
poly.AddPoint(shapelib.Point.FromLatLng(lat, lng)) | |
graph.AddPoly(poly) | |
return graph | |
def GetMatchingShape(pattern_poly, trip, matches, max_distance, verbosity=0): | |
""" | |
Tries to find a matching shape for the given pattern Poly object, | |
trip, and set of possibly matching Polys from which to choose a match. | |
""" | |
if len(matches) == 0: | |
print ('No matching shape found within max-distance %d for trip %s ' | |
% (max_distance, trip.trip_id)) | |
return None | |
if verbosity >= 1: | |
for match in matches: | |
print "match: size %d" % match.GetNumPoints() | |
scores = [(pattern_poly.GreedyPolyMatchDist(match), match) | |
for match in matches] | |
scores.sort() | |
if scores[0][0] > max_distance: | |
print ('No matching shape found within max-distance %d for trip %s ' | |
'(min score was %f)' | |
% (max_distance, trip.trip_id, scores[0][0])) | |
return None | |
return scores[0][1] | |
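The selection logic above reduces to: score each candidate, take the minimum, and reject it if it exceeds max_distance. A standalone sketch of that core, using plain (score, match) pairs in place of shapelib Polys (an assumption made for illustration):

```python
def best_match(scored_matches, max_distance):
    # scored_matches: list of (score, match) pairs, lower score = closer,
    # mirroring the (GreedyPolyMatchDist(match), match) tuples built above.
    if not scored_matches:
        return None
    scored_matches.sort()
    best_score, best = scored_matches[0]
    if best_score > max_distance:
        return None
    return best
```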
def AddExtraShapes(extra_shapes_txt, graph): | |
""" | |
Add extra shapes into our input set by parsing them out of a GTFS-formatted | |
shapes.txt file. Useful for manually adding lines to a shape file, since it's | |
a pain to edit .shp files. | |
""" | |
print "Adding extra shapes from %s" % extra_shapes_txt | |
  tmpdir = None
  try:
tmpdir = tempfile.mkdtemp() | |
shutil.copy(extra_shapes_txt, os.path.join(tmpdir, 'shapes.txt')) | |
loader = transitfeed.ShapeLoader(tmpdir) | |
schedule = loader.Load() | |
for shape in schedule.GetShapeList(): | |
print "Adding extra shape: %s" % shape.shape_id | |
graph.AddPoly(ShapeToPoly(shape)) | |
finally: | |
if tmpdir: | |
shutil.rmtree(tmpdir) | |
# Note: this method lives here to avoid cross-dependencies between | |
# shapelib and transitfeed. | |
def ShapeToPoly(shape): | |
poly = shapelib.Poly(name=shape.shape_id) | |
for lat, lng, distance in shape.points: | |
point = shapelib.Point.FromLatLng(round(lat, 15), round(lng, 15)) | |
poly.AddPoint(point) | |
return poly | |
def ValidateArgs(options_parser, options, args): | |
if not (args and options.source_gtfs and options.dest_gtfs): | |
options_parser.error("You must specify a source and dest GTFS file, " | |
"and at least one source shapefile") | |
def DefineOptions(): | |
usage = \ | |
"""%prog [options] --source_gtfs=<input GTFS.zip> --dest_gtfs=<output GTFS.zip>\ | |
<input.shp> [<input.shp>...] | |
Try to match shapes in one or more SHP files to trips in a GTFS file.""" | |
options_parser = util.OptionParserLongError( | |
usage=usage, version='%prog '+transitfeed.__version__) | |
options_parser.add_option("--print_columns", | |
action="store_true", | |
default=False, | |
dest="print_columns", | |
help="Print column names in shapefile DBF and exit") | |
options_parser.add_option("--keycols", | |
default="", | |
dest="keycols", | |
help="Comma-separated list of the column names used" | |
"to index shape ids") | |
options_parser.add_option("--max_distance", | |
type="int", | |
default=150, | |
dest="max_distance", | |
help="Max distance from a shape to which to match") | |
options_parser.add_option("--source_gtfs", | |
default="", | |
dest="source_gtfs", | |
metavar="FILE", | |
help="Read input GTFS from FILE") | |
options_parser.add_option("--dest_gtfs", | |
default="", | |
dest="dest_gtfs", | |
metavar="FILE", | |
help="Write output GTFS with shapes to FILE") | |
options_parser.add_option("--extra_shapes", | |
default="", | |
dest="extra_shapes", | |
metavar="FILE", | |
help="Extra shapes.txt (CSV) formatted file") | |
options_parser.add_option("--verbosity", | |
type="int", | |
default=0, | |
dest="verbosity", | |
help="Verbosity level. Higher is more verbose") | |
return options_parser | |
def main(key_cols): | |
print 'Parsing shapefile(s)...' | |
graph = shapelib.PolyGraph() | |
for arg in args: | |
print ' ' + arg | |
AddShapefile(arg, graph, key_cols) | |
if options.extra_shapes: | |
AddExtraShapes(options.extra_shapes, graph) | |
print 'Loading GTFS from %s...' % options.source_gtfs | |
schedule = transitfeed.Loader(options.source_gtfs).Load() | |
shape_count = 0 | |
pattern_count = 0 | |
verbosity = options.verbosity | |
print 'Matching shapes to trips...' | |
for route in schedule.GetRouteList(): | |
print 'Processing route', route.route_short_name | |
patterns = route.GetPatternIdTripDict() | |
for pattern_id, trips in patterns.iteritems(): | |
pattern_count += 1 | |
pattern = trips[0].GetPattern() | |
poly_points = [shapelib.Point.FromLatLng(p.stop_lat, p.stop_lon) | |
for p in pattern] | |
if verbosity >= 2: | |
print "\npattern %d, %d points:" % (pattern_id, len(poly_points)) | |
for i, (stop, point) in enumerate(zip(pattern, poly_points)): | |
print "Stop %d '%s': %s" % (i + 1, stop.stop_name, point.ToLatLng()) | |
# First, try to find polys that run all the way from | |
# the start of the trip to the end. | |
matches = graph.FindMatchingPolys(poly_points[0], poly_points[-1], | |
options.max_distance) | |
if not matches: | |
# Try to find a path through the graph, joining | |
# multiple edges to find a path that covers all the | |
# points in the trip. Some shape files are structured | |
# this way, with a polyline for each segment between | |
# stations instead of a polyline covering an entire line. | |
shortest_path = graph.FindShortestMultiPointPath(poly_points, | |
options.max_distance, | |
verbosity=verbosity) | |
if shortest_path: | |
matches = [shortest_path] | |
else: | |
matches = [] | |
pattern_poly = shapelib.Poly(poly_points) | |
shape_match = GetMatchingShape(pattern_poly, trips[0], | |
matches, options.max_distance, | |
verbosity=verbosity) | |
if shape_match: | |
shape_count += 1 | |
# Rename shape for readability. | |
shape_match = shapelib.Poly(points=shape_match.GetPoints(), | |
name="shape_%d" % shape_count) | |
for trip in trips: | |
try: | |
shape = schedule.GetShape(shape_match.GetName()) | |
except KeyError: | |
shape = transitfeed.Shape(shape_match.GetName()) | |
for point in shape_match.GetPoints(): | |
(lat, lng) = point.ToLatLng() | |
shape.AddPoint(lat, lng) | |
schedule.AddShapeObject(shape) | |
trip.shape_id = shape.shape_id | |
print "Matched %d shapes out of %d patterns" % (shape_count, pattern_count) | |
schedule.WriteGoogleTransitFeed(options.dest_gtfs) | |
if __name__ == '__main__': | |
# Import psyco if available for better performance. | |
try: | |
import psyco | |
psyco.full() | |
except ImportError: | |
pass | |
options_parser = DefineOptions() | |
(options, args) = options_parser.parse_args() | |
ValidateArgs(options_parser, options, args) | |
if options.print_columns: | |
for arg in args: | |
PrintColumns(arg) | |
sys.exit(0) | |
  if options.keycols:
    key_cols = options.keycols.split(',')
  else:
    key_cols = []
main(key_cols) | |
agency_id,agency_name,agency_url,agency_timezone,agency_phone | |
DTA,Autorité de passage de démonstration,http://google.com,America/Los_Angeles,123 12314 | |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,2007.01.01,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 | |
service_id,date,exception_type | |
FULLW,2007-06-04,2 | |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport ⇒ Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog ⇒ Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach ⇒ Airport Shuttle,,3,,, | |
CITY,DTA,Ō,Bar Circle,Route with ĸool unicode shortname,3,,, | |
AAMV,DTA,,Airport ⇒ Amargosa Valley,,3,,, | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,0,to airport,1,0,0.212 | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,0,0,1.043 | |
CITY1,6:00:00,6:00:00,STAGECOACH,0,,,, | |
CITY1,6:05:00,6:07:00,NANAA,5,going to nadav,2,3, | |
CITY1,6:12:00,6:14:00,NADAV,10,,,, | |
CITY1,6:19:00,6:21:00,DADAN,15,,,, | |
CITY1,6:26:00,6:28:00,EMSI,20,,,, | |
CITY2,6:28:00,6:30:00,EMSI,100,,,, | |
CITY2,6:35:00,6:37:00,DADAN,200,,,, | |
CITY2,6:42:00,6:44:00,NADAV,300,,,, | |
CITY2,6:49:00,6:51:00,NANAA,400,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,500,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url,stop_code,location_type,parent_station | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,,,1234,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,,,1235,0,BEATTY_AIRPORT_STATION | |
BEATTY_AIRPORT_STATION,Nye County Airport (Demo),,36.868446,-116.784582,,,1235,1, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,,,,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,,,1236,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,,,1237,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,,,1238,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,,,,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,,,,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,,,,, | |
from_stop_id,to_stop_id,transfer_type,min_transfer_time | |
NADAV,NANAA,3, | |
EMSI,NANAA,2,1200 | |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
Binary files /dev/null and b/origin-src/transitfeed-1.2.6/test/data/bad_eol.zip differ
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3,,, | |
AAMV,DTA,,Airport - Amargosa Valley,,3,,, | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,0,0,1.043 | |
CITY1,6:00:00,6:00:00,STAGECOACH,0,,,, | |
CITY1,6:05:00,6:07:00,NANAA,5,going to nadav,2,3, | |
CITY1,6:12:00,6:14:00,NADAV,10,,,, | |
CITY1,6:19:00,6:21:00,DADAN,15,,,, | |
CITY1,6:26:00,6:28:00,EMSI,20,,,, | |
CITY2,6:28:00,6:30:00,EMSI,100,,,, | |
CITY2,6:35:00,6:37:00,DADAN,200,,,, | |
CITY2,6:42:00,6:44:00,NADAV,300,,,, | |
CITY2,6:49:00,6:51:00,NANAA,400,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,500,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, | |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, | |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Autorité de passage de démonstration,http://google.com,America/Los_Angeles | |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport ⇒ Bullfrog,,3 | |
BFC,DTA,,Bullfrog ⇒ Furnace Creek Resort,,3 | |
STBA,DTA,,Stagecoach ⇒ Airport Shuttle,,3 | |
CITY,DTA,Ō,Bar Circle,Route with ĸool unicode short name,3 | |
AAMV,DTA,,Airport ⇒ Amargosa Valley,,3 | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,to airport,1,0,0.212 | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,0,0,1.043 | |
CITY1,6:00:00,6:00:00,STAGECOACH,1 | |
CITY1,6:05:00,6:07:00,NANAA,2,going to nadav,2,3, | |
CITY1,6:12:00,6:14:00,NADAV,3 | |
CITY1,6:19:00,6:21:00,DADAN,4 | |
CITY1,6:26:00,6:28:00,EMSI,5 | |
CITY2,6:28:00,6:30:00,EMSI,1 | |
CITY2,6:35:00,6:37:00,DADAN,2 | |
CITY2,6:42:00,6:44:00,NADAV,3 | |
CITY2,6:49:00,6:51:00,NANAA,4 | |
CITY2,6:56:00,6:58:00,STAGECOACH,5 | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1 | |
AB1,8:10:00,8:15:00,BULLFROG,2 | |
AB2,12:05:00,12:05:00,BULLFROG,1 | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2 | |
BFC1,8:20:00,8:20:00,BULLFROG,1 | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2 | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1 | |
BFC2,12:00:00,12:00:00,BULLFROG,2 | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1 | |
AAMV1,9:00:00,9:00:00,AMV,2 | |
AAMV2,10:00:00,10:00:00,AMV,1 | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2 | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1 | |
AAMV3,14:00:00,14:00:00,AMV,2 | |
AAMV4,15:00:00,15:00:00,AMV,1 | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2 | |
Binary files /dev/null and b/origin-src/transitfeed-1.2.6/test/data/contains_null/stops.txt differ
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1 | |
AB,FULLW,AB2,to Airport,1,2 | |
STBA,FULLW,STBA,Shuttle | |
CITY,FULLW,CITY1,Ō,0 | |
CITY,FULLW,CITY2,Ō,1 | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1 | |
BFC,FULLW,BFC2,to Bullfrog,1,2 | |
AAMV,WE,AAMV1,to Amargosa Valley,0 | |
AAMV,WE,AAMV2,to Airport,1 | |
AAMV,WE,AAMV3,to Amargosa Valley,0 | |
AAMV,WE,AAMV4,to Airport,1 | |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Autorité de passage de démonstration,http://google.com,America/Los_Angeles | |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 | |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport ⇒ Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog ⇒ Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach ⇒ Airport Shuttle,,3,,, | |
CITY,DTA,Ō,Bar Circle,Route with ĸool unicode short name,3,,, | |
AAMV,DTA,,Airport ⇒ Amargosa Valley,,3,,, | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,to airport,1,0,0.212 | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,0,0,1.043 | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,going to nadav,2,3, | |
CITY1,6:12:00,6:14:00,NADAV,3,,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,,, | |
CITY2,6:42:00,6:44:00,NADAV,3,,,, | |
CITY2,6:49:00,6:51:00,NANAA,4,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Démonstration),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, | |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,Ō,0,, | |
CITY,FULLW,CITY2,Ō,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, | |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3 | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3 | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3 | |
CITY,DTA,,City,,3 | |
AAMV,DTA,,Airport - Amargosa Valley,,3 |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1 | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2 | |
CITY1,6:00:00,6:00:00,STAGECOACH,1 | |
CITY1,6:05:00,6:07:00,NANAA,2 | |
CITY1,6:12:00,6:14:00,NADAV,3 | |
CITY1,6:19:00,6:21:00,DADAN,4 | |
CITY1,6:26:00,6:28:00,EMSI,5 | |
CITY2,6:28:00,6:30:00,EMSI,1 | |
CITY2,6:35:00,6:37:00,DADAN,2 | |
CITY2,6:42:00,6:44:00,NADAV,3 | |
CITY2,6:49:00,6:51:00,NANAA,4 | |
CITY2,6:56:00,6:58:00,STAGECOACH,5 | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1 | |
AB1,8:10:00,8:15:00,BULLFROG,2 | |
AB2,12:05:00,12:05:00,BULLFROG,1 | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2 | |
BFC1,8:20:00,8:20:00,FROG,1 | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2 | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1 | |
BFC2,12:00:00,12:00:00,BULLFROG,2 | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1 | |
AAMV1,9:00:00,9:00:00,AMV,2 | |
AAMV2,10:00:00,10:00:00,AMV,1 | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2 | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1 | |
AAMV3,14:00:00,14:00:00,AMV,2 | |
AAMV4,15:00:00,15:00:00,AMV,1 | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2 |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162 | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582 | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797 | |
FROG,Bull Frog,,36.881083,-116.817968 | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677 | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821 | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472 | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242 | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218 | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094 |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1 | |
AB,FULLW,AB2,to Airport,1,2 | |
STBA,FULLW,STBA,Shuttle | |
CITY,FULLW,CITY1,,0 | |
CITY,FULLW,CITY2,,1 | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1 | |
BFC,FULLW,BFC2,to Bullfrog,1,2 | |
AAMV,WE,AAMV1,to Amargosa Valley,0 | |
AAMV,WE,AAMV2,to Airport,1 | |
AAMV,WE,AAMV3,to Amargosa Valley,0 | |
AAMV,WE,AAMV4,to Airport,1 |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Autorité de passage de démonstration,http://google.com,America/Los_Angeles | |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
CITY,DTA,Ō,Bar Circle,Route with ĸool unicode shortname,3,,, | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
CITY1,6:00:00,6:00:00,STAGECOACH,0,,,, | |
CITY1,6:05:00,6:07:00,NANAA,10,going to nadav,2,3, | |
CITY1,6:12:00,6:14:00,NADAV,10,,,, | |
CITY1,6:19:00,6:21:00,DADAN,15,,,, | |
CITY1,6:26:00,6:28:00,EMSI,20,,,, | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url,stop_code | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,,,1236 | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,,,1237 | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,,,1238 | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,,, | |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
CITY,FULLW,CITY1,,0,, | |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3 | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3 | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3 | |
CITY,DTA,,City,,3 | |
AAMV,DTA,,Airport - Amargosa Valley,,3 |
trip_id,arrival_time,departure_time,stop_id,stop_sequence | |
STBA,6:00:00,6:00:00,STAGECOACH,1 | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2 | |
CITY1,6:00:00,6:00:00,STAGECOACH,1 | |
CITY1,6:05:00,6:07:00,NANAA,2 | |
CITY1,6:12:00,6:14:00,NADAV,3 | |
CITY1,6:19:00,6:21:00,DADAN,4 | |
CITY1,6:26:00,6:28:00,EMSI,5 | |
CITY2,6:28:00,6:30:00,EMSI,1 | |
CITY2,6:35:00,6:37:00,DADAN,2 | |
CITY2,6:42:00,6:44:00,NADAV,3 | |
CITY2,6:49:00,6:51:00,NANAA,4 | |
CITY2,6:56:00,6:58:00,STAGECOACH,5 | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1 | |
AB1,8:10:00,8:15:00,BULLFROG,2 | |
AB2,12:05:00,12:05:00,BULLFROG,1 | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2 | |
BFC1,8:20:00,8:20:00,BULLFROG,1 | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2 | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1 | |
BFC2,12:00:00,12:00:00,BULLFROG,2 | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1 | |
AAMV1,9:00:00,9:00:00,AMV,2 | |
AAMV2,10:00:00,10:00:00,AMV,1 | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2 | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1 | |
AAMV3,14:00:00,14:00:00,AMV,2 | |
AAMV4,15:00:00,15:00:00,AMV,1 | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2 | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162 | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582 | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797 | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677 | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821 | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472 | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242 | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218 | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094 |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1 | |
AB,FULLW,AB2,to Airport,1,2 | |
STBA,FULLW,STBA,Shuttle | |
CITY,FULLW,CITY1,,0 | |
CITY,FULLW,CITY2,,1 | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1 | |
BFC,FULLW,BFC2,to Bullfrog,1,2 | |
AAMV,WE,AAMV1,to Amargosa Valley,0 | |
AAMV,WE,AAMV2,to Airport,1 | |
AAMV,WE,AAMV3,to Amargosa Valley,0 | |
AAMV,WE,AAMV4,to Airport,1 |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type | |
AB,DTA,,Airport - Bullfrog,,3 | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3 | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3, | |
CITY,DTA,,City,,3 | |
AAMV,DTA,,Airport - Amargosa Valley,,3 | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence | |
STBA,6:00:00,6:00:00,STAGECOACH,1 | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2 | |
CITY1,6:00:00,6:00:00,STAGECOACH,1 | |
CITY1,6:05:00,6:07:00,NANAA,2 | |
CITY1,6:12:00,6:14:00,NADAV,3 | |
CITY1,6:19:00,6:21:00,DADAN,4 | |
CITY1,6:26:00,6:28:00,EMSI,5 | |
CITY2,6:28:00,6:30:00,EMSI,1 | |
CITY2,6:35:00,6:37:00,DADAN,2 | |
CITY2,6:42:00,6:44:00,NADAV,3 | |
CITY2,6:49:00,6:51:00,NANAA,4 | |
CITY2,6:56:00,6:58:00,STAGECOACH,5 | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1 | |
AB1,8:10:00,8:15:00,BULLFROG,2 | |
AB2,12:05:00,12:05:00,BULLFROG,1 | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2 | |
BFC1,8:20:00,8:20:00,BULLFROG,1 | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2 | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1 | |
BFC2,12:00:00,12:00:00,BULLFROG,2 | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1 | |
AAMV1,9:00:00,9:00:00,AMV,2 | |
AAMV2,10:00:00,10:00:00,AMV,1 | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2 | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1 | |
AAMV3,14:00:00,14:00:00,AMV,2 | |
AAMV4,15:00:00,15:00:00,AMV,1 | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2 | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162 | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582 | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797 | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677 | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821 | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472 | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242 | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218 | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094 | |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id | |
AB,FULLW,AB1,to Bullfrog,0,1 | |
AB,FULLW,AB2,to Airport,1,2 | |
STBA,FULLW,STBA,Shuttle,1, | |
CITY,FULLW,CITY1,,0, | |
CITY,FULLW,CITY2,,1, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1 | |
BFC,FULLW,BFC2,to Bullfrog,1,2 | |
AAMV,WE,AAMV1,to Amargosa Valley,0, | |
AAMV,WE,AAMV2,to Airport,1, | |
AAMV,WE,AAMV3,to Amargosa Valley,0, | |
AAMV,WE,AAMV4,to Airport,1, | |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,10,Airport - Bullfrog,,3,,, | |
BFC,DTA,20,Bullfrog - Furnace Creek Resort,,3,,, | |
STBA,DTA,30,Stagecoach - Airport Shuttle,,3,,, | |
CITY,DTA,40,City,,3,,, | |
AAMV,DTA,50,Airport - Amargosa Valley,,3,,, |
shape_id,shape_pt_lat,shape_pt_lon,shape_pt_sequence,shape_dist_traveled |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,,, | |
CITY1,6:12:00,6:14:00,NADAV,3,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,, | |
CITY2,6:42:00,6:44:00,NADAV,3,,, | |
CITY2,6:49:00,6:51:00,NANAA,4,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,5,,, | |
CITY3,6:00:00,6:00:00,STAGECOACH,1,,, | |
CITY3,6:05:00,6:07:00,NANAA,2,,, | |
CITY3,6:12:00,6:14:00,NADAV,3,,, | |
CITY3,6:19:00,6:21:00,DADAN,4,,, | |
CITY3,6:26:00,6:28:00,EMSI,5,,, | |
CITY4,6:28:00,6:30:00,EMSI,1,,, | |
CITY4,6:35:00,6:37:00,DADAN,2,,, | |
CITY4,6:42:00,6:44:00,NADAV,3,,, | |
CITY4,6:49:00,6:51:00,NANAA,4,,, | |
CITY4,6:56:00,6:58:00,STAGECOACH,5,,, | |
CITY5,6:00:00,6:00:00,STAGECOACH,1,,, | |
CITY5,6:05:00,6:07:00,NANAA,2,,, | |
CITY5,6:12:00,6:14:00,NADAV,3,,, | |
CITY5,6:19:00,6:21:00,DADAN,4,,, | |
CITY5,6:26:00,6:28:00,EMSI,5,,, | |
CITY6,6:28:00,6:30:00,EMSI,1,,, | |
CITY6,6:35:00,6:37:00,DADAN,2,,, | |
CITY6,6:42:00,6:44:00,NADAV,3,,, | |
CITY6,6:49:00,6:51:00,NANAA,4,,, | |
CITY6,6:56:00,6:58:00,STAGECOACH,5,,, | |
CITY7,6:00:00,6:00:00,STAGECOACH,1,,, | |
CITY7,6:05:00,6:07:00,NANAA,2,,, | |
CITY7,6:12:00,6:14:00,NADAV,3,,, | |
CITY7,6:19:00,6:21:00,DADAN,4,,, | |
CITY7,6:26:00,6:28:00,EMSI,5,,, | |
CITY8,6:28:00,6:30:00,EMSI,1,,, | |
CITY8,6:35:00,6:37:00,DADAN,2,,, | |
CITY8,6:42:00,6:44:00,NADAV,3,,, | |
CITY8,6:49:00,6:51:00,NANAA,4,,, | |
CITY8,6:56:00,6:58:00,STAGECOACH,5,,, | |
CITY9,6:00:00,6:00:00,STAGECOACH,1,,, | |
CITY9,6:05:00,6:07:00,NANAA,2,,, | |
CITY9,6:12:00,6:14:00,NADAV,3,,, | |
CITY9,6:19:00,6:21:00,DADAN,4,,, | |
CITY9,6:26:00,6:28:00,EMSI,5,,, | |
CITY10,6:28:00,6:30:00,EMSI,1,,, | |
CITY10,6:35:00,6:37:00,DADAN,2,,, | |
CITY10,6:42:00,6:44:00,NADAV,3,,, | |
CITY10,6:49:00,6:51:00,NANAA,4,,, | |
CITY10,6:56:00,6:58:00,STAGECOACH,5,,, | |
CITY11,6:00:00,6:00:00,NANAA,1,,, | |
CITY11,6:05:00,6:07:00,BEATTY_AIRPORT,2,,, | |
CITY11,6:12:00,6:14:00,BULLFROG,3,,, | |
CITY11,6:19:00,6:21:00,DADAN,4,,, | |
CITY11,6:26:00,6:28:00,EMSI,5,,, | |
CITY12,6:28:00,6:30:00,EMSI,1,,, | |
CITY12,6:35:00,6:37:00,DADAN,2,,, | |
CITY12,7:07:00,7:09:00,AMV,3,,, | |
CITY12,7:39:00,7:41:00,BEATTY_AIRPORT,4,,, | |
CITY12,7:46:00,7:48:00,STAGECOACH,5,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,, | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
CITY,FULLW,CITY3,,0,, | |
CITY,FULLW,CITY4,,1,, | |
CITY,FULLW,CITY5,,0,, | |
CITY,FULLW,CITY6,,1,, | |
CITY,FULLW,CITY7,,0,, | |
CITY,FULLW,CITY8,,1,, | |
CITY,FULLW,CITY9,,0,, | |
CITY,FULLW,CITY10,,1,, | |
CITY,FULLW,CITY11,,0,, | |
CITY,FULLW,CITY12,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 | |
WE,20070604,1 | |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
route_1,DTA,1,route with a single trip,,0,http://routes.com/route_1,FF0000, | |
route_2,DTA,2,route with two trips and one component,test route desc 2,1,,00FF00, | |
route_3,DTA,3,route with two trips and two components,test route desc 3,2,http://routes.com/route_3,, | |
route_4,DTA,4,route with two equal trips,test route desc 4,3,http://routes.com/route_4,FFFF00, | |
route_5,DTA,5,route with two trip but no graph,test route desc 5,4,http://routes.com/route_5,FF00FF, | |
route_6,DTA,6,route with one trip and no stops,test route desc 6,5,http://routes.com/route_6,00FFFF, | |
route_7,DTA,7,route with no trips,test route desc 7,6,http://routes.com/route_7,, | |
route_8,DTA,8,route with a cyclic pattern,test route desc 8,7,http://routes.com/route_8,, | |
shape_id,shape_pt_sequence,shape_pt_lat,shape_pt_lon | |
shape_1,1,1,1 | |
shape_1,2,2,4 | |
shape_1,3,3,9 | |
shape_1,4,4,16 | |
shape_2,1,11,11 | |
shape_2,2,12,14 | |
shape_2,3,13,19 | |
shape_2,4,14,26 | |
shape_3,1,21,21 | |
shape_3,2,22,24 | |
shape_3,3,23,29 | |
shape_3,4,24,36 | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence | |
route_1_1,6:00:00,6:00:00,stop1,1 | |
route_1_1,7:00:00,7:00:00,stop2,2 | |
route_1_1,8:00:00,8:00:00,stop3,3 | |
route_2_1,6:00:00,6:00:00,stop1,1 | |
route_2_1,7:00:00,7:00:00,stop2,2 | |
route_2_1,8:00:00,8:00:00,stop3,3 | |
route_2_2,6:00:00,6:00:00,stop2,1 | |
route_2_2,7:00:00,7:00:00,stop4,2 | |
route_2_2,8:00:00,8:00:00,stop5,3 | |
route_3_1,6:00:00,6:00:00,stop1,1 | |
route_3_1,7:00:00,7:00:00,stop2,2 | |
route_3_1,8:00:00,8:00:00,stop3,3 | |
route_3_2,6:00:00,6:00:00,stop4,1 | |
route_3_2,7:00:00,7:00:00,stop5,2 | |
route_3_2,8:00:00,8:00:00,stop6,3 | |
route_4_1,6:00:00,6:00:00,stop1,1 | |
route_4_1,7:00:00,7:00:00,stop2,2 | |
route_4_1,8:00:00,8:00:00,stop3,3 | |
route_4_2,6:00:00,6:00:00,stop1,1 | |
route_4_2,7:00:00,7:00:00,stop2,2 | |
route_4_2,8:00:00,8:00:00,stop3,3 | |
route_5_1,6:00:00,6:00:00,stop1,1 | |
route_5_2,6:00:00,6:00:00,stop2,1 | |
route_8_1,6:00:00,6:00:00,stop1,1 | |
route_8_1,7:00:00,7:00:00,stop2,2 | |
route_8_1,8:00:00,8:00:00,stop3,3 | |
route_8_1,9:00:00,9:00:00,stop1,4 | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
stop1,Furnace Creek Resort (Demo),,36.425288,-117.133162,,http://stops.com/stop1 | |
stop2,Nye County Airport (Demo),the stop at Nye County Airport,36.868446,-116.784582,, | |
stop3,Bullfrog (Demo),the stop at Bullfrog,36.88108,-116.81797,,http://stops.com/stop3 | |
stop4,Stagecoach Hotel & Casino (Demo),the stop at Stagecoach Hotel & Casino,36.915682,-116.751677,,http://stops.com/stop4 | |
stop5,North Ave / D Ave N (Demo),the stop at North Ave / D Ave N,36.914893,-116.76821,,http://stops.com/stop5 | |
stop6,North Ave / N A Ave (Demo),the stop at North Ave / N A Ave,36.914944,-116.761472,,http://stops.com/stop6 | |
stop7,Doing Ave / D Ave N (Demo),the stop at Doing Ave / D Ave N,36.909489,-116.768242,,http://stops.com/stop7 | |
stop8,E Main St / S Irving St (Demo),the stop at E Main St / S Irving St,36.905697,-116.76218,,http://stops.com/stop8 | |
stop9,Amargosa Valley (Demo),the stop at Amargosa Valley,36.641496,-116.40094,,http://stops.com/stop9 | |
route_id,service_id,trip_id,shape_id | |
route_1,FULLW,route_1_1,shape_1 | |
route_2,FULLW,route_2_1,shape_2 | |
route_2,FULLW,route_2_2,shape_3 | |
route_3,FULLW,route_3_1,shape_1 | |
route_3,FULLW,route_3_2,shape_1 | |
route_4,FULLW,route_4_1, | |
route_4,FULLW,route_4_2, | |
route_5,FULLW,route_5_1, | |
route_5,FULLW,route_5_2, | |
route_8,FULLW,route_8_1, | |
route_8,WE,route_8_2, | |
Binary files /dev/null and b/origin-src/transitfeed-1.2.6/test/data/good_feed.zip differ
agency_id,agency_name,agency_url,agency_timezone,agency_phone | |
DTA,Autorité de passage de démonstration,http://google.com,America/Los_Angeles,123 12314 | |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20111231 | |
WE,0,0,0,0,0,1,1,20070101,20111231 | |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport ⇒ Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog ⇒ Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach ⇒ Airport Shuttle,,3,,, | |
CITY,DTA,Ō,Bar Circle,Route with ĸool unicode shortname,3,,, | |
AAMV,DTA,,Airport ⇒ Amargosa Valley,,3,,, | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,0,to airport,1,0,0.212 | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,0,0,1.043 | |
CITY1,6:00:00,6:00:00,STAGECOACH,0,,,, | |
CITY1,6:05:00,6:07:00,NANAA,5,going to nadav,2,3, | |
CITY1,6:12:00,6:14:00,NADAV,10,,,, | |
CITY1,6:19:00,6:21:00,DADAN,15,,,, | |
CITY1,6:26:00,6:28:00,EMSI,20,,,, | |
CITY2,6:28:00,6:30:00,EMSI,100,,,, | |
CITY2,6:35:00,6:37:00,DADAN,200,,,, | |
CITY2,6:42:00,6:44:00,NADAV,300,,,, | |
CITY2,6:49:00,6:51:00,NANAA,400,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,500,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url,stop_code,location_type,parent_station | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,,,1234,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,,,1235,0,BEATTY_AIRPORT_STATION | |
BEATTY_AIRPORT_STATION,Nye County Airport (Demo),,36.868446,-116.784582,,,1235,1, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,,,,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,,,1236,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,,,1237,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,,,1238,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,,,,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,,,,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,,,,, | |
from_stop_id,to_stop_id,transfer_type,min_transfer_time | |
NADAV,NANAA,3, | |
EMSI,NANAA,2,1200 | |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3,,, | |
STBA,DVT,,Stagecoach - Airport Shuttle,,3,,, | |
CITY,DTA,,City,,3,,, | |
AAMV,DTA,,Airport - Amargosa Valley,,3,,, | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,,,, | |
CITY1,6:12:00,6:14:00,NADAV,3,,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,,, | |
CITY2,6:42:00,6:44:00,NADAV,3,,,, | |
CITY2,6:49:00,6:51:00,NANAA,4,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type | |
AB,DTA,,Airport - Bullfrog,,3 | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3 | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3 | |
CITY,DTA,,City,,3 | |
AAMV,DTA,,Airport - Amargosa Valley,,3 | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence | |
STBA,6:00:00,6:00:00,STAGECOACH,1 | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2 | |
CITY1,6:00:00,6:00:00,STAGECOACH,1 | |
CITY1,6:05:00,6:07:00,NANAA,2 | |
CITY1,6:12:00,6:14:00,NADAV,3 | |
CITY1,6:19:00,6:21:00,DADAN,4 | |
CITY1,6:26:00,6:28:00,EMSI,5 | |
CITY2,6:28:00,6:30:00,EMSI,1 | |
CITY2,6:35:00,6:37:00,DADAN,2 | |
CITY2,6:42:00,6:44:00,NADAV,3 | |
CITY2,6:49:00,6:51:00,NANAA,4 | |
CITY2,6:56:00,6:58:00,STAGECOACH,5 | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1 | |
AB1,8:10:00,8:15:00,BULLFROG,2 | |
AB2,12:05:00,12:05:00,BULLFROG,1 | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2 | |
BFC1,8:20:00,8:20:00,BULLFROG,1 | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2 | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1 | |
BFC2,12:00:00,12:00:00,BULLFROG,2 | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1 | |
AAMV1,9:00:00,9:00:00,AMV,2 | |
AAMV2,10:00:00,10:00:00,AMV,1 | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2 | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1 | |
AAMV3,14:00:00,14:00:00,AMV,2 | |
AAMV4,15:00:00,15:00:00,AMV,1 | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2 | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162 | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582 | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797 | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677 | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821 | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472 | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242 | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218 | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094 | |
route_id,service_id,trip_id,trip_headsign,direction_id | |
AB,FULLW,AB1,to Bullfrog,0 | |
AB,FULLW,AB2,to Airport,1 | |
STBA,FULLW,STBA,Shuttle | |
CITY,FULLW,CITY1,,0 | |
CITY,FULLW,CITY2,,1 | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0 | |
BFC,FULLW,BFC2,to Bullfrog,1 | |
AAMV,WE,AAMV1,to Amargosa Valley,0 | |
AAMV,WE,AAMV2,to Airport,1 | |
AAMV,WE,AAMV3,to Amargosa Valley,0 | |
AAMV,WE,AAMV4,to Airport,1 | |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3,,, | |
CITY,DTA,,City,,3,,, | |
AAMV,DTA,,Airport - Amargosa Valley,,3,,, |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,,,, | |
CITY1,6:12:00,6:14:00,NADAV,3,,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,,, | |
CITY2,6:42:00,6:44:00,NADAV,3,,,, | |
CITY2,6:49:00,6:51:00,NANAA,4,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
agency_id,agency_url,agency_timezone | |
DTA,http://google.com,America/Los_Angeles | |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3 | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3 | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3 | |
CITY,DTA,,City,,3 | |
AAMV,DTA,,Airport - Amargosa Valley,,3 |
trip_id,arrival_time,departure_time,stop_id,stop_sequence | |
STBA,6:00:00,6:00:00,STAGECOACH,1 | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2 | |
CITY1,6:00:00,6:00:00,STAGECOACH,1 | |
CITY1,6:05:00,6:07:00,NANAA,2 | |
CITY1,6:12:00,6:14:00,NADAV,3 | |
CITY1,6:19:00,6:21:00,DADAN,4 | |
CITY1,6:26:00,6:28:00,EMSI,5 | |
CITY2,6:28:00,6:30:00,EMSI,1 | |
CITY2,6:35:00,6:37:00,DADAN,2 | |
CITY2,6:42:00,6:44:00,NADAV,3 | |
CITY2,6:49:00,6:51:00,NANAA,4 | |
CITY2,6:56:00,6:58:00,STAGECOACH,5 | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1 | |
AB1,8:10:00,8:15:00,BULLFROG,2 | |
AB2,12:05:00,12:05:00,BULLFROG,1 | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2 | |
BFC1,8:20:00,8:20:00,BULLFROG,1 | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2 | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1 | |
BFC2,12:00:00,12:00:00,BULLFROG,2 | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1 | |
AAMV1,9:00:00,9:00:00,AMV,2 | |
AAMV2,10:00:00,10:00:00,AMV,1 | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2 | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1 | |
AAMV3,14:00:00,14:00:00,AMV,2 | |
AAMV4,15:00:00,15:00:00,AMV,1 | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2 | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162 | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582 | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797 | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677 | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821 | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472 | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242 | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218 | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094 |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1 | |
AB,FULLW,AB2,to Airport,1,2 | |
STBA,FULLW,STBA,Shuttle | |
CITY,FULLW,CITY1,,0 | |
CITY,FULLW,CITY2,,1 | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1 | |
BFC,FULLW,BFC2,to Bullfrog,1,2 | |
AAMV,WE,AAMV1,to Amargosa Valley,0 | |
AAMV,WE,AAMV2,to Airport,1 | |
AAMV,WE,AAMV3,to Amargosa Valley,0 | |
AAMV,WE,AAMV4,to Airport,1 |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Autorité de passage de démonstration,http://google.com,America/Los_Angeles | |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
CITY,DTA,Ō,Bar Circle,Route with ĸool unicode shortname,3,,, | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
CITY1,6:00:00,6:00:00,STAGECOACH,0,,,, | |
CITY1,6:12:00,,NADAV,10,,,, | |
CITY1,6:19:00,6:21:00,DADAN,15,,,, | |
CITY1,6:26:00,6:28:00,EMSI,20,,,, | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url,stop_code | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,,,1236 | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,,,1237 | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,,,1238 | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,,, | |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
CITY,FULLW,CITY1,,0,, | |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Autorité de passage de démonstration,http://google.com,America/Los_Angeles | |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport ⇒ Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog ⇒ Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach ⇒ Airport Shuttle,,3,,, | |
CITY,DTA,Ō,Bar Circle,Route with ĸool unicode short name,3,,, | |
AAMV,DTA,,Airport ⇒ Amargosa Valley,,3,,, | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,to airport,1,0,0.212 | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,0,0,1.043 | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,going to nadav,2,3, | |
CITY1,6:12:00,6:14:00,NADAV,3,,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,,, | |
CITY2,6:42:00,6:44:00,NADAV,3,,,, | |
CITY2,6:49:00,6:51:00,NANAA,4,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,,,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,,,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Démonstration),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, | |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,Ō,0,, | |
CITY,FULLW,CITY2,Ō,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, | |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,,,, | |
CITY1,6:12:00,6:14:00,NADAV,3,,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,,, | |
CITY2,6:42:00,6:44:00,NADAV,3,,,, | |
CITY2,6:49:00,6:51:00,NANAA,4,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Autorité de passage de démonstration,http://google.com,America/Los_Angeles | |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url | |
AB,DTA,,Airport ⇒ Bullfrog,,3, | |
BFC,DTA,,Bullfrog ⇒ Furnace Creek Resort,,3,http://google.com | |
STBA,DTA,,Stagecoach ⇒ Airport Shuttle,,3 | |
CITY,DTA,Ō,Bar Circle,Route with ĸool unicode shortname,3, | |
AAMV,DTA,,Airport ⇒ Amargosa Valley,,3, | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,0,to airport,1,0,0.212 | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,0,0,1.043 | |
CITY1,6:00:00,6:00:00,STAGECOACH,0,,,, | |
CITY1,6:05:00,6:07:00,NANAA,5,going to nadav,2,3, | |
CITY1,6:12:00,6:14:00,NADAV,10,,,, | |
CITY1,6:19:00,6:21:00,DADAN,15,,,, | |
CITY1,6:26:00,6:28:00,EMSI,20,,,, | |
CITY2,6:28:00,6:30:00,EMSI,100,,,, | |
CITY2,6:35:00,6:37:00,DADAN,200,,,, | |
CITY2,6:42:00,6:44:00,NADAV,300,,,, | |
CITY2,6:49:00,6:51:00,NANAA,400,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,500,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon | |
FUR_CREEK_RES,Furnace Creek Resort (Démonstration),,36.425288,-117.13316 | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582 | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797 | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677 | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821 | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472 | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242 | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218 | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094 | |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,Ō,,, | |
CITY,FULLW,CITY2,Ō,,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, | |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3,,, | |
CITY,DTA,,City,,3,,, | |
AAMV,DTA,,Airport - Amargosa Valley,,3,,, |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3 | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3 | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3 | |
CITY,DTA,,City,,3 | |
AAMV,DTA,,Airport - Amargosa Valley,,3 |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,,,, | |
CITY1,6:12:00,6:14:00,NADAV,3,,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,,, | |
CITY2,6:42:00,6:44:00,NADAV,3,,,, | |
CITY2,6:49:00,6:51:00,NANAA,4,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3,,, | |
CITY,DTA,,City,,3,,, | |
AAMV,DTA,,Airport - Amargosa Valley,,3,,, |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,,,, | |
CITY1,6:12:00,6:14:00,NADAV,3,,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,,, | |
CITY2,6:42:00,6:44:00,NADAV,3,,,, | |
CITY2,6:49:00,6:51:00,NANAA,4,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
"service_id","monday","tuesday","wednesday","friday","saturday","sunday","start_date","end_date" | |
"FULLW",1,1,1,1,1,1,20070101,20101231 | |
"WE",0,0,0,0,1,1,20070101,20101231 | |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3 | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3 | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3 | |
CITY,DTA,,City,,3 | |
AAMV,DTA,,Airport - Amargosa Valley,,3 |
trip_id,arrival_time,departure_time,stop_id,stop_sequence | |
STBA,6:00:00,6:00:00,STAGECOACH,1 | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2 | |
CITY1,6:00:00,6:00:00,STAGECOACH,1 | |
CITY1,6:05:00,6:07:00,NANAA,2 | |
CITY1,6:12:00,6:14:00,NADAV,3 | |
CITY1,6:19:00,6:21:00,DADAN,4 | |
CITY1,6:26:00,6:28:00,EMSI,5 | |
CITY2,6:28:00,6:30:00,EMSI,1 | |
CITY2,6:35:00,6:37:00,DADAN,2 | |
CITY2,6:42:00,6:44:00,NADAV,3 | |
CITY2,6:49:00,6:51:00,NANAA,4 | |
CITY2,6:56:00,6:58:00,STAGECOACH,5 | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1 | |
AB1,8:10:00,8:15:00,BULLFROG,2 | |
AB2,12:05:00,12:05:00,BULLFROG,1 | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2 | |
BFC1,8:20:00,8:20:00,BULLFROG,1 | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2 | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1 | |
BFC2,12:00:00,12:00:00,BULLFROG,2 | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1 | |
AAMV1,9:00:00,9:00:00,AMV,2 | |
AAMV2,10:00:00,10:00:00,AMV,1 | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2 | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1 | |
AAMV3,14:00:00,14:00:00,AMV,2 | |
AAMV4,15:00:00,15:00:00,AMV,1 | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2 | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162 | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582 | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797 | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677 | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821 | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472 | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242 | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218 | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094 |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1 | |
AB,FULLW,AB2,to Airport,1,2 | |
STBA,FULLW,STBA,Shuttle | |
CITY,FULLW,CITY1,,0 | |
CITY,FULLW,CITY2,,1 | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1 | |
BFC,FULLW,BFC2,to Bullfrog,1,2 | |
AAMV,WE,AAMV1,to Amargosa Valley,0 | |
AAMV,WE,AAMV2,to Airport,1 | |
AAMV,WE,AAMV3,to Amargosa Valley,0 | |
AAMV,WE,AAMV4,to Airport,1 |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3,,, | |
CITY,DTA,,City,,3,,, | |
AAMV,DTA,,Airport - Amargosa Valley,,3,,, |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,0,,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,1,,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,0,,,, | |
CITY1,6:05:00,6:07:00,NANAA,1,,,, | |
CITY1,6:12:00,6:14:00,NADAV,2,,,, | |
CITY1,6:19:00,6:21:00,DADAN,3,,,, | |
CITY1,6:26:00,6:28:00,EMSI,4,,,, | |
CITY2,6:28:00,6:30:00,EMSI,-2,,,, | |
CITY2,6:35:00,6:37:00,DADAN,1,,,, | |
CITY2,6:42:00,6:44:00,NADAV,2,,,, | |
CITY2,6:49:00,6:51:00,NANAA,3,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,4,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,0,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,1,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,0,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,1,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,0,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,1,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,0,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,1,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,0,,,, | |
AAMV1,9:00:00,9:00:00,AMV,1,,,, | |
AAMV2,10:00:00,10:00:00,AMV,0,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,0,,,, | |
AAMV3,14:00:00,14:00:00,AMV,1,,,, | |
AAMV4,15:00:00,15:00:00,AMV,0,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,1,,,, |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
<?xml version="1.0" encoding="UTF-8"?> | |
<kml xmlns="http://earth.google.com/kml/2.0"> | |
<Document> | |
<name>A test file with one placemark</name> | |
<description></description> | |
<Placemark> | |
<name>Test</name> | |
<description></description> | |
<LineString> | |
<coordinates> | |
-93.238861,44.854240,0.000000 | |
-93.238708,44.853081,0.000000 | |
-93.237923,44.852638,0.000000 | |
</coordinates> | |
</LineString> | |
</Placemark> | |
</Document> | |
</kml> | |
<?xml version="1.0" encoding="UTF-8"?> | |
<kml xmlns="http://earth.google.com/kml/2.0"> | |
<Document> | |
<name>A test file with one placemark</name> | |
<description></description> | |
<Placemark> | |
<name>Stop Name</name> | |
<description></description> | |
<Point> | |
<coordinates>-93.239037,44.854164,0.000000</coordinates> | |
</Point> | |
</Placemark> | |
</Document> | |
</kml> | |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,date,exception_type | |
FULLW,20070604,1 | |
WE,20070605,1 | |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3,,, | |
CITY,DTA,,City,,3,,, | |
AAMV,DTA,,Airport - Amargosa Valley,,3,,, |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,,,, | |
CITY1,6:12:00,6:14:00,NADAV,3,,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,,, | |
CITY2,6:42:00,6:44:00,NADAV,3,,,, | |
CITY2,6:49:00,6:51:00,NANAA,4,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport ⇒ Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog ⇒ Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach ⇒ Airport Shuttle,,3,,, | |
STBB,DTA,,Stagecoach ⇒ Airport Shuttle,,3,,, | |
CITY,DTA,,City,,3,,, | |
AAMV,DTA,,Airport ⇒ Amargosa Valley,,3,,, | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,,,, | |
CITY1,6:12:00,6:14:00,NADAV,3,,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,,, | |
CITY2,6:42:00,6:44:00,NADAV,3,,,, | |
CITY2,6:49:00,6:51:00,NANAA,4,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3,,, | |
CITY,DTA,City,City,,3,,, | |
AAMV,DTA,,Airport - Amargosa Valley,,3,,, | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,,,, | |
CITY1,6:12:00,6:14:00,NADAV,3,,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,,, | |
CITY2,6:42:00,6:44:00,NADAV,3,,,, | |
CITY2,6:49:00,6:51:00,NANAA,4,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3,,, | |
CITY,DTA,,City,,3,,, | |
AAMV,DTA,,Airport - Amargosa Valley,,3,,, |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,,,, | |
CITY1,6:12:00,6:14:00,NADAV,3,,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,,, | |
CITY2,6:40:00,6:41:00,NADAR,3,,,, | |
CITY2,6:42:00,6:44:00,NADAV,4,,,, | |
CITY2,6:49:00,6:51:00,NANAA,5,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,6,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
agency_id,agency_name,agency_url,agency_timezone,agency_phone | |
DTA,Autorité de passage de démonstration,http://google.com,America/Los_Angeles,123 12314 | |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport ⇒ Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog ⇒ Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach ⇒ Airport Shuttle,,3,,, | |
CITY,DTA,Ō,Bar Circle,Route with ĸool unicode shortname,3,,, | |
AAMV,DTA,,Airport ⇒ Amargosa Valley,,3,,, | |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,0,to airport,1,0,0.212 | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,0,0,1.043 | |
CITY1,6:00:00,6:00:00,STAGECOACH,0,,,, | |
CITY1,6:05:00,6:07:00,NANAA,5,going to nadav,2,3, | |
CITY1,6:12:00,6:14:00,NADAV,10,,,, | |
CITY1,6:19:00,6:21:00,DADAN,15,,,, | |
CITY1,6:26:00,6:28:00,EMSI,20,,,, | |
CITY2,6:28:00,6:30:00,EMSI,100,,,, | |
CITY2,6:35:00,6:37:00,DADAN,200,,,, | |
CITY2,6:42:00,6:44:00,NADAV,300,,,, | |
CITY2,6:49:00,6:51:00,NANAA,400,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,500,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url,stop_code,location_type,parent_station | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,,,1234,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,,,1235,0,BEATTY_AIRPORT_STATION | |
BEATTY_AIRPORT_STATION,Nye County Airport (Demo),,36.868446,-116.784582,,,1235,1, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,,,,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,,,1236,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,,,1237,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,,,1238,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,,,,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,,,,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,,,,, | |
from_stop_id,to_stop_id,transfer_type,min_transfer_time | |
NADAV,NANAA,3, | |
EMSI,NANAA,2,1200 | |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
not a real zip file | |
agency_id,agency_name,agency_url,agency_timezone,agency_lange | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles,en |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date,leap_day | |
FULLW,1,1,1,1,1,1,1,20070101,20101231, | |
WE,0,0,0,0,0,1,1,20070101,20101231, |
service_id,date,exception_type,leap_day | |
FULLW,20070604,2, |
fare_id,price,currency_type,payment_method,transfers,transfer_time | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,source_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs,superfluous | |
STBA,6:00:00,22:00:00,1800, | |
CITY1,6:00:00,7:59:59,1800, | |
CITY2,6:00:00,7:59:59,1800, | |
CITY1,8:00:00,9:59:59,600, | |
CITY2,8:00:00,9:59:59,600, | |
CITY1,10:00:00,15:59:59,1800, | |
CITY2,10:00:00,15:59:59,1800, | |
CITY1,16:00:00,18:59:59,600, | |
CITY2,16:00:00,18:59:59,600, | |
CITY1,19:00:00,22:00:00,1800, | |
CITY2,19:00:00,22:00:00,1800, |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,Route_Text_Color | |
AB,DTA,,Airport - Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3,,, | |
CITY,DTA,,City,,3,,, | |
AAMV,DTA,,Airport - Amargosa Valley,,3,,, |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_time,shapedisttraveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,,,, | |
CITY1,6:12:00,6:14:00,NADAV,3,,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,,, | |
CITY2,6:42:00,6:44:00,NADAV,3,,,, | |
CITY2,6:49:00,6:51:00,NANAA,4,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, | |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_uri | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, |
from_stop_id,to_stop_id,transfer_type,min_transfer_time,to_stop | |
NADAV,NANAA,3,, | |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,sharpe_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3,,, | |
CITY,DTA,,City,,3,,, | |
AAMV,DTA,,Airport - Amargosa Valley,,3,,, |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,,,, | |
CITY1,6:12:00,6:14:00,NADAV,3,,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,,, | |
CITY2,6:42:00,6:44:00,NADAV,3,,,, | |
CITY2,6:49:00,6:51:00,NANAA,4,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
BOGUS,Bogus Stop (Demo),,36.914682,-116.750677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, | |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
Binary files /dev/null and b/origin-src/transitfeed-1.2.6/test/data/utf16/agency.txt differ
Binary files /dev/null and b/origin-src/transitfeed-1.2.6/test/data/utf16/calendar.txt differ
Binary files /dev/null and b/origin-src/transitfeed-1.2.6/test/data/utf16/calendar_dates.txt differ
Binary files /dev/null and b/origin-src/transitfeed-1.2.6/test/data/utf16/fare_attributes.txt differ
Binary files /dev/null and b/origin-src/transitfeed-1.2.6/test/data/utf16/fare_rules.txt differ
Binary files /dev/null and b/origin-src/transitfeed-1.2.6/test/data/utf16/frequencies.txt differ
Binary files /dev/null and b/origin-src/transitfeed-1.2.6/test/data/utf16/routes.txt differ
Binary files /dev/null and b/origin-src/transitfeed-1.2.6/test/data/utf16/stop_times.txt differ
Binary files /dev/null and b/origin-src/transitfeed-1.2.6/test/data/utf16/stops.txt differ
Binary files /dev/null and b/origin-src/transitfeed-1.2.6/test/data/utf16/trips.txt differ
agency_id,agency_name,agency_url,agency_timezone | |
DTA,Demo Transit Authority,http://google.com,America/Los_Angeles |
service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,start_date,end_date | |
FULLW,1,1,1,1,1,1,1,20070101,20101231 | |
WE,0,0,0,0,0,1,1,20070101,20101231 |
service_id,date,exception_type | |
FULLW,20070604,2 |
fare_id,price,currency_type,payment_method,transfers,transfer_duration | |
p,1.25,USD,0,0, | |
a,5.25,USD,0,0, | |
fare_id,route_id,origin_id,destination_id,contains_id | |
p,AB,,, | |
p,STBA,,, | |
p,BFC,,, | |
a,AAMV,,, | |
trip_id,start_time,end_time,headway_secs | |
STBA,6:00:00,22:00:00,1800 | |
CITY1,6:00:00,7:59:59,1800 | |
CITY2,6:00:00,7:59:59,1800 | |
CITY1,8:00:00,9:59:59,600 | |
CITY2,8:00:00,9:59:59,600 | |
CITY1,10:00:00,15:59:59,1800 | |
CITY2,10:00:00,15:59:59,1800 | |
CITY1,16:00:00,18:59:59,600 | |
CITY2,16:00:00,18:59:59,600 | |
CITY1,19:00:00,22:00:00,1800 | |
CITY2,19:00:00,22:00:00,1800 |
route_id,agency_id,route_short_name,route_long_name,route_desc,route_type,route_url,route_color,route_text_color | |
AB,DTA,,Airport - Bullfrog,,3,,, | |
BFC,DTA,,Bullfrog - Furnace Creek Resort,,3,,, | |
STBA,DTA,,Stagecoach - Airport Shuttle,,3,,, | |
CITY,DTA,,City,,3,,, | |
AAMV,DTA,,Airport - Amargosa Valley,,3,,, |
trip_id,arrival_time,departure_time,stop_id,stop_sequence,stop_headsign,pickup_type,drop_off_type,shape_dist_traveled | |
STBA,6:00:00,6:00:00,STAGECOACH,1,,,, | |
STBA,6:20:00,6:20:00,BEATTY_AIRPORT,2,,,, | |
CITY1,6:00:00,6:00:00,STAGECOACH,1,,,, | |
CITY1,6:05:00,6:07:00,NANAA,2,,,, | |
CITY1,6:12:00,6:14:00,NADAV,3,,,, | |
CITY1,6:19:00,6:21:00,DADAN,4,,,, | |
CITY1,6:26:00,6:28:00,EMSI,5,,,, | |
CITY2,6:28:00,6:30:00,EMSI,1,,,, | |
CITY2,6:35:00,6:37:00,DADAN,2,,,, | |
CITY2,6:42:00,6:44:00,NADAV,3,,,, | |
CITY2,6:49:00,6:51:00,NANAA,4,,,, | |
CITY2,6:56:00,6:58:00,STAGECOACH,5,,,, | |
AB1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AB1,8:10:00,8:15:00,BULLFROG,2,,,, | |
AB2,12:05:00,12:05:00,BULLFROG,1,,,, | |
AB2,12:15:00,12:15:00,BEATTY_AIRPORT,2,,,, | |
BFC1,8:20:00,8:20:00,BULLFROG,1,,,, | |
BFC1,9:20:00,9:20:00,FUR_CREEK_RES,2,,,, | |
BFC2,11:00:00,11:00:00,FUR_CREEK_RES,1,,,, | |
BFC2,12:00:00,12:00:00,BULLFROG,2,,,, | |
AAMV1,8:00:00,8:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV1,9:00:00,9:00:00,AMV,2,,,, | |
AAMV2,10:00:00,10:00:00,AMV,1,,,, | |
AAMV2,11:00:00,11:00:00,BEATTY_AIRPORT,2,,,, | |
AAMV3,13:00:00,13:00:00,BEATTY_AIRPORT,1,,,, | |
AAMV3,14:00:00,14:00:00,AMV,2,,,, | |
AAMV4,15:00:00,15:00:00,AMV,1,,,, | |
AAMV4,16:00:00,16:00:00,BEATTY_AIRPORT,2,,,, |
stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url | |
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,, | |
BEATTY_AIRPORT,Nye County Airport (Demo),,36.868446,-116.784582,, | |
BULLFROG,Bullfrog (Demo),,36.88108,-116.81797,, | |
STAGECOACH,Stagecoach Hotel & Casino (Demo),,36.915682,-116.751677,, | |
NADAV,North Ave / D Ave N (Demo),,36.914893,-116.76821,, | |
NANAA,North Ave / N A Ave (Demo),,36.914944,-116.761472,, | |
DADAN,Doing Ave / D Ave N (Demo),,36.909489,-116.768242,, | |
EMSI,E Main St / S Irving St (Demo),,36.905697,-116.76218,, | |
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,, |
route_id,service_id,trip_id,trip_headsign,direction_id,block_id,shape_id | |
AB,FULLW,AB1,to Bullfrog,0,1, | |
AB,FULLW,AB2,to Airport,1,2, | |
STBA,FULLW,STBA,Shuttle,,, | |
CITY,FULLW,CITY1,,0,, | |
CITY,FULLW,CITY2,,1,, | |
BFC,FULLW,BFC1,to Furnace Creek Resort,0,1, | |
BFC,FULLW,BFC2,to Bullfrog,1,2, | |
AAMV,WE,AAMV1,to Amargosa Valley,0,, | |
AAMV,WE,AAMV2,to Airport,1,, | |
AAMV,WE,AAMV3,to Amargosa Valley,0,, | |
AAMV,WE,AAMV4,to Airport,1,, |
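The recurring stops.txt fixtures above are ordinary comma-separated text. As a hypothetical illustration (not part of the transitfeed test suite), such a block can be parsed with nothing but the Python standard library; the `parse_stops` helper and the inline `STOPS_TXT` sample below are assumptions introduced for this sketch:

```python
import csv
import io

# Two rows copied verbatim from the stops.txt fixture above.
STOPS_TXT = """stop_id,stop_name,stop_desc,stop_lat,stop_lon,zone_id,stop_url
FUR_CREEK_RES,Furnace Creek Resort (Demo),,36.425288,-117.133162,,
AMV,Amargosa Valley (Demo),,36.641496,-116.40094,,
"""

def parse_stops(text):
    """Parse a stops.txt block into dicts, converting coordinates to floats."""
    stops = []
    for row in csv.DictReader(io.StringIO(text)):
        row["stop_lat"] = float(row["stop_lat"])
        row["stop_lon"] = float(row["stop_lon"])
        stops.append(row)
    return stops

stops = parse_stops(STOPS_TXT)
print(stops[0]["stop_id"], stops[0]["stop_lat"])  # FUR_CREEK_RES 36.425288
```

The real library performs far more validation (field presence, coordinate ranges, cross-file references); this only shows the raw CSV shape of the fixtures.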
#!/usr/bin/python2.5 | |
# Test the examples to make sure they are not broken | |
import os | |
import re | |
import transitfeed | |
import unittest | |
import urllib | |
import util | |
class WikiExample(util.TempDirTestCaseBase): | |
# Download example from wiki and run it | |
def runTest(self): | |
wiki_source = urllib.urlopen( | |
'http://googletransitdatafeed.googlecode.com/svn/wiki/TransitFeed.wiki' | |
).read() | |
m = re.search(r'{{{(.*import transitfeed.*)}}}', wiki_source, re.DOTALL) | |
if not m: | |
raise Exception("Failed to find source code on wiki page") | |
wiki_code = m.group(1) | |
exec wiki_code | |
class shuttle_from_xmlfeed(util.TempDirTestCaseBase): | |
def runTest(self): | |
self.CheckCallWithPath( | |
[self.GetExamplePath('shuttle_from_xmlfeed.py'), | |
'--input', 'file:' + self.GetExamplePath('shuttle_from_xmlfeed.xml'), | |
'--output', 'shuttle-YYYYMMDD.zip', | |
# save the path of the dated output to tempfilepath | |
'--execute', 'echo %(path)s > outputpath']) | |
dated_path = open('outputpath').read().strip() | |
    self.assertTrue(re.match(r'shuttle-20\d\d[01]\d[0123]\d\.zip$', dated_path)) | |
if not os.path.exists(dated_path): | |
raise Exception('did not create expected file') | |
class table(util.TempDirTestCaseBase): | |
def runTest(self): | |
self.CheckCallWithPath( | |
[self.GetExamplePath('table.py'), | |
'--input', self.GetExamplePath('table.txt'), | |
'--output', 'google_transit.zip']) | |
if not os.path.exists('google_transit.zip'): | |
raise Exception('should have created output') | |
class small_builder(util.TempDirTestCaseBase): | |
def runTest(self): | |
self.CheckCallWithPath( | |
[self.GetExamplePath('small_builder.py'), | |
'--output', 'google_transit.zip']) | |
if not os.path.exists('google_transit.zip'): | |
raise Exception('should have created output') | |
class google_random_queries(util.TempDirTestCaseBase): | |
def testNormalRun(self): | |
self.CheckCallWithPath( | |
[self.GetExamplePath('google_random_queries.py'), | |
'--output', 'queries.html', | |
'--limit', '5', | |
self.GetPath('test', 'data', 'good_feed')]) | |
if not os.path.exists('queries.html'): | |
raise Exception('should have created output') | |
def testInvalidFeedStillWorks(self): | |
self.CheckCallWithPath( | |
[self.GetExamplePath('google_random_queries.py'), | |
'--output', 'queries.html', | |
'--limit', '5', | |
self.GetPath('test', 'data', 'invalid_route_agency')]) | |
if not os.path.exists('queries.html'): | |
raise Exception('should have created output') | |
def testBadArgs(self): | |
self.CheckCallWithPath( | |
[self.GetExamplePath('google_random_queries.py'), | |
'--output', 'queries.html', | |
'--limit', '5'], | |
expected_retcode=2) | |
if os.path.exists('queries.html'): | |
raise Exception('should not have created output') | |
class filter_unused_stops(util.TempDirTestCaseBase): | |
def testNormalRun(self): | |
unused_stop_path = self.GetPath('test', 'data', 'unused_stop') | |
# Make sure loading the input feed fails | |
accumulator = transitfeed.ExceptionProblemAccumulator(raise_warnings=True) | |
problem_reporter = transitfeed.ProblemReporter(accumulator) | |
try: | |
transitfeed.Loader( | |
unused_stop_path, | |
problems=problem_reporter, extra_validation=True).Load() | |
self.fail('UnusedStop exception expected') | |
except transitfeed.UnusedStop, e: | |
pass | |
(stdout, stderr) = self.CheckCallWithPath( | |
[self.GetExamplePath('filter_unused_stops.py'), | |
'--list_removed', | |
unused_stop_path, 'output.zip']) | |
# Extra stop was listed on stdout | |
self.assertNotEqual(stdout.find('Bogus Stop'), -1) | |
# Make sure unused stop was removed and another stop wasn't | |
schedule = transitfeed.Loader( | |
'output.zip', problems=problem_reporter, extra_validation=True).Load() | |
schedule.GetStop('STAGECOACH') | |
if __name__ == '__main__': | |
unittest.main() | |
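WikiExample above pulls its source code out of wiki markup with a DOTALL regex. The same extraction pattern, sketched against a small hypothetical page rather than the live wiki, looks like:

```python
import re

# Hypothetical page text; {{{ ... }}} delimits a code block in the old
# Google Code wiki markup that WikiExample scrapes.
wiki_source = """Intro text.
{{{
import transitfeed
schedule = transitfeed.Schedule()
}}}
Trailing text."""

# re.DOTALL lets '.' span newlines, so the whole block is captured.
m = re.search(r'{{{(.*import transitfeed.*)}}}', wiki_source, re.DOTALL)
if not m:
    raise Exception("Failed to find source code on wiki page")
code = m.group(1).strip()
print(code.splitlines()[0])  # → import transitfeed
```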
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
# Smoke-tests the feed validator. Make sure it runs and returns the right | |
# results for a valid feed and for a feed with errors. | |
import datetime | |
import feedvalidator | |
import os.path | |
import re | |
import StringIO | |
import transitfeed | |
import unittest | |
from urllib2 import HTTPError, URLError | |
import urllib2 | |
import util | |
import zipfile | |
class FullTests(util.TempDirTestCaseBase): | |
def testGoodFeed(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '-n', '--latest_version', | |
transitfeed.__version__, self.GetPath('test', 'data', 'good_feed')]) | |
self.assertTrue(re.search(r'feed validated successfully', out)) | |
self.assertFalse(re.search(r'ERROR', out)) | |
htmlout = open('validation-results.html').read() | |
self.assertTrue(re.search(r'feed validated successfully', htmlout)) | |
self.assertFalse(re.search(r'ERROR', htmlout)) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testGoodFeedConsoleOutput(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '-n', '--latest_version', | |
transitfeed.__version__, | |
'--output=CONSOLE', self.GetPath('test', 'data', 'good_feed')]) | |
self.assertTrue(re.search(r'feed validated successfully', out)) | |
self.assertFalse(re.search(r'ERROR', out)) | |
self.assertFalse(os.path.exists('validation-results.html')) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testMissingStops(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '-n', '--latest_version', | |
transitfeed.__version__, | |
self.GetPath('test', 'data', 'missing_stops')], | |
expected_retcode=1) | |
self.assertTrue(re.search(r'ERROR', out)) | |
self.assertFalse(re.search(r'feed validated successfully', out)) | |
htmlout = open('validation-results.html').read() | |
self.assertTrue(re.search(r'Invalid value BEATTY_AIRPORT', htmlout)) | |
self.assertFalse(re.search(r'feed validated successfully', htmlout)) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testMissingStopsConsoleOutput(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '-n', '-o', 'console', | |
'--latest_version', transitfeed.__version__, | |
self.GetPath('test', 'data', 'missing_stops')], | |
expected_retcode=1) | |
self.assertTrue(re.search(r'ERROR', out)) | |
self.assertFalse(re.search(r'feed validated successfully', out)) | |
self.assertTrue(re.search(r'Invalid value BEATTY_AIRPORT', out)) | |
self.assertFalse(os.path.exists('validation-results.html')) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testLimitedErrors(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '-l', '2', '-n', | |
'--latest_version', transitfeed.__version__, | |
self.GetPath('test', 'data', 'missing_stops')], | |
expected_retcode=1) | |
self.assertTrue(re.search(r'ERROR', out)) | |
self.assertFalse(re.search(r'feed validated successfully', out)) | |
htmlout = open('validation-results.html').read() | |
self.assertEquals(2, len(re.findall(r'class="problem">stop_id<', htmlout))) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testBadDateFormat(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '-n', '--latest_version', | |
transitfeed.__version__, | |
self.GetPath('test', 'data', 'bad_date_format')], | |
expected_retcode=1) | |
self.assertTrue(re.search(r'ERROR', out)) | |
self.assertFalse(re.search(r'feed validated successfully', out)) | |
htmlout = open('validation-results.html').read() | |
self.assertTrue(re.search(r'in field <code>start_date', htmlout)) | |
self.assertTrue(re.search(r'in field <code>date', htmlout)) | |
self.assertFalse(re.search(r'feed validated successfully', htmlout)) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testBadUtf8(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '-n', '--latest_version', | |
transitfeed.__version__, self.GetPath('test', 'data', 'bad_utf8')], | |
expected_retcode=1) | |
self.assertTrue(re.search(r'ERROR', out)) | |
self.assertFalse(re.search(r'feed validated successfully', out)) | |
htmlout = open('validation-results.html').read() | |
self.assertTrue(re.search(r'Unicode error', htmlout)) | |
self.assertFalse(re.search(r'feed validated successfully', htmlout)) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testFileNotFound(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '-n', '--latest_version', | |
transitfeed.__version__, 'file-not-found.zip'], | |
expected_retcode=1) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testBadOutputPath(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '-n', '--latest_version', | |
transitfeed.__version__, '-o', 'path/does/not/exist.html', | |
self.GetPath('test', 'data', 'good_feed')], | |
expected_retcode=2) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testCrashHandler(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '-n', '--latest_version', | |
transitfeed.__version__, 'IWantMyvalidation-crash.txt'], | |
expected_retcode=127) | |
self.assertTrue(re.search(r'Yikes', out)) | |
self.assertFalse(re.search(r'feed validated successfully', out)) | |
crashout = open('transitfeedcrash.txt').read() | |
self.assertTrue(re.search(r'For testing the feed validator crash handler', | |
crashout)) | |
def testCheckVersionIsRun(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '-n', '--latest_version', | |
'100.100.100', self.GetPath('test', 'data', 'good_feed')]) | |
self.assertTrue(re.search(r'feed validated successfully', out)) | |
self.assertTrue(re.search(r'A new version 100.100.100', out)) | |
htmlout = open('validation-results.html').read() | |
self.assertTrue(re.search(r'A new version 100.100.100', htmlout)) | |
self.assertFalse(re.search(r'ERROR', htmlout)) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testCheckVersionIsRunConsoleOutput(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '-n', '-o', 'console', | |
'--latest_version=100.100.100', | |
self.GetPath('test', 'data', 'good_feed')]) | |
self.assertTrue(re.search(r'feed validated successfully', out)) | |
self.assertTrue(re.search(r'A new version 100.100.100', out)) | |
self.assertFalse(os.path.exists('validation-results.html')) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testUsage(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('feedvalidator.py'), '--invalid_opt'], expected_retcode=2) | |
self.assertMatchesRegex(r'[Uu]sage: feedvalidator.py \[options\]', err) | |
self.assertMatchesRegex(r'wiki/FeedValidator', err) | |
self.assertMatchesRegex(r'--output', err) # output includes all usage info | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
self.assertFalse(os.path.exists('validation-results.html')) | |
# Regression tests to ensure that CalendarSummary works properly | |
# even when the feed starts in the future or expires in less than | |
# 60 days | |
# See http://code.google.com/p/googletransitdatafeed/issues/detail?id=204 | |
class CalendarSummaryTestCase(util.TestCase): | |
# Test feeds starting in the future | |
def testFutureFeedDoesNotCrashCalendarSummary(self): | |
today = datetime.date.today() | |
start_date = today + datetime.timedelta(days=20) | |
end_date = today + datetime.timedelta(days=80) | |
schedule = transitfeed.Schedule() | |
service_period = schedule.GetDefaultServicePeriod() | |
service_period.SetStartDate(start_date.strftime("%Y%m%d")) | |
service_period.SetEndDate(end_date.strftime("%Y%m%d")) | |
service_period.SetWeekdayService(True) | |
result = feedvalidator.CalendarSummary(schedule) | |
self.assertEquals(0, result['max_trips']) | |
self.assertEquals(0, result['min_trips']) | |
self.assertTrue(re.search("40 service dates", result['max_trips_dates'])) | |
# Test feeds ending in less than 60 days | |
def testShortFeedDoesNotCrashCalendarSummary(self): | |
start_date = datetime.date.today() | |
end_date = start_date + datetime.timedelta(days=15) | |
schedule = transitfeed.Schedule() | |
service_period = schedule.GetDefaultServicePeriod() | |
service_period.SetStartDate(start_date.strftime("%Y%m%d")) | |
service_period.SetEndDate(end_date.strftime("%Y%m%d")) | |
service_period.SetWeekdayService(True) | |
result = feedvalidator.CalendarSummary(schedule) | |
self.assertEquals(0, result['max_trips']) | |
self.assertEquals(0, result['min_trips']) | |
self.assertTrue(re.search("15 service dates", result['max_trips_dates'])) | |
# Test feeds starting in the future *and* ending in less than 60 days | |
def testFutureAndShortFeedDoesNotCrashCalendarSummary(self): | |
today = datetime.date.today() | |
start_date = today + datetime.timedelta(days=2) | |
end_date = today + datetime.timedelta(days=3) | |
schedule = transitfeed.Schedule() | |
service_period = schedule.GetDefaultServicePeriod() | |
service_period.SetStartDate(start_date.strftime("%Y%m%d")) | |
service_period.SetEndDate(end_date.strftime("%Y%m%d")) | |
service_period.SetWeekdayService(True) | |
result = feedvalidator.CalendarSummary(schedule) | |
self.assertEquals(0, result['max_trips']) | |
self.assertEquals(0, result['min_trips']) | |
self.assertTrue(re.search("1 service date", result['max_trips_dates'])) | |
# Test feeds without service days | |
def testFeedWithNoDaysDoesNotCrashCalendarSummary(self): | |
schedule = transitfeed.Schedule() | |
result = feedvalidator.CalendarSummary(schedule) | |
self.assertEquals({}, result) | |
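The date windows those CalendarSummary regression tests exercise come down to plain datetime arithmetic. A rough stand-alone sketch follows; treating the end date as exclusive and the window as 60 days starting today are assumptions made here to reproduce the date counts the tests expect, not a statement of CalendarSummary's actual internals:

```python
import datetime

def service_dates_in_window(start, end, window_days=60):
    """Service dates in [start, end) that also fall in the next window_days.

    Both the exclusive end date and the [today, today + window_days)
    window are illustrative assumptions.
    """
    today = datetime.date.today()
    window_end = today + datetime.timedelta(days=window_days)
    first = max(start, today)
    last = min(end, window_end)  # exclusive bound
    n = (last - first).days
    return [first + datetime.timedelta(days=d) for d in range(max(n, 0))]

today = datetime.date.today()
# Feed starting 20 days out and ending 80 days out: 40 dates in the window.
dates = service_dates_in_window(today + datetime.timedelta(days=20),
                                today + datetime.timedelta(days=80))
print(len(dates))  # → 40
```

The same function yields 15 dates for a feed running from today to today + 15 days, and 1 date for a feed spanning days 2 to 3, matching the three assertions above.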
class MockOptions: | |
"""Pretend to be an optparse options object suitable for testing.""" | |
def __init__(self): | |
self.limit_per_type = 5 | |
self.memory_db = True | |
self.check_duplicate_trips = True | |
self.latest_version = transitfeed.__version__ | |
self.output = 'fake-filename.zip' | |
self.manual_entry = False | |
self.service_gap_interval = None | |
self.extension = None | |
class FeedValidatorTestCase(util.TempDirTestCaseBase): | |
def testBadEolContext(self): | |
"""Make sure the filename is included in the report of a bad eol.""" | |
filename = "routes.txt" | |
old_zip = zipfile.ZipFile( | |
self.GetPath('test', 'data', 'good_feed.zip'), 'r') | |
content_dict = self.ConvertZipToDict(old_zip) | |
old_routes = content_dict[filename] | |
new_routes = old_routes.replace('\n', '\r\n', 1) | |
self.assertNotEquals(old_routes, new_routes) | |
content_dict[filename] = new_routes | |
new_zipfile_mem = self.ConvertDictToZip(content_dict) | |
options = MockOptions() | |
output_file = StringIO.StringIO() | |
feedvalidator.RunValidationOutputToFile( | |
new_zipfile_mem, options, output_file) | |
self.assertMatchesRegex(filename, output_file.getvalue()) | |
class LimitPerTypeProblemReporterTestCase(util.TestCase): | |
def CreateLimitPerTypeProblemReporter(self, limit): | |
accumulator = feedvalidator.LimitPerTypeProblemAccumulator(limit) | |
problems = transitfeed.ProblemReporter(accumulator) | |
return problems | |
def assertProblemsAttribute(self, problem_type, class_name, attribute_name, | |
expected): | |
"""Join the value of each exception's attribute_name in order.""" | |
problem_attribute_list = [] | |
for e in self.problems.GetAccumulator().ProblemList( | |
problem_type, class_name).problems: | |
problem_attribute_list.append(getattr(e, attribute_name)) | |
self.assertEquals(expected, " ".join(problem_attribute_list)) | |
def testLimitOtherProblems(self): | |
"""The first N of each type should be kept.""" | |
self.problems = self.CreateLimitPerTypeProblemReporter(2) | |
self.accumulator = self.problems.GetAccumulator() | |
self.problems.OtherProblem("e1", type=transitfeed.TYPE_ERROR) | |
self.problems.OtherProblem("w1", type=transitfeed.TYPE_WARNING) | |
self.problems.OtherProblem("e2", type=transitfeed.TYPE_ERROR) | |
self.problems.OtherProblem("e3", type=transitfeed.TYPE_ERROR) | |
self.problems.OtherProblem("w2", type=transitfeed.TYPE_WARNING) | |
self.assertEquals(2, self.accumulator.WarningCount()) | |
self.assertEquals(3, self.accumulator.ErrorCount()) | |
# These are BoundedProblemList objects | |
warning_bounded_list = self.accumulator.ProblemList( | |
transitfeed.TYPE_WARNING, "OtherProblem") | |
error_bounded_list = self.accumulator.ProblemList( | |
transitfeed.TYPE_ERROR, "OtherProblem") | |
self.assertEquals(2, warning_bounded_list.count) | |
self.assertEquals(3, error_bounded_list.count) | |
self.assertEquals(0, warning_bounded_list.dropped_count) | |
self.assertEquals(1, error_bounded_list.dropped_count) | |
self.assertProblemsAttribute(transitfeed.TYPE_ERROR, "OtherProblem", | |
"description", "e1 e2") | |
self.assertProblemsAttribute(transitfeed.TYPE_WARNING, "OtherProblem", | |
"description", "w1 w2") | |
def testKeepUnsorted(self): | |
"""An imperfect test that insort triggers ExceptionWithContext.__cmp__.""" | |
# If ExceptionWithContext.__cmp__ doesn't trigger TypeError in | |
# bisect.insort then the default comparison of object id will be used. The | |
# id values tend to be given out in order of creation so call | |
# problems._Report with objects in a different order. This test should | |
# break if ExceptionWithContext.__cmp__ is removed or changed to return 0 | |
# or cmp(id(self), id(y)). | |
exceptions = [] | |
for i in range(20): | |
exceptions.append(transitfeed.OtherProblem(description="e%i" % i)) | |
exceptions = exceptions[10:] + exceptions[:10] | |
self.problems = self.CreateLimitPerTypeProblemReporter(3) | |
self.accumulator = self.problems.GetAccumulator() | |
for e in exceptions: | |
self.problems.AddToAccumulator(e) | |
self.assertEquals(0, self.accumulator.WarningCount()) | |
self.assertEquals(20, self.accumulator.ErrorCount()) | |
bounded_list = self.accumulator.ProblemList( | |
transitfeed.TYPE_ERROR, "OtherProblem") | |
self.assertEquals(20, bounded_list.count) | |
self.assertEquals(17, bounded_list.dropped_count) | |
self.assertProblemsAttribute(transitfeed.TYPE_ERROR, "OtherProblem", | |
"description", "e10 e11 e12") | |
def testLimitSortedTooFastTravel(self): | |
"""Sort by decreasing distance, keeping the N greatest.""" | |
self.problems = self.CreateLimitPerTypeProblemReporter(3) | |
self.accumulator = self.problems.GetAccumulator() | |
self.problems.TooFastTravel("t1", "prev stop", "next stop", 11230.4, 5, | |
None) | |
self.problems.TooFastTravel("t2", "prev stop", "next stop", 1120.4, 5, None) | |
self.problems.TooFastTravel("t3", "prev stop", "next stop", 1130.4, 5, None) | |
self.problems.TooFastTravel("t4", "prev stop", "next stop", 1230.4, 5, None) | |
self.assertEquals(0, self.accumulator.WarningCount()) | |
self.assertEquals(4, self.accumulator.ErrorCount()) | |
self.assertProblemsAttribute(transitfeed.TYPE_ERROR, "TooFastTravel", | |
"trip_id", "t1 t4 t3") | |
def testLimitSortedStopTooFarFromParentStation(self): | |
"""Sort by decreasing distance, keeping the N greatest.""" | |
self.problems = self.CreateLimitPerTypeProblemReporter(3) | |
self.accumulator = self.problems.GetAccumulator() | |
for i, distance in enumerate((1000, 3002.0, 1500, 2434.1, 5023.21)): | |
self.problems.StopTooFarFromParentStation( | |
"s%d" % i, "S %d" % i, "p%d" % i, "P %d" % i, distance) | |
self.assertEquals(5, self.accumulator.WarningCount()) | |
self.assertEquals(0, self.accumulator.ErrorCount()) | |
self.assertProblemsAttribute(transitfeed.TYPE_WARNING, | |
"StopTooFarFromParentStation", "stop_id", "s4 s1 s3") | |
def testLimitSortedStopsTooClose(self): | |
"""Sort by increasing distance, keeping the N closest.""" | |
self.problems = self.CreateLimitPerTypeProblemReporter(3) | |
self.accumulator = self.problems.GetAccumulator() | |
for i, distance in enumerate((4.0, 3.0, 2.5, 2.2, 1.0, 0.0)): | |
self.problems.StopsTooClose( | |
"Sa %d" % i, "sa%d" % i, "Sb %d" % i, "sb%d" % i, distance) | |
self.assertEquals(6, self.accumulator.WarningCount()) | |
self.assertEquals(0, self.accumulator.ErrorCount()) | |
self.assertProblemsAttribute(transitfeed.TYPE_WARNING, | |
"StopsTooClose", "stop_id_a", "sa5 sa4 sa3") | |
class CheckVersionTestCase(util.TempDirTestCaseBase): | |
def setUp(self): | |
self.mock = MockURLOpen() | |
def tearDown(self): | |
self.mock = None | |
feedvalidator.urlopen = urllib2.urlopen | |
def testAssignedDifferentVersion(self): | |
problems = feedvalidator.CheckVersion('100.100.100') | |
self.assertTrue(re.search(r'A new version 100.100.100', problems)) | |
def testAssignedSameVersion(self): | |
problems = feedvalidator.CheckVersion(transitfeed.__version__) | |
self.assertEquals(problems, None) | |
def testGetCorrectReturns(self): | |
feedvalidator.urlopen = self.mock.mockedConnectSuccess | |
problems = feedvalidator.CheckVersion() | |
self.assertTrue(re.search(r'A new version 100.0.1', problems)) | |
def testPageNotFound(self): | |
feedvalidator.urlopen = self.mock.mockedPageNotFound | |
problems = feedvalidator.CheckVersion() | |
self.assertTrue(re.search(r'The server couldn\'t', problems)) | |
self.assertTrue(re.search(r'Error code: 404', problems)) | |
def testConnectionTimeOut(self): | |
feedvalidator.urlopen = self.mock.mockedConnectionTimeOut | |
problems = feedvalidator.CheckVersion() | |
self.assertTrue(re.search(r'We failed to reach', problems)) | |
self.assertTrue(re.search(r'Reason: Connection timed', problems)) | |
def testGetAddrInfoFailed(self): | |
feedvalidator.urlopen = self.mock.mockedGetAddrInfoFailed | |
problems = feedvalidator.CheckVersion() | |
self.assertTrue(re.search(r'We failed to reach', problems)) | |
self.assertTrue(re.search(r'Reason: Getaddrinfo failed', problems)) | |
def testEmptyIsReturned(self): | |
feedvalidator.urlopen = self.mock.mockedEmptyIsReturned | |
problems = feedvalidator.CheckVersion() | |
self.assertTrue(re.search(r'We had trouble parsing', problems)) | |
class MockURLOpen: | |
"""Pretend to be a urllib2.urlopen suitable for testing.""" | |
def mockedConnectSuccess(self, request): | |
return StringIO.StringIO('<li><a href="transitfeed-1.0.0/">transitfeed-' | |
'1.0.0/</a></li><li><a href=transitfeed-100.0.1/>' | |
'transitfeed-100.0.1/</a></li>') | |
def mockedPageNotFound(self, request): | |
raise HTTPError(request.get_full_url(), 404, 'Not Found', | |
request.header_items(), None) | |
def mockedConnectionTimeOut(self, request): | |
raise URLError('Connection timed out') | |
def mockedGetAddrInfoFailed(self, request): | |
raise URLError('Getaddrinfo failed') | |
def mockedEmptyIsReturned(self, request): | |
return StringIO.StringIO() | |
if __name__ == '__main__': | |
unittest.main() | |
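CheckVersionTestCase works by swapping `feedvalidator.urlopen` for a mock method and restoring it in tearDown. The underlying monkey-patching pattern, sketched here with a stand-alone module-level function (`fetch_latest_version` is hypothetical, not the real CheckVersion):

```python
from urllib.error import URLError  # urllib2.URLError in the Python 2 code above

def urlopen(request):
    raise NotImplementedError("network disabled in this sketch")

def fetch_latest_version():
    """Toy stand-in for CheckVersion's download step; looks up urlopen
    through the module global, which is what makes patching work."""
    try:
        return urlopen('http://example.invalid/versions').read()
    except URLError as e:
        return 'We failed to reach the server. Reason: %s' % e.reason

def mocked_timeout(request):
    raise URLError('Connection timed out')

# Patch the module-level name, exactly as the tests above do in setUp.
original = urlopen
urlopen = mocked_timeout
try:
    message = fetch_latest_version()
finally:
    urlopen = original  # restore, mirroring tearDown

print(message)  # → We failed to reach the server. Reason: Connection timed out
```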
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
# Unit tests for the kmlparser module. | |
import kmlparser | |
import os.path | |
import shutil | |
from StringIO import StringIO | |
import transitfeed | |
import unittest | |
import util | |
class TestStopsParsing(util.GetPathTestCase): | |
def testSingleStop(self): | |
feed = transitfeed.Schedule() | |
kmlFile = self.GetTestDataPath('one_stop.kml') | |
kmlparser.KmlParser().Parse(kmlFile, feed) | |
stops = feed.GetStopList() | |
self.assertEqual(1, len(stops)) | |
stop = stops[0] | |
self.assertEqual(u'Stop Name', stop.stop_name) | |
self.assertAlmostEqual(-93.239037, stop.stop_lon) | |
self.assertAlmostEqual(44.854164, stop.stop_lat) | |
write_output = StringIO() | |
feed.WriteGoogleTransitFeed(write_output) | |
def testSingleShape(self): | |
feed = transitfeed.Schedule() | |
kmlFile = self.GetTestDataPath('one_line.kml') | |
kmlparser.KmlParser().Parse(kmlFile, feed) | |
shapes = feed.GetShapeList() | |
self.assertEqual(1, len(shapes)) | |
shape = shapes[0] | |
self.assertEqual(3, len(shape.points)) | |
self.assertAlmostEqual(44.854240, shape.points[0][0]) | |
self.assertAlmostEqual(-93.238861, shape.points[0][1]) | |
self.assertAlmostEqual(44.853081, shape.points[1][0]) | |
self.assertAlmostEqual(-93.238708, shape.points[1][1]) | |
self.assertAlmostEqual(44.852638, shape.points[2][0]) | |
self.assertAlmostEqual(-93.237923, shape.points[2][1]) | |
write_output = StringIO() | |
feed.WriteGoogleTransitFeed(write_output) | |
class FullTests(util.TempDirTestCaseBase): | |
def testNormalRun(self): | |
shutil.copyfile(self.GetTestDataPath('one_stop.kml'), 'one_stop.kml') | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('kmlparser.py'), 'one_stop.kml', 'one_stop.zip']) | |
# There will be lots of problems, but ignore them | |
accumulator = util.RecordingProblemAccumulator(self) | |
problems = transitfeed.ProblemReporter(accumulator) | |
schedule = transitfeed.Loader('one_stop.zip', problems=problems).Load() | |
self.assertEquals(len(schedule.GetStopList()), 1) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testCommandLineError(self): | |
(out, err) = self.CheckCallWithPath([self.GetPath('kmlparser.py')], | |
expected_retcode=2) | |
self.assertMatchesRegex(r'did not provide .+ arguments', err) | |
self.assertMatchesRegex(r'[Uu]sage:', err) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testCrashHandler(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('kmlparser.py'), 'IWantMyCrash', 'output.zip'], | |
stdin_str="\n", expected_retcode=127) | |
self.assertMatchesRegex(r'Yikes', out) | |
crashout = open('transitfeedcrash.txt').read() | |
self.assertMatchesRegex(r'For testCrashHandler', crashout) | |
if __name__ == '__main__': | |
unittest.main() | |
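The kmlparser tests above run real KML files through KmlParser. Extracting a Placemark's name and coordinates with nothing but stdlib ElementTree, on a minimal hypothetical KML snippet built from the same demo stop, looks like:

```python
import xml.etree.ElementTree as ET

KML = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Stop Name</name>
    <Point><coordinates>-93.239037,44.854164,0</coordinates></Point>
  </Placemark>
</kml>"""

NS = {'kml': 'http://www.opengis.net/kml/2.2'}
root = ET.fromstring(KML)
placemark = root.find('kml:Placemark', NS)
name = placemark.find('kml:name', NS).text
# KML coordinates are "lon,lat[,alt]" -- note the longitude-first order.
lon, lat, _ = placemark.find('.//kml:coordinates', NS).text.split(',')
print(name, lon, lat)
```

The namespace dict matters: without it, `find('Placemark')` returns None because every element is in the KML namespace.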
#!/usr/bin/python2.4 | |
# | |
# Copyright 2008 Google Inc. All Rights Reserved. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""Unit tests for the kmlwriter module.""" | |
import os | |
import StringIO | |
import tempfile | |
import unittest | |
import kmlparser | |
import kmlwriter | |
import transitfeed | |
import util | |
try: | |
import xml.etree.ElementTree as ET # python 2.5 | |
except ImportError, e: | |
import elementtree.ElementTree as ET # older pythons | |
def DataPath(path): | |
"""Return the path to a given file in the test data directory. | |
Args: | |
path: The path relative to the test data directory. | |
Returns: | |
The absolute path. | |
""" | |
here = os.path.dirname(__file__) | |
return os.path.join(here, 'data', path) | |
def _ElementToString(root): | |
"""Returns the node as an XML string. | |
Args: | |
root: The ElementTree.Element instance. | |
Returns: | |
The XML string. | |
""" | |
output = StringIO.StringIO() | |
ET.ElementTree(root).write(output, 'utf-8') | |
return output.getvalue() | |
class TestKMLStopsRoundtrip(util.TestCase): | |
"""Checks to see whether all stops are preserved when going to and from KML. | |
""" | |
def setUp(self): | |
fd, self.kml_output = tempfile.mkstemp('kml') | |
os.close(fd) | |
def tearDown(self): | |
os.remove(self.kml_output) | |
def runTest(self): | |
gtfs_input = DataPath('good_feed.zip') | |
feed1 = transitfeed.Loader(gtfs_input).Load() | |
kmlwriter.KMLWriter().Write(feed1, self.kml_output) | |
feed2 = transitfeed.Schedule() | |
kmlparser.KmlParser().Parse(self.kml_output, feed2) | |
stop_name_mapper = lambda x: x.stop_name | |
stops1 = set(map(stop_name_mapper, feed1.GetStopList())) | |
stops2 = set(map(stop_name_mapper, feed2.GetStopList())) | |
self.assertEqual(stops1, stops2) | |
class TestKMLGeneratorMethods(util.TestCase): | |
"""Tests the various KML element creation methods of KMLWriter.""" | |
def setUp(self): | |
self.kmlwriter = kmlwriter.KMLWriter() | |
self.parent = ET.Element('parent') | |
def testCreateFolderVisible(self): | |
element = self.kmlwriter._CreateFolder(self.parent, 'folder_name') | |
self.assertEqual(_ElementToString(element), | |
'<Folder><name>folder_name</name></Folder>') | |
def testCreateFolderNotVisible(self): | |
element = self.kmlwriter._CreateFolder(self.parent, 'folder_name', | |
visible=False) | |
self.assertEqual(_ElementToString(element), | |
'<Folder><name>folder_name</name>' | |
'<visibility>0</visibility></Folder>') | |
def testCreateFolderWithDescription(self): | |
element = self.kmlwriter._CreateFolder(self.parent, 'folder_name', | |
description='folder_desc') | |
self.assertEqual(_ElementToString(element), | |
'<Folder><name>folder_name</name>' | |
'<description>folder_desc</description></Folder>') | |
def testCreatePlacemark(self): | |
element = self.kmlwriter._CreatePlacemark(self.parent, 'abcdef') | |
self.assertEqual(_ElementToString(element), | |
'<Placemark><name>abcdef</name></Placemark>') | |
def testCreatePlacemarkWithStyle(self): | |
element = self.kmlwriter._CreatePlacemark(self.parent, 'abcdef', | |
style_id='ghijkl') | |
self.assertEqual(_ElementToString(element), | |
'<Placemark><name>abcdef</name>' | |
'<styleUrl>#ghijkl</styleUrl></Placemark>') | |
def testCreatePlacemarkNotVisible(self): | |
element = self.kmlwriter._CreatePlacemark(self.parent, 'abcdef', | |
visible=False) | |
self.assertEqual(_ElementToString(element), | |
'<Placemark><name>abcdef</name>' | |
'<visibility>0</visibility></Placemark>') | |
def testCreatePlacemarkWithDescription(self): | |
element = self.kmlwriter._CreatePlacemark(self.parent, 'abcdef', | |
description='ghijkl') | |
self.assertEqual(_ElementToString(element), | |
'<Placemark><name>abcdef</name>' | |
'<description>ghijkl</description></Placemark>') | |
def testCreateLineString(self): | |
coord_list = [(2.0, 1.0), (4.0, 3.0), (6.0, 5.0)] | |
element = self.kmlwriter._CreateLineString(self.parent, coord_list) | |
self.assertEqual(_ElementToString(element), | |
'<LineString><tessellate>1</tessellate>' | |
'<coordinates>%f,%f %f,%f %f,%f</coordinates>' | |
'</LineString>' % (2.0, 1.0, 4.0, 3.0, 6.0, 5.0)) | |
def testCreateLineStringWithAltitude(self): | |
coord_list = [(2.0, 1.0, 10), (4.0, 3.0, 20), (6.0, 5.0, 30.0)] | |
element = self.kmlwriter._CreateLineString(self.parent, coord_list) | |
self.assertEqual(_ElementToString(element), | |
'<LineString><tessellate>1</tessellate>' | |
'<altitudeMode>absolute</altitudeMode>' | |
'<coordinates>%f,%f,%f %f,%f,%f %f,%f,%f</coordinates>' | |
'</LineString>' % | |
(2.0, 1.0, 10.0, 4.0, 3.0, 20.0, 6.0, 5.0, 30.0)) | |
def testCreateLineStringForShape(self): | |
shape = transitfeed.Shape('shape') | |
shape.AddPoint(1.0, 1.0) | |
shape.AddPoint(2.0, 4.0) | |
shape.AddPoint(3.0, 9.0) | |
element = self.kmlwriter._CreateLineStringForShape(self.parent, shape) | |
self.assertEqual(_ElementToString(element), | |
'<LineString><tessellate>1</tessellate>' | |
'<coordinates>%f,%f %f,%f %f,%f</coordinates>' | |
'</LineString>' % (1.0, 1.0, 4.0, 2.0, 9.0, 3.0)) | |
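Note in testCreateLineStringForShape that shape points are stored (lat, lon) while the KML `<coordinates>` element wants lon,lat; a minimal sketch of that swap (a stand-alone helper, not KMLWriter's own code) is:

```python
def kml_coordinates(points):
    """Render (lat, lon) point tuples as a KML coordinate string.

    KML orders each tuple lon,lat, so the pairs are swapped on output.
    """
    return ' '.join('%f,%f' % (lon, lat) for (lat, lon) in points)

# (lat, lon) pairs, as added via shape.AddPoint in the test above.
shape_points = [(1.0, 1.0), (2.0, 4.0), (3.0, 9.0)]
print(kml_coordinates(shape_points))
# → 1.000000,1.000000 4.000000,2.000000 9.000000,3.000000
```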
class TestRouteKML(util.TestCase): | |
"""Tests the routes folder KML generation methods of KMLWriter.""" | |
def setUp(self): | |
self.feed = transitfeed.Loader(DataPath('flatten_feed')).Load() | |
self.kmlwriter = kmlwriter.KMLWriter() | |
self.parent = ET.Element('parent') | |
def testCreateRoutePatternsFolderNoPatterns(self): | |
folder = self.kmlwriter._CreateRoutePatternsFolder( | |
self.parent, self.feed.GetRoute('route_7')) | |
self.assert_(folder is None) | |
def testCreateRoutePatternsFolderOnePattern(self): | |
folder = self.kmlwriter._CreateRoutePatternsFolder( | |
self.parent, self.feed.GetRoute('route_1')) | |
placemarks = folder.findall('Placemark') | |
self.assertEquals(len(placemarks), 1) | |
def testCreateRoutePatternsFolderTwoPatterns(self): | |
folder = self.kmlwriter._CreateRoutePatternsFolder( | |
self.parent, self.feed.GetRoute('route_3')) | |
placemarks = folder.findall('Placemark') | |
self.assertEquals(len(placemarks), 2) | |
def testCreateRoutePatternFolderTwoEqualPatterns(self): | |
folder = self.kmlwriter._CreateRoutePatternsFolder( | |
self.parent, self.feed.GetRoute('route_4')) | |
placemarks = folder.findall('Placemark') | |
self.assertEquals(len(placemarks), 1) | |
def testCreateRouteShapesFolderOneTripOneShape(self): | |
folder = self.kmlwriter._CreateRouteShapesFolder( | |
self.feed, self.parent, self.feed.GetRoute('route_1')) | |
self.assertEqual(len(folder.findall('Placemark')), 1) | |
def testCreateRouteShapesFolderTwoTripsTwoShapes(self): | |
folder = self.kmlwriter._CreateRouteShapesFolder( | |
self.feed, self.parent, self.feed.GetRoute('route_2')) | |
self.assertEqual(len(folder.findall('Placemark')), 2) | |
def testCreateRouteShapesFolderTwoTripsOneShape(self): | |
folder = self.kmlwriter._CreateRouteShapesFolder( | |
self.feed, self.parent, self.feed.GetRoute('route_3')) | |
self.assertEqual(len(folder.findall('Placemark')), 1) | |
def testCreateRouteShapesFolderTwoTripsNoShapes(self): | |
folder = self.kmlwriter._CreateRouteShapesFolder( | |
self.feed, self.parent, self.feed.GetRoute('route_4')) | |
self.assert_(folder is None) | |
  def assertRouteFolderContainsTrips(self, tripids, folder):
    """Assert that the route folder contains exactly the trips in tripids."""
    actual_tripids = set()
    for placemark in folder.findall('Placemark'):
      actual_tripids.add(placemark.find('name').text)
    self.assertEquals(set(tripids), actual_tripids)
def testCreateTripsFolderForRouteTwoTrips(self): | |
route = self.feed.GetRoute('route_2') | |
folder = self.kmlwriter._CreateRouteTripsFolder(self.parent, route) | |
self.assertRouteFolderContainsTrips(['route_2_1', 'route_2_2'], folder) | |
def testCreateTripsFolderForRouteDateFilterNone(self): | |
self.kmlwriter.date_filter = None | |
route = self.feed.GetRoute('route_8') | |
folder = self.kmlwriter._CreateRouteTripsFolder(self.parent, route) | |
self.assertRouteFolderContainsTrips(['route_8_1', 'route_8_2'], folder) | |
def testCreateTripsFolderForRouteDateFilterSet(self): | |
self.kmlwriter.date_filter = '20070604' | |
route = self.feed.GetRoute('route_8') | |
folder = self.kmlwriter._CreateRouteTripsFolder(self.parent, route) | |
self.assertRouteFolderContainsTrips(['route_8_2'], folder) | |
def _GetTripPlacemark(self, route_folder, trip_name): | |
for trip_placemark in route_folder.findall('Placemark'): | |
if trip_placemark.find('name').text == trip_name: | |
return trip_placemark | |
def testCreateRouteTripsFolderAltitude0(self): | |
self.kmlwriter.altitude_per_sec = 0.0 | |
folder = self.kmlwriter._CreateRouteTripsFolder( | |
self.parent, self.feed.GetRoute('route_4')) | |
trip_placemark = self._GetTripPlacemark(folder, 'route_4_1') | |
self.assertEqual(_ElementToString(trip_placemark.find('LineString')), | |
'<LineString><tessellate>1</tessellate>' | |
'<coordinates>-117.133162,36.425288 ' | |
'-116.784582,36.868446 ' | |
'-116.817970,36.881080</coordinates></LineString>') | |
def testCreateRouteTripsFolderAltitude1(self): | |
self.kmlwriter.altitude_per_sec = 0.5 | |
folder = self.kmlwriter._CreateRouteTripsFolder( | |
self.parent, self.feed.GetRoute('route_4')) | |
trip_placemark = self._GetTripPlacemark(folder, 'route_4_1') | |
self.assertEqual(_ElementToString(trip_placemark.find('LineString')), | |
'<LineString><tessellate>1</tessellate>' | |
'<altitudeMode>absolute</altitudeMode>' | |
'<coordinates>-117.133162,36.425288,3600.000000 ' | |
'-116.784582,36.868446,5400.000000 ' | |
'-116.817970,36.881080,7200.000000</coordinates>' | |
'</LineString>') | |
def testCreateRouteTripsFolderNoTrips(self): | |
folder = self.kmlwriter._CreateRouteTripsFolder( | |
self.parent, self.feed.GetRoute('route_7')) | |
self.assert_(folder is None) | |
def testCreateRoutesFolderNoRoutes(self): | |
schedule = transitfeed.Schedule() | |
folder = self.kmlwriter._CreateRoutesFolder(schedule, self.parent) | |
self.assert_(folder is None) | |
def testCreateRoutesFolderNoRoutesWithRouteType(self): | |
folder = self.kmlwriter._CreateRoutesFolder(self.feed, self.parent, 999) | |
self.assert_(folder is None) | |
def _TestCreateRoutesFolder(self, show_trips): | |
self.kmlwriter.show_trips = show_trips | |
folder = self.kmlwriter._CreateRoutesFolder(self.feed, self.parent) | |
self.assertEquals(folder.tag, 'Folder') | |
styles = self.parent.findall('Style') | |
self.assertEquals(len(styles), len(self.feed.GetRouteList())) | |
route_folders = folder.findall('Folder') | |
self.assertEquals(len(route_folders), len(self.feed.GetRouteList())) | |
def testCreateRoutesFolder(self): | |
self._TestCreateRoutesFolder(False) | |
def testCreateRoutesFolderShowTrips(self): | |
self._TestCreateRoutesFolder(True) | |
def testCreateRoutesFolderWithRouteType(self): | |
folder = self.kmlwriter._CreateRoutesFolder(self.feed, self.parent, 1) | |
route_folders = folder.findall('Folder') | |
self.assertEquals(len(route_folders), 1) | |
class TestShapesKML(util.TestCase): | |
"""Tests the shapes folder KML generation methods of KMLWriter.""" | |
def setUp(self): | |
self.flatten_feed = transitfeed.Loader(DataPath('flatten_feed')).Load() | |
self.good_feed = transitfeed.Loader(DataPath('good_feed.zip')).Load() | |
self.kmlwriter = kmlwriter.KMLWriter() | |
self.parent = ET.Element('parent') | |
def testCreateShapesFolderNoShapes(self): | |
folder = self.kmlwriter._CreateShapesFolder(self.good_feed, self.parent) | |
self.assertEquals(folder, None) | |
def testCreateShapesFolder(self): | |
folder = self.kmlwriter._CreateShapesFolder(self.flatten_feed, self.parent) | |
placemarks = folder.findall('Placemark') | |
self.assertEquals(len(placemarks), 3) | |
for placemark in placemarks: | |
self.assert_(placemark.find('LineString') is not None) | |
class TestStopsKML(util.TestCase): | |
"""Tests the stops folder KML generation methods of KMLWriter.""" | |
def setUp(self): | |
self.feed = transitfeed.Loader(DataPath('flatten_feed')).Load() | |
self.kmlwriter = kmlwriter.KMLWriter() | |
self.parent = ET.Element('parent') | |
def testCreateStopsFolderNoStops(self): | |
schedule = transitfeed.Schedule() | |
folder = self.kmlwriter._CreateStopsFolder(schedule, self.parent) | |
self.assert_(folder is None) | |
def testCreateStopsFolder(self): | |
folder = self.kmlwriter._CreateStopsFolder(self.feed, self.parent) | |
placemarks = folder.findall('Placemark') | |
self.assertEquals(len(placemarks), len(self.feed.GetStopList())) | |
class TestShapePointsKML(util.TestCase): | |
"""Tests the shape points folder KML generation methods of KMLWriter.""" | |
def setUp(self): | |
self.flatten_feed = transitfeed.Loader(DataPath('flatten_feed')).Load() | |
self.kmlwriter = kmlwriter.KMLWriter() | |
self.kmlwriter.shape_points = True | |
self.parent = ET.Element('parent') | |
def testCreateShapePointsFolder(self): | |
folder = self.kmlwriter._CreateShapesFolder(self.flatten_feed, self.parent) | |
shape_point_folder = folder.find('Folder') | |
self.assertEquals(shape_point_folder.find('name').text, | |
'shape_1 Shape Points') | |
placemarks = shape_point_folder.findall('Placemark') | |
self.assertEquals(len(placemarks), 4) | |
for placemark in placemarks: | |
self.assert_(placemark.find('Point') is not None) | |
class FullTests(util.TempDirTestCaseBase): | |
def testNormalRun(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('kmlwriter.py'), self.GetTestDataPath('good_feed.zip'), | |
'good_feed.kml']) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
self.assertTrue(os.path.exists('good_feed.kml')) | |
def testCommandLineError(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('kmlwriter.py'), '--bad_flag'], expected_retcode=2) | |
self.assertMatchesRegex(r'no such option.*--bad_flag', err) | |
self.assertMatchesRegex(r'--showtrips', err) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testCrashHandler(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('kmlwriter.py'), 'IWantMyCrash', 'output.zip'], | |
stdin_str="\n", expected_retcode=127) | |
self.assertMatchesRegex(r'Yikes', out) | |
crashout = open('transitfeedcrash.txt').read() | |
self.assertMatchesRegex(r'For testCrashHandler', crashout) | |
if __name__ == '__main__': | |
unittest.main() | |
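The KML assertions above compare serialized element trees built with `xml.etree.ElementTree`. A minimal standalone sketch of how such a `LineString` element is built and serialized (independent of kmlwriter, and written in current Python rather than the Python 2.4 of the tests; the `%f` formatting mirrors the six-decimal coordinate strings the tests expect):

```python
import xml.etree.ElementTree as ET

# Build a LineString element shaped like the ones the tests assert on.
line = ET.Element('LineString')
ET.SubElement(line, 'tessellate').text = '1'
coords = ET.SubElement(line, 'coordinates')
coords.text = '%f,%f' % (2.0, 1.0)  # lon,lat with six decimal places

xml = ET.tostring(line, encoding='unicode')
# -> <LineString><tessellate>1</tessellate>
#    <coordinates>2.000000,1.000000</coordinates></LineString>
```

Serializing both the expected and actual trees to strings, as `_ElementToString` does in the tests, sidesteps element-by-element comparison at the cost of being sensitive to attribute order.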
#!/usr/bin/python2.4 | |
# | |
# Copyright 2007 Google Inc. All Rights Reserved. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""Unit tests for the merge module.""" | |
__author__ = 'timothy.stranex@gmail.com (Timothy Stranex)' | |
import merge | |
import os.path | |
import re | |
import StringIO | |
import transitfeed | |
import unittest | |
import util | |
import zipfile | |
def CheckAttribs(a, b, attrs, assertEquals): | |
"""Checks that the objects a and b have the same values for the attributes | |
given in attrs. These checks are done using the given assert function. | |
Args: | |
a: The first object. | |
b: The second object. | |
attrs: The list of attribute names (strings). | |
assertEquals: The assertEquals method from unittest.TestCase. | |
""" | |
# For Stop objects (and maybe others in the future) Validate converts some | |
# attributes from string to native type | |
a.Validate() | |
b.Validate() | |
for k in attrs: | |
assertEquals(getattr(a, k), getattr(b, k)) | |
def CreateAgency(): | |
"""Create an transitfeed.Agency object for testing. | |
Returns: | |
The agency object. | |
""" | |
return transitfeed.Agency(name='agency', | |
url='http://agency', | |
timezone='Africa/Johannesburg', | |
id='agency') | |
class TestingProblemReporter(merge.MergeProblemReporter): | |
def __init__(self, accumulator): | |
merge.MergeProblemReporter.__init__(self, accumulator) | |
class TestingProblemAccumulator(transitfeed.ProblemAccumulatorInterface): | |
"""This problem reporter keeps track of all problems. | |
Attributes: | |
problems: The list of problems reported. | |
""" | |
def __init__(self): | |
self.problems = [] | |
self._expect_classes = [] | |
def _Report(self, problem): | |
problem.FormatProblem() # Shouldn't crash | |
self.problems.append(problem) | |
for problem_class in self._expect_classes: | |
if isinstance(problem, problem_class): | |
return | |
raise problem | |
def CheckReported(self, problem_class): | |
"""Checks if a problem of the given class was reported. | |
Args: | |
problem_class: The problem class, a class inheriting from | |
MergeProblemWithContext. | |
Returns: | |
True if a matching problem was reported. | |
""" | |
for problem in self.problems: | |
if isinstance(problem, problem_class): | |
return True | |
return False | |
def ExpectProblemClass(self, problem_class): | |
"""Supresses exception raising for problems inheriting from this class. | |
Args: | |
problem_class: The problem class, a class inheriting from | |
MergeProblemWithContext. | |
""" | |
self._expect_classes.append(problem_class) | |
def assertExpectedProblemsReported(self, testcase): | |
"""Asserts that every expected problem class has been reported. | |
The assertions are done using the assert_ method of the testcase. | |
Args: | |
testcase: The unittest.TestCase instance. | |
""" | |
for problem_class in self._expect_classes: | |
testcase.assert_(self.CheckReported(problem_class)) | |
class TestApproximateDistanceBetweenPoints(util.TestCase): | |
def _assertWithinEpsilon(self, a, b, epsilon=1.0): | |
"""Asserts that a and b are equal to within an epsilon. | |
Args: | |
a: The first value (float). | |
b: The second value (float). | |
epsilon: The epsilon value (float). | |
""" | |
self.assert_(abs(a-b) < epsilon) | |
def testDegenerate(self): | |
p = (30.0, 30.0) | |
self._assertWithinEpsilon( | |
merge.ApproximateDistanceBetweenPoints(p, p), 0.0) | |
def testFar(self): | |
p1 = (30.0, 30.0) | |
p2 = (40.0, 40.0) | |
self.assert_(merge.ApproximateDistanceBetweenPoints(p1, p2) > 1e4) | |
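The two tests above only bound `ApproximateDistanceBetweenPoints` (zero for identical points, more than 10 km for points ten degrees apart). An equirectangular approximation like the following sketch would satisfy both bounds; this is an assumption for illustration, not necessarily the formula merge actually uses:

```python
import math

def approx_distance_m(p1, p2):
    """Rough great-circle distance in metres between (lat, lon) pairs."""
    # Equirectangular approximation: treat the patch of sphere as flat,
    # scaling longitude differences by cos(mean latitude). Adequate for
    # the "near zero" / "more than 10 km" bounds asserted in the tests.
    lat1, lon1 = map(math.radians, p1)
    lat2, lon2 = map(math.radians, p2)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2.0)
    y = lat2 - lat1
    return 6371000.0 * math.hypot(x, y)

assert approx_distance_m((30.0, 30.0), (30.0, 30.0)) == 0.0  # degenerate case
```

Testing only loose bounds, as these tests do, keeps them robust to small changes in the underlying approximation.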
class TestSchemedMerge(util.TestCase): | |
class TestEntity: | |
"""A mock entity (like Route or Stop) for testing.""" | |
def __init__(self, x, y, z): | |
self.x = x | |
self.y = y | |
self.z = z | |
def setUp(self): | |
a_schedule = transitfeed.Schedule() | |
b_schedule = transitfeed.Schedule() | |
merged_schedule = transitfeed.Schedule() | |
accumulator = TestingProblemAccumulator() | |
self.fm = merge.FeedMerger(a_schedule, b_schedule, | |
merged_schedule, | |
TestingProblemReporter(accumulator)) | |
self.ds = merge.DataSetMerger(self.fm) | |
def Migrate(ent, sched, newid): | |
"""A migration function for the mock entity.""" | |
return self.TestEntity(ent.x, ent.y, ent.z) | |
self.ds._Migrate = Migrate | |
def testMergeIdentical(self): | |
class TestAttrib: | |
"""An object that is equal to everything.""" | |
def __cmp__(self, b): | |
return 0 | |
x = 99 | |
a = TestAttrib() | |
b = TestAttrib() | |
self.assert_(self.ds._MergeIdentical(x, x) == x) | |
self.assert_(self.ds._MergeIdentical(a, b) is b) | |
self.assertRaises(merge.MergeError, self.ds._MergeIdentical, 1, 2) | |
def testMergeIdenticalCaseInsensitive(self): | |
self.assert_(self.ds._MergeIdenticalCaseInsensitive('abc', 'ABC') == 'ABC') | |
self.assert_(self.ds._MergeIdenticalCaseInsensitive('abc', 'AbC') == 'AbC') | |
self.assertRaises(merge.MergeError, | |
self.ds._MergeIdenticalCaseInsensitive, 'abc', 'bcd') | |
self.assertRaises(merge.MergeError, | |
self.ds._MergeIdenticalCaseInsensitive, 'abc', 'ABCD') | |
def testMergeOptional(self): | |
x = 99 | |
y = 100 | |
self.assertEquals(self.ds._MergeOptional(None, None), None) | |
self.assertEquals(self.ds._MergeOptional(None, x), x) | |
self.assertEquals(self.ds._MergeOptional(x, None), x) | |
self.assertEquals(self.ds._MergeOptional(x, x), x) | |
self.assertRaises(merge.MergeError, self.ds._MergeOptional, x, y) | |
def testMergeSameAgency(self): | |
kwargs = {'name': 'xxx', | |
'agency_url': 'http://www.example.com', | |
'agency_timezone': 'Europe/Zurich'} | |
id1 = 'agency1' | |
id2 = 'agency2' | |
id3 = 'agency3' | |
id4 = 'agency4' | |
id5 = 'agency5' | |
a = self.fm.a_schedule.NewDefaultAgency(id=id1, **kwargs) | |
b = self.fm.b_schedule.NewDefaultAgency(id=id2, **kwargs) | |
c = transitfeed.Agency(id=id3, **kwargs) | |
self.fm.merged_schedule.AddAgencyObject(c) | |
self.fm.Register(a, b, c) | |
d = transitfeed.Agency(id=id4, **kwargs) | |
e = transitfeed.Agency(id=id5, **kwargs) | |
self.fm.a_schedule.AddAgencyObject(d) | |
self.fm.merged_schedule.AddAgencyObject(e) | |
self.fm.Register(d, None, e) | |
self.assertEquals(self.ds._MergeSameAgency(id1, id2), id3) | |
self.assertEquals(self.ds._MergeSameAgency(None, None), id3) | |
self.assertEquals(self.ds._MergeSameAgency(id1, None), id3) | |
self.assertEquals(self.ds._MergeSameAgency(None, id2), id3) | |
# id1 is not a valid agency_id in the new schedule so it cannot be merged | |
self.assertRaises(KeyError, self.ds._MergeSameAgency, id1, id1) | |
# this fails because d (id4) and b (id2) don't map to the same agency | |
# in the merged schedule | |
self.assertRaises(merge.MergeError, self.ds._MergeSameAgency, id4, id2) | |
def testSchemedMerge_Success(self): | |
def Merger(a, b): | |
return a + b | |
scheme = {'x': Merger, 'y': Merger, 'z': Merger} | |
a = self.TestEntity(1, 2, 3) | |
b = self.TestEntity(4, 5, 6) | |
c = self.ds._SchemedMerge(scheme, a, b) | |
self.assertEquals(c.x, 5) | |
self.assertEquals(c.y, 7) | |
self.assertEquals(c.z, 9) | |
def testSchemedMerge_Failure(self): | |
def Merger(a, b): | |
raise merge.MergeError() | |
scheme = {'x': Merger, 'y': Merger, 'z': Merger} | |
a = self.TestEntity(1, 2, 3) | |
b = self.TestEntity(4, 5, 6) | |
self.assertRaises(merge.MergeError, self.ds._SchemedMerge, | |
scheme, a, b) | |
def testSchemedMerge_NoNewId(self): | |
class TestDataSetMerger(merge.DataSetMerger): | |
def _Migrate(self, entity, schedule, newid): | |
self.newid = newid | |
return entity | |
dataset_merger = TestDataSetMerger(self.fm) | |
a = self.TestEntity(1, 2, 3) | |
b = self.TestEntity(4, 5, 6) | |
dataset_merger._SchemedMerge({}, a, b) | |
self.assertEquals(dataset_merger.newid, False) | |
def testSchemedMerge_ErrorTextContainsAttributeNameAndReason(self): | |
reason = 'my reason' | |
attribute_name = 'long_attribute_name' | |
def GoodMerger(a, b): | |
return a + b | |
def BadMerger(a, b): | |
raise merge.MergeError(reason) | |
a = self.TestEntity(1, 2, 3) | |
setattr(a, attribute_name, 1) | |
b = self.TestEntity(4, 5, 6) | |
setattr(b, attribute_name, 2) | |
scheme = {'x': GoodMerger, 'y': GoodMerger, 'z': GoodMerger, | |
attribute_name: BadMerger} | |
try: | |
self.ds._SchemedMerge(scheme, a, b) | |
except merge.MergeError, merge_error: | |
error_text = str(merge_error) | |
self.assert_(reason in error_text) | |
self.assert_(attribute_name in error_text) | |
class TestFeedMerger(util.TestCase): | |
class Merger: | |
def __init__(self, test, n, should_fail=False): | |
self.test = test | |
self.n = n | |
self.should_fail = should_fail | |
def MergeDataSets(self): | |
self.test.called.append(self.n) | |
return not self.should_fail | |
def setUp(self): | |
a_schedule = transitfeed.Schedule() | |
b_schedule = transitfeed.Schedule() | |
merged_schedule = transitfeed.Schedule() | |
accumulator = TestingProblemAccumulator() | |
self.fm = merge.FeedMerger(a_schedule, b_schedule, | |
merged_schedule, | |
TestingProblemReporter(accumulator)) | |
self.called = [] | |
def testSequence(self): | |
for i in range(10): | |
self.fm.AddMerger(TestFeedMerger.Merger(self, i)) | |
self.assert_(self.fm.MergeSchedules()) | |
self.assertEquals(self.called, range(10)) | |
def testStopsAfterError(self): | |
for i in range(10): | |
self.fm.AddMerger(TestFeedMerger.Merger(self, i, i == 5)) | |
self.assert_(not self.fm.MergeSchedules()) | |
self.assertEquals(self.called, range(6)) | |
def testRegister(self): | |
s1 = transitfeed.Stop(stop_id='1') | |
s2 = transitfeed.Stop(stop_id='2') | |
s3 = transitfeed.Stop(stop_id='3') | |
self.fm.Register(s1, s2, s3) | |
self.assertEquals(self.fm.a_merge_map, {s1: s3}) | |
self.assertEquals('3', s1._migrated_entity.stop_id) | |
self.assertEquals(self.fm.b_merge_map, {s2: s3}) | |
self.assertEquals('3', s2._migrated_entity.stop_id) | |
def testRegisterNone(self): | |
s2 = transitfeed.Stop(stop_id='2') | |
s3 = transitfeed.Stop(stop_id='3') | |
self.fm.Register(None, s2, s3) | |
self.assertEquals(self.fm.a_merge_map, {}) | |
self.assertEquals(self.fm.b_merge_map, {s2: s3}) | |
self.assertEquals('3', s2._migrated_entity.stop_id) | |
def testGenerateId_Prefix(self): | |
x = 'test' | |
a = self.fm.GenerateId(x) | |
b = self.fm.GenerateId(x) | |
self.assertNotEqual(a, b) | |
self.assert_(a.startswith(x)) | |
self.assert_(b.startswith(x)) | |
def testGenerateId_None(self): | |
a = self.fm.GenerateId(None) | |
b = self.fm.GenerateId(None) | |
self.assertNotEqual(a, b) | |
def testGenerateId_InitialCounter(self): | |
a_schedule = transitfeed.Schedule() | |
b_schedule = transitfeed.Schedule() | |
merged_schedule = transitfeed.Schedule() | |
for i in range(10): | |
agency = transitfeed.Agency(name='agency', url='http://agency', | |
timezone='Africa/Johannesburg', | |
id='agency_%d' % i) | |
if i % 2: | |
b_schedule.AddAgencyObject(agency) | |
else: | |
a_schedule.AddAgencyObject(agency) | |
accumulator = TestingProblemAccumulator() | |
feed_merger = merge.FeedMerger(a_schedule, b_schedule, | |
merged_schedule, | |
TestingProblemReporter(accumulator)) | |
    # check that the postfix number of any generated id is greater than
    # the postfix numbers of the ids already present in both input schedules
gen_id = feed_merger.GenerateId(None) | |
postfix_num = int(gen_id[gen_id.rfind('_')+1:]) | |
self.assert_(postfix_num >= 10) | |
def testGetMerger(self): | |
class MergerA(merge.DataSetMerger): | |
pass | |
class MergerB(merge.DataSetMerger): | |
pass | |
a = MergerA(self.fm) | |
b = MergerB(self.fm) | |
self.fm.AddMerger(a) | |
self.fm.AddMerger(b) | |
self.assertEquals(self.fm.GetMerger(MergerA), a) | |
self.assertEquals(self.fm.GetMerger(MergerB), b) | |
def testGetMerger_Error(self): | |
self.assertRaises(LookupError, self.fm.GetMerger, TestFeedMerger.Merger) | |
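The `testGenerateId_*` cases above pin down the contract: successive calls return distinct ids, an optional prefix is preserved, and the internal counter starts past any numeric suffix found in the input schedules. A hypothetical helper sketching that pattern (not FeedMerger's actual code):

```python
import itertools

class IdGenerator:
    """Hands out ids that are unique within one merge run."""

    def __init__(self, start=0):
        # Start the counter past any numeric suffix seen in the input
        # schedules to avoid collisions (cf. testGenerateId_InitialCounter).
        self._counter = itertools.count(start)

    def generate(self, prefix=None):
        n = next(self._counter)
        return '%s_merged_%d' % (prefix, n) if prefix else 'merged_%d' % n
```

A shared monotonic counter is the simplest way to guarantee uniqueness across both prefixed and unprefixed ids in the same run.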
class TestServicePeriodMerger(util.TestCase): | |
def setUp(self): | |
a_schedule = transitfeed.Schedule() | |
b_schedule = transitfeed.Schedule() | |
merged_schedule = transitfeed.Schedule() | |
self.accumulator = TestingProblemAccumulator() | |
self.problem_reporter = TestingProblemReporter(self.accumulator) | |
self.fm = merge.FeedMerger(a_schedule, b_schedule, merged_schedule, | |
self.problem_reporter) | |
self.spm = merge.ServicePeriodMerger(self.fm) | |
self.fm.AddMerger(self.spm) | |
def _AddTwoPeriods(self, start1, end1, start2, end2): | |
sp1fields = ['test1', start1, end1] + ['1']*7 | |
self.sp1 = transitfeed.ServicePeriod(field_list=sp1fields) | |
sp2fields = ['test2', start2, end2] + ['1']*7 | |
self.sp2 = transitfeed.ServicePeriod(field_list=sp2fields) | |
self.fm.a_schedule.AddServicePeriodObject(self.sp1) | |
self.fm.b_schedule.AddServicePeriodObject(self.sp2) | |
def testCheckDisjoint_True(self): | |
self._AddTwoPeriods('20071213', '20071231', | |
'20080101', '20080201') | |
self.assert_(self.spm.CheckDisjointCalendars()) | |
def testCheckDisjoint_False1(self): | |
self._AddTwoPeriods('20071213', '20080201', | |
'20080101', '20080301') | |
self.assert_(not self.spm.CheckDisjointCalendars()) | |
def testCheckDisjoint_False2(self): | |
self._AddTwoPeriods('20080101', '20090101', | |
'20070101', '20080601') | |
self.assert_(not self.spm.CheckDisjointCalendars()) | |
def testCheckDisjoint_False3(self): | |
self._AddTwoPeriods('20080301', '20080901', | |
'20080101', '20090101') | |
self.assert_(not self.spm.CheckDisjointCalendars()) | |
def testDisjoinCalendars(self): | |
self._AddTwoPeriods('20071213', '20080201', | |
'20080101', '20080301') | |
self.spm.DisjoinCalendars('20080101') | |
self.assertEquals(self.sp1.start_date, '20071213') | |
self.assertEquals(self.sp1.end_date, '20071231') | |
self.assertEquals(self.sp2.start_date, '20080101') | |
self.assertEquals(self.sp2.end_date, '20080301') | |
def testDisjoinCalendars_Dates(self): | |
self._AddTwoPeriods('20071213', '20080201', | |
'20080101', '20080301') | |
self.sp1.SetDateHasService('20071201') | |
self.sp1.SetDateHasService('20081231') | |
self.sp2.SetDateHasService('20071201') | |
self.sp2.SetDateHasService('20081231') | |
self.spm.DisjoinCalendars('20080101') | |
self.assert_('20071201' in self.sp1.date_exceptions.keys()) | |
self.assert_('20081231' not in self.sp1.date_exceptions.keys()) | |
self.assert_('20071201' not in self.sp2.date_exceptions.keys()) | |
self.assert_('20081231' in self.sp2.date_exceptions.keys()) | |
def testUnion(self): | |
self._AddTwoPeriods('20071213', '20071231', | |
'20080101', '20080201') | |
self.accumulator.ExpectProblemClass(merge.MergeNotImplemented) | |
self.fm.MergeSchedules() | |
merged_schedule = self.fm.GetMergedSchedule() | |
self.assertEquals(len(merged_schedule.GetServicePeriodList()), 2) | |
# make fields a copy of the service period attributes except service_id | |
fields = list(transitfeed.ServicePeriod._DAYS_OF_WEEK) | |
fields += ['start_date', 'end_date'] | |
# now check that these attributes are preserved in the merge | |
CheckAttribs(self.sp1, self.fm.a_merge_map[self.sp1], fields, | |
self.assertEquals) | |
CheckAttribs(self.sp2, self.fm.b_merge_map[self.sp2], fields, | |
self.assertEquals) | |
self.accumulator.assertExpectedProblemsReported(self) | |
def testMerge_RequiredButNotDisjoint(self): | |
self._AddTwoPeriods('20070101', '20090101', | |
'20080101', '20100101') | |
self.accumulator.ExpectProblemClass(merge.CalendarsNotDisjoint) | |
self.assertEquals(self.spm.MergeDataSets(), False) | |
self.accumulator.assertExpectedProblemsReported(self) | |
def testMerge_NotRequiredAndNotDisjoint(self): | |
self._AddTwoPeriods('20070101', '20090101', | |
'20080101', '20100101') | |
self.spm.require_disjoint_calendars = False | |
self.accumulator.ExpectProblemClass(merge.MergeNotImplemented) | |
self.fm.MergeSchedules() | |
self.accumulator.assertExpectedProblemsReported(self) | |
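The `testCheckDisjoint_*` cases above exercise interval overlap on GTFS service dates. Because GTFS dates are fixed-width `YYYYMMDD` strings, lexicographic comparison orders them chronologically, so disjointness reduces to a one-line check (a hypothetical helper illustrating the cases, not ServicePeriodMerger's actual code):

```python
def ranges_disjoint(start1, end1, start2, end2):
    # YYYYMMDD strings sort lexicographically in date order,
    # so plain string comparison suffices.
    return end1 < start2 or end2 < start1

# The disjoint case and one overlapping case from the tests above:
assert ranges_disjoint('20071213', '20071231', '20080101', '20080201')
assert not ranges_disjoint('20071213', '20080201', '20080101', '20080301')
```

The three `False` cases in the tests cover partial overlap in each direction and full containment, which together exhaust the ways two date ranges can intersect.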
class TestAgencyMerger(util.TestCase): | |
def setUp(self): | |
a_schedule = transitfeed.Schedule() | |
b_schedule = transitfeed.Schedule() | |
merged_schedule = transitfeed.Schedule() | |
self.accumulator = TestingProblemAccumulator() | |
self.problem_reporter = TestingProblemReporter(self.accumulator) | |
self.fm = merge.FeedMerger(a_schedule, b_schedule, merged_schedule, | |
self.problem_reporter) | |
self.am = merge.AgencyMerger(self.fm) | |
self.fm.AddMerger(self.am) | |
self.a1 = transitfeed.Agency(id='a1', agency_name='a1', | |
agency_url='http://www.a1.com', | |
agency_timezone='Africa/Johannesburg', | |
agency_phone='123 456 78 90') | |
self.a2 = transitfeed.Agency(id='a2', agency_name='a1', | |
agency_url='http://www.a1.com', | |
agency_timezone='Africa/Johannesburg', | |
agency_phone='789 65 43 21') | |
def testMerge(self): | |
self.a2.agency_id = self.a1.agency_id | |
self.fm.a_schedule.AddAgencyObject(self.a1) | |
self.fm.b_schedule.AddAgencyObject(self.a2) | |
self.fm.MergeSchedules() | |
merged_schedule = self.fm.GetMergedSchedule() | |
self.assertEquals(len(merged_schedule.GetAgencyList()), 1) | |
self.assertEquals(merged_schedule.GetAgencyList()[0], | |
self.fm.a_merge_map[self.a1]) | |
self.assertEquals(self.fm.a_merge_map[self.a1], | |
self.fm.b_merge_map[self.a2]) | |
# differing values such as agency_phone should be taken from self.a2 | |
self.assertEquals(merged_schedule.GetAgencyList()[0], self.a2) | |
self.assertEquals(self.am.GetMergeStats(), (1, 0, 0)) | |
# check that id is preserved | |
self.assertEquals(self.fm.a_merge_map[self.a1].agency_id, | |
self.a1.agency_id) | |
def testNoMerge_DifferentId(self): | |
self.fm.a_schedule.AddAgencyObject(self.a1) | |
self.fm.b_schedule.AddAgencyObject(self.a2) | |
self.fm.MergeSchedules() | |
merged_schedule = self.fm.GetMergedSchedule() | |
self.assertEquals(len(merged_schedule.GetAgencyList()), 2) | |
self.assert_(self.fm.a_merge_map[self.a1] in | |
merged_schedule.GetAgencyList()) | |
self.assert_(self.fm.b_merge_map[self.a2] in | |
merged_schedule.GetAgencyList()) | |
self.assertEquals(self.a1, self.fm.a_merge_map[self.a1]) | |
self.assertEquals(self.a2, self.fm.b_merge_map[self.a2]) | |
self.assertEquals(self.am.GetMergeStats(), (0, 1, 1)) | |
# check that the ids are preserved | |
self.assertEquals(self.fm.a_merge_map[self.a1].agency_id, | |
self.a1.agency_id) | |
self.assertEquals(self.fm.b_merge_map[self.a2].agency_id, | |
self.a2.agency_id) | |
def testNoMerge_SameId(self): | |
    # Force a1.agency_id to be unicode to make sure it is correctly encoded
    # to utf-8 before concatenating to the agency_name containing non-ascii
    # characters.
self.a1.agency_id = unicode(self.a1.agency_id) | |
self.a2.agency_id = str(self.a1.agency_id) | |
self.a2.agency_name = 'different \xc3\xa9' | |
self.fm.a_schedule.AddAgencyObject(self.a1) | |
self.fm.b_schedule.AddAgencyObject(self.a2) | |
self.accumulator.ExpectProblemClass(merge.SameIdButNotMerged) | |
self.fm.MergeSchedules() | |
merged_schedule = self.fm.GetMergedSchedule() | |
self.assertEquals(len(merged_schedule.GetAgencyList()), 2) | |
self.assertEquals(self.am.GetMergeStats(), (0, 1, 1)) | |
# check that the merged entities have different ids | |
self.assertNotEqual(self.fm.a_merge_map[self.a1].agency_id, | |
self.fm.b_merge_map[self.a2].agency_id) | |
self.accumulator.assertExpectedProblemsReported(self) | |
class TestStopMerger(util.TestCase): | |
def setUp(self): | |
a_schedule = transitfeed.Schedule() | |
b_schedule = transitfeed.Schedule() | |
merged_schedule = transitfeed.Schedule() | |
self.accumulator = TestingProblemAccumulator() | |
self.problem_reporter = TestingProblemReporter(self.accumulator) | |
self.fm = merge.FeedMerger(a_schedule, b_schedule, merged_schedule, | |
self.problem_reporter) | |
self.sm = merge.StopMerger(self.fm) | |
self.fm.AddMerger(self.sm) | |
self.s1 = transitfeed.Stop(30.0, 30.0, | |
u'Andr\202' , 's1') | |
self.s1.stop_desc = 'stop 1' | |
self.s1.stop_url = 'http://stop/1' | |
self.s1.zone_id = 'zone1' | |
self.s2 = transitfeed.Stop(30.0, 30.0, 's2', 's2') | |
self.s2.stop_desc = 'stop 2' | |
self.s2.stop_url = 'http://stop/2' | |
self.s2.zone_id = 'zone1' | |
def testMerge(self): | |
self.s2.stop_id = self.s1.stop_id | |
self.s2.stop_name = self.s1.stop_name | |
self.s1.location_type = 1 | |
self.s2.location_type = 1 | |
self.fm.a_schedule.AddStopObject(self.s1) | |
self.fm.b_schedule.AddStopObject(self.s2) | |
self.fm.MergeSchedules() | |
merged_schedule = self.fm.GetMergedSchedule() | |
self.assertEquals(len(merged_schedule.GetStopList()), 1) | |
self.assertEquals(merged_schedule.GetStopList()[0], | |
self.fm.a_merge_map[self.s1]) | |
self.assertEquals(self.fm.a_merge_map[self.s1], | |
self.fm.b_merge_map[self.s2]) | |
self.assertEquals(self.sm.GetMergeStats(), (1, 0, 0)) | |
# check that the remaining attributes are taken from the new stop | |
fields = ['stop_name', 'stop_lat', 'stop_lon', 'stop_desc', 'stop_url', | |
'location_type'] | |
CheckAttribs(self.fm.a_merge_map[self.s1], self.s2, fields, | |
self.assertEquals) | |
# check that the id is preserved | |
self.assertEquals(self.fm.a_merge_map[self.s1].stop_id, self.s1.stop_id) | |
# check that the zone_id is preserved | |
self.assertEquals(self.fm.a_merge_map[self.s1].zone_id, self.s1.zone_id) | |
def testNoMerge_DifferentId(self): | |
self.fm.a_schedule.AddStopObject(self.s1) | |
    self.fm.b_schedule.AddStopObject(self.s2)
    self.fm.MergeSchedules()
    merged_schedule = self.fm.GetMergedSchedule()
    self.assertEquals(len(merged_schedule.GetStopList()), 2)
    self.assert_(self.fm.a_merge_map[self.s1] in merged_schedule.GetStopList())
    self.assert_(self.fm.b_merge_map[self.s2] in merged_schedule.GetStopList())
    self.assertEquals(self.sm.GetMergeStats(), (0, 1, 1))

  def testNoMerge_DifferentName(self):
    self.s2.stop_id = self.s1.stop_id
    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.b_schedule.AddStopObject(self.s2)
    self.accumulator.ExpectProblemClass(merge.SameIdButNotMerged)
    self.fm.MergeSchedules()
    merged_schedule = self.fm.GetMergedSchedule()
    self.assertEquals(len(merged_schedule.GetStopList()), 2)
    self.assert_(self.fm.a_merge_map[self.s1] in merged_schedule.GetStopList())
    self.assert_(self.fm.b_merge_map[self.s2] in merged_schedule.GetStopList())
    self.assertEquals(self.sm.GetMergeStats(), (0, 1, 1))

  def testNoMerge_FarApart(self):
    self.s2.stop_id = self.s1.stop_id
    self.s2.stop_name = self.s1.stop_name
    self.s2.stop_lat = 40.0
    self.s2.stop_lon = 40.0
    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.b_schedule.AddStopObject(self.s2)
    self.accumulator.ExpectProblemClass(merge.SameIdButNotMerged)
    self.fm.MergeSchedules()
    merged_schedule = self.fm.GetMergedSchedule()
    self.assertEquals(len(merged_schedule.GetStopList()), 2)
    self.assert_(self.fm.a_merge_map[self.s1] in merged_schedule.GetStopList())
    self.assert_(self.fm.b_merge_map[self.s2] in merged_schedule.GetStopList())
    self.assertEquals(self.sm.GetMergeStats(), (0, 1, 1))

    # check that the merged ids are different
    self.assertNotEquals(self.fm.a_merge_map[self.s1].stop_id,
                         self.fm.b_merge_map[self.s2].stop_id)

    self.accumulator.assertExpectedProblemsReported(self)

  def testMerge_CaseInsensitive(self):
    self.s2.stop_id = self.s1.stop_id
    self.s2.stop_name = self.s1.stop_name.upper()
    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.b_schedule.AddStopObject(self.s2)
    self.fm.MergeSchedules()
    merged_schedule = self.fm.GetMergedSchedule()
    self.assertEquals(len(merged_schedule.GetStopList()), 1)
    self.assertEquals(self.sm.GetMergeStats(), (1, 0, 0))

  def testNoMerge_ZoneId(self):
    self.s2.zone_id = 'zone2'
    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.b_schedule.AddStopObject(self.s2)
    self.fm.MergeSchedules()
    merged_schedule = self.fm.GetMergedSchedule()
    self.assertEquals(len(merged_schedule.GetStopList()), 2)
    self.assert_(self.s1.zone_id in self.fm.a_zone_map)
    self.assert_(self.s2.zone_id in self.fm.b_zone_map)
    self.assertEquals(self.sm.GetMergeStats(), (0, 1, 1))

    # check that the zones are still different
    self.assertNotEqual(self.fm.a_merge_map[self.s1].zone_id,
                        self.fm.b_merge_map[self.s2].zone_id)

  def testZoneId_SamePreservation(self):
    # checks that if the zone_ids of some stops are the same before the
    # merge, they are still the same after.
    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.a_schedule.AddStopObject(self.s2)
    self.fm.MergeSchedules()
    self.assertEquals(self.fm.a_merge_map[self.s1].zone_id,
                      self.fm.a_merge_map[self.s2].zone_id)

  def testZoneId_DifferentSchedules(self):
    # zone_ids may be the same in different schedules but unless the stops
    # are merged, they should map to different zone_ids
    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.b_schedule.AddStopObject(self.s2)
    self.fm.MergeSchedules()
    self.assertNotEquals(self.fm.a_merge_map[self.s1].zone_id,
                         self.fm.b_merge_map[self.s2].zone_id)

  def testZoneId_MergePreservation(self):
    # check that if two stops are merged, the zone mapping is used for all
    # other stops too
    self.s2.stop_id = self.s1.stop_id
    self.s2.stop_name = self.s1.stop_name
    s3 = transitfeed.Stop(field_dict=self.s1)
    s3.stop_id = 'different'

    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.a_schedule.AddStopObject(s3)
    self.fm.b_schedule.AddStopObject(self.s2)
    self.fm.MergeSchedules()

    self.assertEquals(self.fm.a_merge_map[self.s1].zone_id,
                      self.fm.a_merge_map[s3].zone_id)
    self.assertEquals(self.fm.a_merge_map[s3].zone_id,
                      self.fm.b_merge_map[self.s2].zone_id)

  def testMergeStationType(self):
    self.s2.stop_id = self.s1.stop_id
    self.s2.stop_name = self.s1.stop_name
    self.s1.location_type = 1
    self.s2.location_type = 1
    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.b_schedule.AddStopObject(self.s2)
    self.fm.MergeSchedules()
    merged_stops = self.fm.GetMergedSchedule().GetStopList()
    self.assertEquals(len(merged_stops), 1)
    self.assertEquals(merged_stops[0].location_type, 1)

  def testMergeDifferentTypes(self):
    self.s2.stop_id = self.s1.stop_id
    self.s2.stop_name = self.s1.stop_name
    self.s2.location_type = 1
    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.b_schedule.AddStopObject(self.s2)
    try:
      self.fm.MergeSchedules()
      self.fail("Expecting MergeError")
    except merge.SameIdButNotMerged, merge_error:
      self.assertTrue(("%s" % merge_error).find("location_type") != -1)

  def AssertS1ParentIsS2(self):
    """Assert that the merged s1 has parent s2."""
    new_s1 = self.s1._migrated_entity
    new_s2 = self.s2._migrated_entity
    self.assertEquals(new_s1.parent_station, new_s2.stop_id)
    self.assertEquals(new_s2.parent_station, None)
    self.assertEquals(new_s1.location_type, 0)
    self.assertEquals(new_s2.location_type, 1)

  def testMergeMaintainParentRelationship(self):
    self.s2.location_type = 1
    self.s1.parent_station = self.s2.stop_id
    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.a_schedule.AddStopObject(self.s2)
    self.fm.MergeSchedules()
    self.AssertS1ParentIsS2()

  def testParentRelationshipAfterMerge(self):
    s3 = transitfeed.Stop(field_dict=self.s1)
    s3.parent_station = self.s2.stop_id
    self.s2.location_type = 1
    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.b_schedule.AddStopObject(self.s2)
    self.fm.b_schedule.AddStopObject(s3)
    self.fm.MergeSchedules()
    self.AssertS1ParentIsS2()

  def testParentRelationshipWithNewParentid(self):
    self.s2.location_type = 1
    self.s1.parent_station = self.s2.stop_id
    # s3 will have a stop_id conflict with self.s2 so parent_id of the
    # migrated self.s1 will need to be updated
    s3 = transitfeed.Stop(field_dict=self.s2)
    s3.stop_lat = 45
    self.fm.a_schedule.AddStopObject(s3)
    self.fm.b_schedule.AddStopObject(self.s1)
    self.fm.b_schedule.AddStopObject(self.s2)
    self.accumulator.ExpectProblemClass(merge.SameIdButNotMerged)
    self.fm.MergeSchedules()
    self.assertNotEquals(s3._migrated_entity.stop_id,
                         self.s2._migrated_entity.stop_id)
    # Check that s2 got a new id
    self.assertNotEquals(self.s2.stop_id,
                         self.s2._migrated_entity.stop_id)
    self.AssertS1ParentIsS2()

  def _AddStopsApart(self):
    """Adds two stops to the schedules and returns the distance between them.

    Returns:
      The distance between the stops in metres, a value greater than zero.
    """
    self.s2.stop_id = self.s1.stop_id
    self.s2.stop_name = self.s1.stop_name
    self.s2.stop_lat += 1.0e-3
    self.fm.a_schedule.AddStopObject(self.s1)
    self.fm.b_schedule.AddStopObject(self.s2)
    return transitfeed.ApproximateDistanceBetweenStops(self.s1, self.s2)

  def testSetLargestStopDistanceSmall(self):
    largest_stop_distance = self._AddStopsApart() * 0.5
    self.sm.SetLargestStopDistance(largest_stop_distance)
    self.assertEquals(self.sm.largest_stop_distance, largest_stop_distance)
    self.accumulator.ExpectProblemClass(merge.SameIdButNotMerged)
    self.fm.MergeSchedules()
    self.assertEquals(len(self.fm.GetMergedSchedule().GetStopList()), 2)
    self.accumulator.assertExpectedProblemsReported(self)

  def testSetLargestStopDistanceLarge(self):
    largest_stop_distance = self._AddStopsApart() * 2.0
    self.sm.SetLargestStopDistance(largest_stop_distance)
    self.assertEquals(self.sm.largest_stop_distance, largest_stop_distance)
    self.fm.MergeSchedules()
    self.assertEquals(len(self.fm.GetMergedSchedule().GetStopList()), 1)

class TestRouteMerger(util.TestCase):

  fields = ['route_short_name', 'route_long_name', 'route_type',
            'route_url']

  def setUp(self):
    a_schedule = transitfeed.Schedule()
    b_schedule = transitfeed.Schedule()
    merged_schedule = transitfeed.Schedule()
    self.accumulator = TestingProblemAccumulator()
    self.problem_reporter = TestingProblemReporter(self.accumulator)
    self.fm = merge.FeedMerger(a_schedule, b_schedule, merged_schedule,
                               self.problem_reporter)
    self.fm.AddMerger(merge.AgencyMerger(self.fm))
    self.rm = merge.RouteMerger(self.fm)
    self.fm.AddMerger(self.rm)

    akwargs = {'id': 'a1',
               'agency_name': 'a1',
               'agency_url': 'http://www.a1.com',
               'agency_timezone': 'Europe/Zurich'}
    self.a1 = transitfeed.Agency(**akwargs)
    self.a2 = transitfeed.Agency(**akwargs)
    a_schedule.AddAgencyObject(self.a1)
    b_schedule.AddAgencyObject(self.a2)

    rkwargs = {'route_id': 'r1',
               'agency_id': 'a1',
               'short_name': 'r1',
               'long_name': 'r1r1',
               'route_type': '0'}
    self.r1 = transitfeed.Route(**rkwargs)
    self.r2 = transitfeed.Route(**rkwargs)
    self.r2.route_url = 'http://route/2'

  def testMerge(self):
    self.fm.a_schedule.AddRouteObject(self.r1)
    self.fm.b_schedule.AddRouteObject(self.r2)
    self.fm.MergeSchedules()

    merged_schedule = self.fm.GetMergedSchedule()
    self.assertEquals(len(merged_schedule.GetRouteList()), 1)
    r = merged_schedule.GetRouteList()[0]
    self.assert_(self.fm.a_merge_map[self.r1] is r)
    self.assert_(self.fm.b_merge_map[self.r2] is r)
    CheckAttribs(self.r2, r, self.fields, self.assertEquals)
    self.assertEquals(r.agency_id, self.fm.a_merge_map[self.a1].agency_id)
    self.assertEquals(self.rm.GetMergeStats(), (1, 0, 0))

    # check that the id is preserved
    self.assertEquals(self.fm.a_merge_map[self.r1].route_id, self.r1.route_id)

  def testMergeNoAgency(self):
    self.r1.agency_id = None
    self.r2.agency_id = None
    self.fm.a_schedule.AddRouteObject(self.r1)
    self.fm.b_schedule.AddRouteObject(self.r2)
    self.fm.MergeSchedules()

    merged_schedule = self.fm.GetMergedSchedule()
    self.assertEquals(len(merged_schedule.GetRouteList()), 1)
    r = merged_schedule.GetRouteList()[0]
    CheckAttribs(self.r2, r, self.fields, self.assertEquals)
    # Merged route has copy of default agency_id
    self.assertEquals(r.agency_id, self.a1.agency_id)
    self.assertEquals(self.rm.GetMergeStats(), (1, 0, 0))

    # check that the id is preserved
    self.assertEquals(self.fm.a_merge_map[self.r1].route_id, self.r1.route_id)

  def testMigrateNoAgency(self):
    self.r1.agency_id = None
    self.fm.a_schedule.AddRouteObject(self.r1)
    self.fm.MergeSchedules()
    merged_schedule = self.fm.GetMergedSchedule()
    self.assertEquals(len(merged_schedule.GetRouteList()), 1)
    r = merged_schedule.GetRouteList()[0]
    CheckAttribs(self.r1, r, self.fields, self.assertEquals)
    # Migrated route has copy of default agency_id
    self.assertEquals(r.agency_id, self.a1.agency_id)

  def testNoMerge_DifferentId(self):
    self.r2.route_id = 'r2'
    self.fm.a_schedule.AddRouteObject(self.r1)
    self.fm.b_schedule.AddRouteObject(self.r2)
    self.fm.MergeSchedules()
    self.assertEquals(len(self.fm.GetMergedSchedule().GetRouteList()), 2)
    self.assertEquals(self.rm.GetMergeStats(), (0, 1, 1))

  def testNoMerge_SameId(self):
    self.r2.route_short_name = 'different'
    self.fm.a_schedule.AddRouteObject(self.r1)
    self.fm.b_schedule.AddRouteObject(self.r2)
    self.accumulator.ExpectProblemClass(merge.SameIdButNotMerged)
    self.fm.MergeSchedules()
    self.assertEquals(len(self.fm.GetMergedSchedule().GetRouteList()), 2)
    self.assertEquals(self.rm.GetMergeStats(), (0, 1, 1))

    # check that the merged ids are different
    self.assertNotEquals(self.fm.a_merge_map[self.r1].route_id,
                         self.fm.b_merge_map[self.r2].route_id)

    self.accumulator.assertExpectedProblemsReported(self)

class TestTripMerger(util.TestCase):

  def setUp(self):
    a_schedule = transitfeed.Schedule()
    b_schedule = transitfeed.Schedule()
    merged_schedule = transitfeed.Schedule()
    self.accumulator = TestingProblemAccumulator()
    self.problem_reporter = TestingProblemReporter(self.accumulator)
    self.fm = merge.FeedMerger(a_schedule, b_schedule, merged_schedule,
                               self.problem_reporter)
    self.fm.AddDefaultMergers()
    self.tm = self.fm.GetMerger(merge.TripMerger)

    akwargs = {'id': 'a1',
               'agency_name': 'a1',
               'agency_url': 'http://www.a1.com',
               'agency_timezone': 'Europe/Zurich'}
    self.a1 = transitfeed.Agency(**akwargs)

    rkwargs = {'route_id': 'r1',
               'agency_id': 'a1',
               'short_name': 'r1',
               'long_name': 'r1r1',
               'route_type': '0'}
    self.r1 = transitfeed.Route(**rkwargs)

    self.s1 = transitfeed.ServicePeriod('s1')
    self.s1.start_date = '20071201'
    self.s1.end_date = '20071231'
    self.s1.SetWeekdayService()

    self.shape = transitfeed.Shape('shape1')
    self.shape.AddPoint(30.0, 30.0)

    self.t1 = transitfeed.Trip(service_period=self.s1,
                               route=self.r1, trip_id='t1')
    self.t2 = transitfeed.Trip(service_period=self.s1,
                               route=self.r1, trip_id='t2')
    # Must add self.t1 to a schedule before calling self.t1.AddStopTime
    a_schedule.AddTripObject(self.t1, validate=False)
    a_schedule.AddTripObject(self.t2, validate=False)
    self.t1.block_id = 'b1'
    self.t2.block_id = 'b1'
    self.t1.shape_id = 'shape1'

    self.stop = transitfeed.Stop(30.0, 30.0, stop_id='stop1')
    self.t1.AddStopTime(self.stop, arrival_secs=0, departure_secs=0)

    a_schedule.AddAgencyObject(self.a1)
    a_schedule.AddStopObject(self.stop)
    a_schedule.AddRouteObject(self.r1)
    a_schedule.AddServicePeriodObject(self.s1)
    a_schedule.AddShapeObject(self.shape)

  def testMigrate(self):
    self.accumulator.ExpectProblemClass(merge.MergeNotImplemented)
    self.fm.MergeSchedules()
    self.accumulator.assertExpectedProblemsReported(self)

    r = self.fm.a_merge_map[self.r1]
    s = self.fm.a_merge_map[self.s1]
    shape = self.fm.a_merge_map[self.shape]
    t1 = self.fm.a_merge_map[self.t1]
    t2 = self.fm.a_merge_map[self.t2]

    self.assertEquals(t1.route_id, r.route_id)
    self.assertEquals(t1.service_id, s.service_id)
    self.assertEquals(t1.shape_id, shape.shape_id)
    self.assertEquals(t1.block_id, t2.block_id)

    self.assertEquals(len(t1.GetStopTimes()), 1)
    st = t1.GetStopTimes()[0]
    self.assertEquals(st.stop, self.fm.a_merge_map[self.stop])

  def testReportsNotImplementedProblem(self):
    self.accumulator.ExpectProblemClass(merge.MergeNotImplemented)
    self.fm.MergeSchedules()
    self.accumulator.assertExpectedProblemsReported(self)

  def testMergeStats(self):
    self.assert_(self.tm.GetMergeStats() is None)

  def testConflictingTripid(self):
    a1_in_b = transitfeed.Agency(field_dict=self.a1)
    r1_in_b = transitfeed.Route(field_dict=self.r1)
    t1_in_b = transitfeed.Trip(field_dict=self.t1)
    shape_in_b = transitfeed.Shape('shape1')
    shape_in_b.AddPoint(30.0, 30.0)
    s_in_b = transitfeed.ServicePeriod('s1')
    s_in_b.start_date = '20080101'
    s_in_b.end_date = '20080131'
    s_in_b.SetWeekdayService()

    self.fm.b_schedule.AddAgencyObject(a1_in_b)
    self.fm.b_schedule.AddRouteObject(r1_in_b)
    self.fm.b_schedule.AddShapeObject(shape_in_b)
    self.fm.b_schedule.AddTripObject(t1_in_b, validate=False)
    self.fm.b_schedule.AddServicePeriodObject(s_in_b, validate=False)
    self.accumulator.ExpectProblemClass(merge.MergeNotImplemented)
    self.fm.MergeSchedules()
    # 3 trips moved to merged_schedule: from a_schedule t1, t2 and from
    # b_schedule t1
    self.assertEquals(len(self.fm.merged_schedule.GetTripList()), 3)

class TestFareMerger(util.TestCase):

  def setUp(self):
    a_schedule = transitfeed.Schedule()
    b_schedule = transitfeed.Schedule()
    merged_schedule = transitfeed.Schedule()
    self.accumulator = TestingProblemAccumulator()
    self.problem_reporter = TestingProblemReporter(self.accumulator)
    self.fm = merge.FeedMerger(a_schedule, b_schedule, merged_schedule,
                               self.problem_reporter)
    self.faremerger = merge.FareMerger(self.fm)
    self.fm.AddMerger(self.faremerger)

    self.f1 = transitfeed.FareAttribute('f1', '10', 'ZAR', '1', '0')
    self.f2 = transitfeed.FareAttribute('f2', '10', 'ZAR', '1', '0')

  def testMerge(self):
    self.f2.fare_id = self.f1.fare_id
    self.fm.a_schedule.AddFareAttributeObject(self.f1)
    self.fm.b_schedule.AddFareAttributeObject(self.f2)
    self.fm.MergeSchedules()
    self.assertEquals(len(self.fm.merged_schedule.GetFareAttributeList()), 1)
    self.assertEquals(self.faremerger.GetMergeStats(), (1, 0, 0))

    # check that the id is preserved
    self.assertEquals(self.fm.a_merge_map[self.f1].fare_id, self.f1.fare_id)

  def testNoMerge_DifferentPrice(self):
    self.f2.fare_id = self.f1.fare_id
    self.f2.price = 11.0
    self.fm.a_schedule.AddFareAttributeObject(self.f1)
    self.fm.b_schedule.AddFareAttributeObject(self.f2)
    self.accumulator.ExpectProblemClass(merge.SameIdButNotMerged)
    self.fm.MergeSchedules()
    self.assertEquals(len(self.fm.merged_schedule.GetFareAttributeList()), 2)
    self.assertEquals(self.faremerger.GetMergeStats(), (0, 1, 1))

    # check that the merged ids are different
    self.assertNotEquals(self.fm.a_merge_map[self.f1].fare_id,
                         self.fm.b_merge_map[self.f2].fare_id)

    self.accumulator.assertExpectedProblemsReported(self)

  def testNoMerge_DifferentId(self):
    self.fm.a_schedule.AddFareAttributeObject(self.f1)
    self.fm.b_schedule.AddFareAttributeObject(self.f2)
    self.fm.MergeSchedules()
    self.assertEquals(len(self.fm.merged_schedule.GetFareAttributeList()), 2)
    self.assertEquals(self.faremerger.GetMergeStats(), (0, 1, 1))

    # check that the ids are preserved
    self.assertEquals(self.fm.a_merge_map[self.f1].fare_id, self.f1.fare_id)
    self.assertEquals(self.fm.b_merge_map[self.f2].fare_id, self.f2.fare_id)

class TestShapeMerger(util.TestCase):

  def setUp(self):
    a_schedule = transitfeed.Schedule()
    b_schedule = transitfeed.Schedule()
    merged_schedule = transitfeed.Schedule()
    self.accumulator = TestingProblemAccumulator()
    self.problem_reporter = TestingProblemReporter(self.accumulator)
    self.fm = merge.FeedMerger(a_schedule, b_schedule, merged_schedule,
                               self.problem_reporter)
    self.sm = merge.ShapeMerger(self.fm)
    self.fm.AddMerger(self.sm)

    # setup some shapes
    # s1 and s2 have the same endpoints but take different paths
    # s3 has different endpoints to s1 and s2
    self.s1 = transitfeed.Shape('s1')
    self.s1.AddPoint(30.0, 30.0)
    self.s1.AddPoint(40.0, 30.0)
    self.s1.AddPoint(50.0, 50.0)

    self.s2 = transitfeed.Shape('s2')
    self.s2.AddPoint(30.0, 30.0)
    self.s2.AddPoint(40.0, 35.0)
    self.s2.AddPoint(50.0, 50.0)

    self.s3 = transitfeed.Shape('s3')
    self.s3.AddPoint(31.0, 31.0)
    self.s3.AddPoint(45.0, 35.0)
    self.s3.AddPoint(51.0, 51.0)

  def testMerge(self):
    self.s2.shape_id = self.s1.shape_id
    self.fm.a_schedule.AddShapeObject(self.s1)
    self.fm.b_schedule.AddShapeObject(self.s2)
    self.fm.MergeSchedules()
    self.assertEquals(len(self.fm.merged_schedule.GetShapeList()), 1)
    self.assertEquals(self.fm.merged_schedule.GetShapeList()[0], self.s2)
    self.assertEquals(self.sm.GetMergeStats(), (1, 0, 0))

    # check that the id is preserved
    self.assertEquals(self.fm.a_merge_map[self.s1].shape_id, self.s1.shape_id)

  def testNoMerge_DifferentId(self):
    self.fm.a_schedule.AddShapeObject(self.s1)
    self.fm.b_schedule.AddShapeObject(self.s2)
    self.fm.MergeSchedules()
    self.assertEquals(len(self.fm.merged_schedule.GetShapeList()), 2)
    self.assertEquals(self.s1, self.fm.a_merge_map[self.s1])
    self.assertEquals(self.s2, self.fm.b_merge_map[self.s2])
    self.assertEquals(self.sm.GetMergeStats(), (0, 1, 1))

    # check that the ids are preserved
    self.assertEquals(self.fm.a_merge_map[self.s1].shape_id, self.s1.shape_id)
    self.assertEquals(self.fm.b_merge_map[self.s2].shape_id, self.s2.shape_id)

  def testNoMerge_FarEndpoints(self):
    self.s3.shape_id = self.s1.shape_id
    self.fm.a_schedule.AddShapeObject(self.s1)
    self.fm.b_schedule.AddShapeObject(self.s3)
    self.accumulator.ExpectProblemClass(merge.SameIdButNotMerged)
    self.fm.MergeSchedules()
    self.assertEquals(len(self.fm.merged_schedule.GetShapeList()), 2)
    self.assertEquals(self.s1, self.fm.a_merge_map[self.s1])
    self.assertEquals(self.s3, self.fm.b_merge_map[self.s3])
    self.assertEquals(self.sm.GetMergeStats(), (0, 1, 1))

    # check that the ids are different
    self.assertNotEquals(self.fm.a_merge_map[self.s1].shape_id,
                         self.fm.b_merge_map[self.s3].shape_id)

    self.accumulator.assertExpectedProblemsReported(self)

  def _AddShapesApart(self):
    """Adds two shapes to the schedules.

    The maximum of the distances between the endpoints is returned.

    Returns:
      The distance in metres, a value greater than zero.
    """
    self.s3.shape_id = self.s1.shape_id
    self.fm.a_schedule.AddShapeObject(self.s1)
    self.fm.b_schedule.AddShapeObject(self.s3)
    distance1 = merge.ApproximateDistanceBetweenPoints(
        self.s1.points[0][:2], self.s3.points[0][:2])
    distance2 = merge.ApproximateDistanceBetweenPoints(
        self.s1.points[-1][:2], self.s3.points[-1][:2])
    return max(distance1, distance2)

  def testSetLargestShapeDistanceSmall(self):
    largest_shape_distance = self._AddShapesApart() * 0.5
    self.sm.SetLargestShapeDistance(largest_shape_distance)
    self.assertEquals(self.sm.largest_shape_distance, largest_shape_distance)
    self.accumulator.ExpectProblemClass(merge.SameIdButNotMerged)
    self.fm.MergeSchedules()
    self.assertEquals(len(self.fm.GetMergedSchedule().GetShapeList()), 2)
    self.accumulator.assertExpectedProblemsReported(self)

  def testSetLargestShapeDistanceLarge(self):
    largest_shape_distance = self._AddShapesApart() * 2.0
    self.sm.SetLargestShapeDistance(largest_shape_distance)
    self.assertEquals(self.sm.largest_shape_distance, largest_shape_distance)
    self.fm.MergeSchedules()
    self.assertEquals(len(self.fm.GetMergedSchedule().GetShapeList()), 1)

class TestFareRuleMerger(util.TestCase):

  def setUp(self):
    a_schedule = transitfeed.Schedule()
    b_schedule = transitfeed.Schedule()
    merged_schedule = transitfeed.Schedule()
    self.accumulator = TestingProblemAccumulator()
    self.problem_reporter = TestingProblemReporter(self.accumulator)
    self.fm = merge.FeedMerger(a_schedule, b_schedule, merged_schedule,
                               self.problem_reporter)
    self.fm.AddDefaultMergers()
    self.fare_rule_merger = self.fm.GetMerger(merge.FareRuleMerger)

    akwargs = {'id': 'a1',
               'agency_name': 'a1',
               'agency_url': 'http://www.a1.com',
               'agency_timezone': 'Europe/Zurich'}
    self.a1 = transitfeed.Agency(**akwargs)
    self.a2 = transitfeed.Agency(**akwargs)

    rkwargs = {'route_id': 'r1',
               'agency_id': 'a1',
               'short_name': 'r1',
               'long_name': 'r1r1',
               'route_type': '0'}
    self.r1 = transitfeed.Route(**rkwargs)
    self.r2 = transitfeed.Route(**rkwargs)

    self.f1 = transitfeed.FareAttribute('f1', '10', 'ZAR', '1', '0')
    self.f2 = transitfeed.FareAttribute('f1', '10', 'ZAR', '1', '0')
    self.f3 = transitfeed.FareAttribute('f3', '11', 'USD', '1', '0')
    self.fr1 = transitfeed.FareRule('f1', 'r1')
    self.fr2 = transitfeed.FareRule('f1', 'r1')
    self.fr3 = transitfeed.FareRule('f3', 'r1')

    self.fm.a_schedule.AddAgencyObject(self.a1)
    self.fm.a_schedule.AddRouteObject(self.r1)
    self.fm.a_schedule.AddFareAttributeObject(self.f1)
    self.fm.a_schedule.AddFareAttributeObject(self.f3)
    self.fm.a_schedule.AddFareRuleObject(self.fr1)
    self.fm.a_schedule.AddFareRuleObject(self.fr3)

    self.fm.b_schedule.AddAgencyObject(self.a2)
    self.fm.b_schedule.AddRouteObject(self.r2)
    self.fm.b_schedule.AddFareAttributeObject(self.f2)
    self.fm.b_schedule.AddFareRuleObject(self.fr2)

  def testMerge(self):
    self.accumulator.ExpectProblemClass(merge.FareRulesBroken)
    self.accumulator.ExpectProblemClass(merge.MergeNotImplemented)
    self.fm.MergeSchedules()

    self.assertEquals(len(self.fm.merged_schedule.GetFareAttributeList()), 2)

    fare_1 = self.fm.a_merge_map[self.f1]
    fare_2 = self.fm.a_merge_map[self.f3]

    self.assertEquals(len(fare_1.GetFareRuleList()), 1)
    fare_rule_1 = fare_1.GetFareRuleList()[0]
    self.assertEquals(len(fare_2.GetFareRuleList()), 1)
    fare_rule_2 = fare_2.GetFareRuleList()[0]

    self.assertEquals(fare_rule_1.fare_id,
                      self.fm.a_merge_map[self.f1].fare_id)
    self.assertEquals(fare_rule_1.route_id,
                      self.fm.a_merge_map[self.r1].route_id)
    self.assertEqual(fare_rule_2.fare_id,
                     self.fm.a_merge_map[self.f3].fare_id)
    self.assertEqual(fare_rule_2.route_id,
                     self.fm.a_merge_map[self.r1].route_id)

    self.accumulator.assertExpectedProblemsReported(self)

  def testMergeStats(self):
    self.assert_(self.fare_rule_merger.GetMergeStats() is None)

class TestTransferMerger(util.TestCase):

  def setUp(self):
    a_schedule = transitfeed.Schedule()
    b_schedule = transitfeed.Schedule()
    merged_schedule = transitfeed.Schedule()
    self.accumulator = TestingProblemAccumulator()
    self.problem_reporter = TestingProblemReporter(self.accumulator)
    self.fm = merge.FeedMerger(a_schedule, b_schedule, merged_schedule,
                               self.problem_reporter)

  def testStopsMerged(self):
    stop0 = transitfeed.Stop(lat=30.0, lng=30.0, name="0", stop_id="0")
    stop1 = transitfeed.Stop(lat=30.1, lng=30.1, name="1", stop_id="1")
    self.fm.a_schedule.AddStopObject(transitfeed.Stop(field_dict=stop0))
    self.fm.b_schedule.AddStopObject(transitfeed.Stop(field_dict=stop0))

    self.fm.a_schedule.AddStopObject(transitfeed.Stop(field_dict=stop1))
    self.fm.b_schedule.AddStopObject(transitfeed.Stop(field_dict=stop1))

    self.fm.a_schedule.AddTransferObject(transitfeed.Transfer(from_stop_id="0",
                                                              to_stop_id="1"))
    self.fm.b_schedule.AddTransferObject(transitfeed.Transfer(from_stop_id="0",
                                                              to_stop_id="1"))
    self.fm.AddMerger(merge.StopMerger(self.fm))
    self.fm.AddMerger(merge.TransferMerger(self.fm))
    self.fm.MergeSchedules()
    transfers = self.fm.merged_schedule.GetTransferList()
    self.assertEquals(1, len(transfers))
    self.assertEquals("0", transfers[0].from_stop_id)
    self.assertEquals("1", transfers[0].to_stop_id)

  def testToStopNotMerged(self):
    """When stops aren't merged transfer is duplicated."""
    self.accumulator.ExpectProblemClass(merge.SameIdButNotMerged)
    stop0 = transitfeed.Stop(lat=30.0, lng=30.0, name="0", stop_id="0")
    stop1a = transitfeed.Stop(lat=30.1, lng=30.1, name="1a", stop_id="1")
    stop1b = transitfeed.Stop(lat=30.1, lng=30.1, name="1b", stop_id="1")

    # a_schedule and b_schedule both have a transfer with to_stop_id=1 but the
    # stops are not merged so the transfer must be duplicated. Create a copy
    # of the Stop objects to add to the schedules.
    self.fm.a_schedule.AddStopObject(transitfeed.Stop(field_dict=stop0))
    self.fm.a_schedule.AddStopObject(transitfeed.Stop(field_dict=stop1a))
    self.fm.a_schedule.AddTransferObject(
        transitfeed.Transfer(from_stop_id="0", to_stop_id="1"))
    self.fm.b_schedule.AddStopObject(transitfeed.Stop(field_dict=stop0))
    self.fm.b_schedule.AddStopObject(transitfeed.Stop(field_dict=stop1b))
    self.fm.b_schedule.AddTransferObject(
        transitfeed.Transfer(from_stop_id="0", to_stop_id="1"))

    self.fm.AddMerger(merge.StopMerger(self.fm))
    self.fm.AddMerger(merge.TransferMerger(self.fm))
    self.fm.MergeSchedules()
    transfers = self.fm.merged_schedule.GetTransferList()
    self.assertEquals(2, len(transfers))
    self.assertEquals("0", transfers[0].from_stop_id)
    self.assertEquals("0", transfers[1].from_stop_id)

    # transfers are not ordered so allow the migrated to_stop_id values to
    # appear in either order.
    def MergedScheduleStopName(stop_id):
      return self.fm.merged_schedule.GetStop(stop_id).stop_name
    if MergedScheduleStopName(transfers[0].to_stop_id) == "1a":
      self.assertEquals("1b", MergedScheduleStopName(transfers[1].to_stop_id))
    else:
      self.assertEquals("1b", MergedScheduleStopName(transfers[0].to_stop_id))
      self.assertEquals("1a", MergedScheduleStopName(transfers[1].to_stop_id))

  def testFromStopNotMerged(self):
    """When stops aren't merged transfer is duplicated."""
    self.accumulator.ExpectProblemClass(merge.SameIdButNotMerged)
    stop0 = transitfeed.Stop(lat=30.0, lng=30.0, name="0", stop_id="0")
    stop1a = transitfeed.Stop(lat=30.1, lng=30.1, name="1a", stop_id="1")
    stop1b = transitfeed.Stop(lat=30.1, lng=30.1, name="1b", stop_id="1")

    # a_schedule and b_schedule both have a transfer with from_stop_id=1 but the
    # stops are not merged so the transfer must be duplicated. Create a copy
    # of the Stop objects to add to the schedules.
    self.fm.a_schedule.AddStopObject(transitfeed.Stop(field_dict=stop0))
    self.fm.a_schedule.AddStopObject(transitfeed.Stop(field_dict=stop1a))
    self.fm.a_schedule.AddTransferObject(
        transitfeed.Transfer(from_stop_id="1", to_stop_id="0"))
    self.fm.b_schedule.AddStopObject(transitfeed.Stop(field_dict=stop0))
    self.fm.b_schedule.AddStopObject(transitfeed.Stop(field_dict=stop1b))
    self.fm.b_schedule.AddTransferObject(
        transitfeed.Transfer(from_stop_id="1", to_stop_id="0"))

    self.fm.AddMerger(merge.StopMerger(self.fm))
    self.fm.AddMerger(merge.TransferMerger(self.fm))
    self.fm.MergeSchedules()
    transfers = self.fm.merged_schedule.GetTransferList()
    self.assertEquals(2, len(transfers))
    self.assertEquals("0", transfers[0].to_stop_id)
    self.assertEquals("0", transfers[1].to_stop_id)

    # transfers are not ordered so allow the migrated from_stop_id values to
    # appear in either order.
    def MergedScheduleStopName(stop_id):
      return self.fm.merged_schedule.GetStop(stop_id).stop_name
    if MergedScheduleStopName(transfers[0].from_stop_id) == "1a":
      self.assertEquals("1b", MergedScheduleStopName(transfers[1].from_stop_id))
    else:
      self.assertEquals("1b", MergedScheduleStopName(transfers[0].from_stop_id))
      self.assertEquals("1a", MergedScheduleStopName(transfers[1].from_stop_id))

class TestExceptionProblemAccumulator(util.TestCase):

  def setUp(self):
    self.dataset_merger = merge.TripMerger(None)

  def testRaisesErrors(self):
    accumulator = transitfeed.ExceptionProblemAccumulator()
    problem_reporter = merge.MergeProblemReporter(accumulator)
    self.assertRaises(merge.CalendarsNotDisjoint,
                      problem_reporter.CalendarsNotDisjoint,
                      self.dataset_merger)

  def testNoRaiseWarnings(self):
    accumulator = transitfeed.ExceptionProblemAccumulator()
    problem_reporter = merge.MergeProblemReporter(accumulator)
    problem_reporter.MergeNotImplemented(self.dataset_merger)

  def testRaiseWarnings(self):
    accumulator = transitfeed.ExceptionProblemAccumulator(True)
    problem_reporter = merge.MergeProblemReporter(accumulator)
    self.assertRaises(merge.MergeNotImplemented,
                      problem_reporter.MergeNotImplemented,
                      self.dataset_merger)

class TestHTMLProblemAccumulator(util.TestCase):

  def setUp(self):
    self.accumulator = merge.HTMLProblemAccumulator()
    self.problem_reporter = merge.MergeProblemReporter(self.accumulator)

    a_schedule = transitfeed.Schedule()
    b_schedule = transitfeed.Schedule()
    merged_schedule = transitfeed.Schedule()
    self.feed_merger = merge.FeedMerger(a_schedule, b_schedule,
                                        merged_schedule,
                                        self.problem_reporter)
    self.dataset_merger = merge.TripMerger(None)

  def testGeneratesSomeHTML(self):
    self.problem_reporter.CalendarsNotDisjoint(self.dataset_merger)
    self.problem_reporter.MergeNotImplemented(self.dataset_merger)
    self.problem_reporter.FareRulesBroken(self.dataset_merger)
    self.problem_reporter.SameIdButNotMerged(self.dataset_merger,
                                             'test', 'unknown reason')

    output_file = StringIO.StringIO()
    old_feed_path = '/path/to/old/feed'
    new_feed_path = '/path/to/new/feed'
    merged_feed_path = '/path/to/merged/feed'
    self.accumulator.WriteOutput(output_file, self.feed_merger,
                                 old_feed_path, new_feed_path,
                                 merged_feed_path)

    html = output_file.getvalue()
    self.assert_(html.startswith('<html>'))
    self.assert_(html.endswith('</html>'))
class MergeInSubprocessTestCase(util.TempDirTestCaseBase): | |
def CopyAndModifyTestData(self, zip_path, modify_file, old, new): | |
"""Return path of zip_path copy with old replaced by new in modify_file.""" | |
zipfile_mem = StringIO.StringIO(open(zip_path, 'rb').read()) | |
old_zip = zipfile.ZipFile(zipfile_mem, 'r') | |
content_dict = self.ConvertZipToDict(old_zip) | |
content_dict[modify_file] = content_dict[modify_file].replace(old, new) | |
new_zipfile_mem = self.ConvertDictToZip(content_dict) | |
new_zip_path = os.path.join(self.tempdirpath, "modified.zip") | |
open(new_zip_path, 'wb').write(new_zipfile_mem.getvalue()) | |
return new_zip_path | |
def testCrashHandler(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('merge.py'), '--no_browser', | |
'IWantMyCrash', 'file2', 'fileout.zip'], | |
expected_retcode=127) | |
self.assertMatchesRegex(r'Yikes', out) | |
crashout = open('transitfeedcrash.txt').read() | |
self.assertMatchesRegex(r'For testing the merge crash handler', crashout) | |
def testMergeBadCommandLine(self): | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('merge.py'), '--no_browser'], | |
expected_retcode=2) | |
self.assertFalse(out) | |
self.assertMatchesRegex(r'command line arguments', err) | |
self.assertFalse(os.path.exists('transitfeedcrash.txt')) | |
def testMergeWithWarnings(self): | |
# Make a copy of good_feed.zip that is not active until 20110101. This
# avoids adding another test/data file; good_feed.zip itself must remain
# error-free, so its start date can't simply be moved into the future.
future_good_feed = self.CopyAndModifyTestData( | |
self.GetPath('test/data/good_feed.zip'), 'calendar.txt', | |
'20070101', '20110101') | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('merge.py'), '--no_browser', | |
self.GetPath('test/data/unused_stop'), | |
future_good_feed, | |
os.path.join(self.tempdirpath, 'merged-warnings.zip')], | |
expected_retcode=0) | |
def testMergeWithErrors(self): | |
# Make a copy of good_feed.zip that is not active until 20110101. This
# avoids adding another test/data file; good_feed.zip itself must remain
# error-free, so its start date can't simply be moved into the future.
future_good_feed = self.CopyAndModifyTestData( | |
self.GetPath('test/data/good_feed.zip'), 'calendar.txt', | |
'20070101', '20110101') | |
(out, err) = self.CheckCallWithPath( | |
[self.GetPath('merge.py'), '--no_browser', | |
self.GetPath('test/data/unused_stop'), | |
future_good_feed], | |
expected_retcode=2) | |
if __name__ == '__main__': | |
unittest.main() | |
#!/usr/bin/python2.4 | |
# | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""Tests for transitfeed.shapelib.py""" | |
__author__ = 'chris.harrelson.code@gmail.com (Chris Harrelson)' | |
import math | |
from transitfeed import shapelib | |
from transitfeed.shapelib import Point | |
from transitfeed.shapelib import Poly | |
from transitfeed.shapelib import PolyCollection | |
from transitfeed.shapelib import PolyGraph | |
import unittest | |
import util | |
def formatPoint(p, precision=12): | |
formatString = "(%%.%df, %%.%df, %%.%df)" % (precision, precision, precision) | |
return formatString % (p.x, p.y, p.z) | |
def formatPoints(points): | |
return "[%s]" % ", ".join([formatPoint(p, precision=4) for p in points]) | |
class ShapeLibTestBase(util.TestCase): | |
def assertApproxEq(self, a, b): | |
self.assertAlmostEqual(a, b, 8) | |
def assertPointApproxEq(self, a, b): | |
try: | |
self.assertApproxEq(a.x, b.x) | |
self.assertApproxEq(a.y, b.y) | |
self.assertApproxEq(a.z, b.z) | |
except AssertionError: | |
print 'ERROR: %s != %s' % (formatPoint(a), formatPoint(b)) | |
raise | |
def assertPointsApproxEq(self, points1, points2): | |
try: | |
self.assertEqual(len(points1), len(points2)) | |
except AssertionError: | |
print "ERROR: %s != %s" % (formatPoints(points1), formatPoints(points2)) | |
raise | |
for i in xrange(len(points1)): | |
try: | |
self.assertPointApproxEq(points1[i], points2[i]) | |
except AssertionError: | |
print ('ERROR: points not equal in position %d\n%s != %s' | |
% (i, formatPoints(points1), formatPoints(points2))) | |
raise | |
class TestPoints(ShapeLibTestBase): | |
def testPoints(self): | |
p = Point(1, 1, 1) | |
self.assertApproxEq(p.DotProd(p), 3) | |
self.assertApproxEq(p.Norm2(), math.sqrt(3)) | |
self.assertPointApproxEq(Point(1.5, 1.5, 1.5), | |
p.Times(1.5)) | |
norm = 1.7320508075688772 | |
self.assertPointApproxEq(p.Normalize(), | |
Point(1 / norm, | |
1 / norm, | |
1 / norm)) | |
p2 = Point(1, 0, 0) | |
self.assertPointApproxEq(p2, p2.Normalize()) | |
def testCrossProd(self): | |
p1 = Point(1, 0, 0).Normalize() | |
p2 = Point(0, 1, 0).Normalize()
p1_cross_p2 = p1.CrossProd(p2) | |
self.assertApproxEq(p1_cross_p2.x, 0) | |
self.assertApproxEq(p1_cross_p2.y, 0) | |
self.assertApproxEq(p1_cross_p2.z, 1) | |
def testRobustCrossProd(self): | |
p1 = Point(1, 0, 0) | |
p2 = Point(1, 0, 0) | |
self.assertPointApproxEq(Point(0, 0, 0), | |
p1.CrossProd(p2)) | |
# only needs to be an arbitrary vector perpendicular to (1, 0, 0) | |
self.assertPointApproxEq( | |
Point(0.000000000000000, -0.998598452020993, 0.052925717957113), | |
p1.RobustCrossProd(p2)) | |
def testS2LatLong(self): | |
point = Point.FromLatLng(30, 40) | |
self.assertPointApproxEq(Point(0.663413948169, | |
0.556670399226, | |
0.5), point) | |
(lat, lng) = point.ToLatLng() | |
self.assertApproxEq(30, lat) | |
self.assertApproxEq(40, lng) | |
def testOrtho(self): | |
point = Point(1, 1, 1) | |
ortho = point.Ortho() | |
self.assertApproxEq(ortho.DotProd(point), 0) | |
def testAngle(self): | |
point1 = Point(1, 1, 0).Normalize() | |
point2 = Point(0, 1, 0) | |
self.assertApproxEq(45, point1.Angle(point2) * 360 / (2 * math.pi)) | |
self.assertApproxEq(point1.Angle(point2), point2.Angle(point1)) | |
def testGetDistanceMeters(self): | |
point1 = Point.FromLatLng(40.536895,-74.203033) | |
point2 = Point.FromLatLng(40.575239,-74.112825) | |
self.assertApproxEq(8732.623770873237, | |
point1.GetDistanceMeters(point2)) | |
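testS2LatLong and testGetDistanceMeters above rely on the latitude/longitude-to-unit-vector mapping. A minimal standalone sketch of that conversion and the implied great-circle distance (the exact Earth radius shapelib uses is an assumption here, as are the function names):

```python
import math

def unit_vector(lat_deg, lng_deg):
    # Latitude/longitude in degrees to a unit vector on the sphere,
    # matching testS2LatLong: FromLatLng(30, 40) -> (~0.663, ~0.557, 0.5).
    lat = math.radians(lat_deg)
    lng = math.radians(lng_deg)
    return (math.cos(lat) * math.cos(lng),
            math.cos(lat) * math.sin(lng),
            math.sin(lat))

def great_circle_meters(a, b, radius_meters=6371010.0):
    # Distance is the angle between the two unit vectors times the
    # sphere radius; the radius value here is an assumption.
    dot = sum(x * y for x, y in zip(unit_vector(*a), unit_vector(*b)))
    return math.acos(max(-1.0, min(1.0, dot))) * radius_meters
```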
class TestClosestPoint(ShapeLibTestBase): | |
def testGetClosestPoint(self): | |
x = Point(1, 1, 0).Normalize() | |
a = Point(1, 0, 0) | |
b = Point(0, 1, 0) | |
closest = shapelib.GetClosestPoint(x, a, b) | |
self.assertApproxEq(0.707106781187, closest.x) | |
self.assertApproxEq(0.707106781187, closest.y) | |
self.assertApproxEq(0.0, closest.z) | |
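The expected value in testGetClosestPoint can be reproduced by projecting the query point onto the plane of the great circle through a and b and renormalizing; a sketch under that assumption (endpoint clamping, which the real shapelib.GetClosestPoint would also need, is omitted for brevity):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def closest_on_great_circle(x, a, b):
    # Remove x's component along the normal of the plane spanned by a
    # and b, then push the result back onto the unit sphere.
    n = normalize(cross(a, b))
    d = sum(xi * ni for xi, ni in zip(x, n))
    return normalize(tuple(xi - d * ni for xi, ni in zip(x, n)))
```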
class TestPoly(ShapeLibTestBase): | |
def testGetClosestPointShape(self): | |
poly = Poly() | |
poly.AddPoint(Point(1, 1, 0).Normalize()) | |
self.assertPointApproxEq(Point( | |
0.707106781187, 0.707106781187, 0), poly.GetPoint(0)) | |
point = Point(0, 1, 1).Normalize() | |
self.assertPointApproxEq(Point(1, 1, 0).Normalize(), | |
poly.GetClosestPoint(point)[0]) | |
poly.AddPoint(Point(0, 1, 1).Normalize()) | |
self.assertPointApproxEq( | |
Point(0, 1, 1).Normalize(), | |
poly.GetClosestPoint(point)[0]) | |
def testCutAtClosestPoint(self): | |
poly = Poly() | |
poly.AddPoint(Point(0, 1, 0).Normalize()) | |
poly.AddPoint(Point(0, 0.5, 0.5).Normalize()) | |
poly.AddPoint(Point(0, 0, 1).Normalize()) | |
(before, after) = \ | |
poly.CutAtClosestPoint(Point(0, 0.3, 0.7).Normalize()) | |
self.assert_(2 == before.GetNumPoints()) | |
self.assert_(2 == after.GetNumPoints())
self.assertPointApproxEq( | |
Point(0, 0.707106781187, 0.707106781187), before.GetPoint(1)) | |
self.assertPointApproxEq( | |
Point(0, 0.393919298579, 0.919145030018), after.GetPoint(0)) | |
poly = Poly() | |
poly.AddPoint(Point.FromLatLng(40.527035999999995, -74.191265999999999)) | |
poly.AddPoint(Point.FromLatLng(40.526859999999999, -74.191140000000004)) | |
poly.AddPoint(Point.FromLatLng(40.524681000000001, -74.189579999999992)) | |
poly.AddPoint(Point.FromLatLng(40.523128999999997, -74.188467000000003)) | |
poly.AddPoint(Point.FromLatLng(40.523054999999999, -74.188676000000001)) | |
pattern = Poly() | |
pattern.AddPoint(Point.FromLatLng(40.52713, | |
-74.191146000000003)) | |
self.assertApproxEq(14.564268281551, pattern.GreedyPolyMatchDist(poly)) | |
def testMergePolys(self): | |
poly1 = Poly(name="Foo") | |
poly1.AddPoint(Point(0, 1, 0).Normalize()) | |
poly1.AddPoint(Point(0, 0.5, 0.5).Normalize()) | |
poly1.AddPoint(Point(0, 0, 1).Normalize()) | |
poly1.AddPoint(Point(1, 1, 1).Normalize()) | |
poly2 = Poly() | |
poly3 = Poly(name="Bar") | |
poly3.AddPoint(Point(1, 1, 1).Normalize()) | |
poly3.AddPoint(Point(2, 0.5, 0.5).Normalize()) | |
merged1 = Poly.MergePolys([poly1, poly2]) | |
self.assertPointsApproxEq(poly1.GetPoints(), merged1.GetPoints()) | |
self.assertEqual("Foo;", merged1.GetName()) | |
merged2 = Poly.MergePolys([poly2, poly3]) | |
self.assertPointsApproxEq(poly3.GetPoints(), merged2.GetPoints()) | |
self.assertEqual(";Bar", merged2.GetName()) | |
merged3 = Poly.MergePolys([poly1, poly2, poly3], merge_point_threshold=0) | |
mergedPoints = poly1.GetPoints()[:] | |
mergedPoints.append(poly3.GetPoint(-1)) | |
self.assertPointsApproxEq(mergedPoints, merged3.GetPoints()) | |
self.assertEqual("Foo;;Bar", merged3.GetName()) | |
merged4 = Poly.MergePolys([poly2]) | |
self.assertEqual("", merged4.GetName()) | |
self.assertEqual(0, merged4.GetNumPoints()) | |
# test merging two nearby points | |
newPoint = poly1.GetPoint(-1).Plus(Point(0.000001, 0, 0)).Normalize() | |
poly1.AddPoint(newPoint) | |
distance = poly1.GetPoint(-1).GetDistanceMeters(poly3.GetPoint(0)) | |
self.assertTrue(distance <= 10) | |
self.assertTrue(distance > 5) | |
merged5 = Poly.MergePolys([poly1, poly2, poly3], merge_point_threshold=10) | |
mergedPoints = poly1.GetPoints()[:] | |
mergedPoints.append(poly3.GetPoint(-1)) | |
self.assertPointsApproxEq(mergedPoints, merged5.GetPoints()) | |
self.assertEqual("Foo;;Bar", merged5.GetName()) | |
merged6 = Poly.MergePolys([poly1, poly2, poly3], merge_point_threshold=5) | |
mergedPoints = poly1.GetPoints()[:] | |
mergedPoints += poly3.GetPoints() | |
self.assertPointsApproxEq(mergedPoints, merged6.GetPoints()) | |
self.assertEqual("Foo;;Bar", merged6.GetName()) | |
def testReversed(self): | |
p1 = Point(1, 0, 0).Normalize() | |
p2 = Point(0, 0.5, 0.5).Normalize() | |
p3 = Point(0.3, 0.8, 0.5).Normalize() | |
poly1 = Poly([p1, p2, p3]) | |
self.assertPointsApproxEq([p3, p2, p1], poly1.Reversed().GetPoints()) | |
def testLengthMeters(self): | |
p1 = Point(1, 0, 0).Normalize() | |
p2 = Point(0, 0.5, 0.5).Normalize() | |
p3 = Point(0.3, 0.8, 0.5).Normalize() | |
poly0 = Poly([p1]) | |
poly1 = Poly([p1, p2]) | |
poly2 = Poly([p1, p2, p3]) | |
# A try/fail/except AssertionError pattern would swallow the failure,
# since self.fail() also raises AssertionError; assertRaises is safe here.
self.assertRaises(AssertionError, poly0.LengthMeters)
p1_p2 = p1.GetDistanceMeters(p2) | |
p2_p3 = p2.GetDistanceMeters(p3) | |
self.assertEqual(p1_p2, poly1.LengthMeters()) | |
self.assertEqual(p1_p2 + p2_p3, poly2.LengthMeters()) | |
self.assertEqual(p1_p2 + p2_p3, poly2.Reversed().LengthMeters()) | |
class TestCollection(ShapeLibTestBase): | |
def testPolyMatch(self): | |
poly = Poly() | |
poly.AddPoint(Point(0, 1, 0).Normalize()) | |
poly.AddPoint(Point(0, 0.5, 0.5).Normalize()) | |
poly.AddPoint(Point(0, 0, 1).Normalize()) | |
collection = PolyCollection() | |
collection.AddPoly(poly) | |
match = collection.FindMatchingPolys(Point(0, 1, 0), | |
Point(0, 0, 1)) | |
self.assert_(len(match) == 1 and match[0] == poly) | |
match = collection.FindMatchingPolys(Point(0, 1, 0), | |
Point(0, 1, 0)) | |
self.assert_(len(match) == 0) | |
poly = Poly() | |
poly.AddPoint(Point.FromLatLng(45.585212,-122.586136)) | |
poly.AddPoint(Point.FromLatLng(45.586654,-122.587595)) | |
collection = PolyCollection() | |
collection.AddPoly(poly) | |
match = collection.FindMatchingPolys( | |
Point.FromLatLng(45.585212,-122.586136), | |
Point.FromLatLng(45.586654,-122.587595)) | |
self.assert_(len(match) == 1 and match[0] == poly) | |
match = collection.FindMatchingPolys( | |
Point.FromLatLng(45.585219,-122.586136), | |
Point.FromLatLng(45.586654,-122.587595)) | |
self.assert_(len(match) == 1 and match[0] == poly) | |
self.assertApproxEq(0.0, poly.GreedyPolyMatchDist(poly)) | |
match = collection.FindMatchingPolys( | |
Point.FromLatLng(45.587212,-122.586136), | |
Point.FromLatLng(45.586654,-122.587595)) | |
self.assert_(len(match) == 0) | |
class TestGraph(ShapeLibTestBase): | |
def testReconstructPath(self): | |
p1 = Point(1, 0, 0).Normalize() | |
p2 = Point(0, 0.5, 0.5).Normalize() | |
p3 = Point(0.3, 0.8, 0.5).Normalize() | |
poly1 = Poly([p1, p2]) | |
poly2 = Poly([p3, p2]) | |
came_from = { | |
p2: (p1, poly1), | |
p3: (p2, poly2) | |
} | |
graph = PolyGraph() | |
reconstructed1 = graph._ReconstructPath(came_from, p1) | |
self.assertEqual(0, reconstructed1.GetNumPoints()) | |
reconstructed2 = graph._ReconstructPath(came_from, p2) | |
self.assertPointsApproxEq([p1, p2], reconstructed2.GetPoints()) | |
reconstructed3 = graph._ReconstructPath(came_from, p3) | |
self.assertPointsApproxEq([p1, p2, p3], reconstructed3.GetPoints()) | |
def testShortestPath(self): | |
p1 = Point(1, 0, 0).Normalize() | |
p2 = Point(0, 0.5, 0.5).Normalize() | |
p3 = Point(0.3, 0.8, 0.5).Normalize() | |
p4 = Point(0.7, 0.7, 0.5).Normalize() | |
poly1 = Poly([p1, p2, p3], "poly1") | |
poly2 = Poly([p4, p3], "poly2") | |
poly3 = Poly([p4, p1], "poly3") | |
graph = PolyGraph() | |
graph.AddPoly(poly1) | |
graph.AddPoly(poly2) | |
graph.AddPoly(poly3) | |
path = graph.ShortestPath(p1, p4) | |
self.assert_(path is not None) | |
self.assertPointsApproxEq([p1, p4], path.GetPoints()) | |
path = graph.ShortestPath(p1, p3) | |
self.assert_(path is not None) | |
self.assertPointsApproxEq([p1, p4, p3], path.GetPoints()) | |
path = graph.ShortestPath(p3, p1) | |
self.assert_(path is not None) | |
self.assertPointsApproxEq([p3, p4, p1], path.GetPoints()) | |
def testFindShortestMultiPointPath(self): | |
p1 = Point(1, 0, 0).Normalize() | |
p2 = Point(0.5, 0.5, 0).Normalize() | |
p3 = Point(0.5, 0.5, 0.1).Normalize() | |
p4 = Point(0, 1, 0).Normalize() | |
poly1 = Poly([p1, p2, p3], "poly1") | |
poly2 = Poly([p4, p3], "poly2") | |
poly3 = Poly([p4, p1], "poly3") | |
graph = PolyGraph() | |
graph.AddPoly(poly1) | |
graph.AddPoly(poly2) | |
graph.AddPoly(poly3) | |
path = graph.FindShortestMultiPointPath([p1, p3, p4]) | |
self.assert_(path is not None) | |
self.assertPointsApproxEq([p1, p2, p3, p4], path.GetPoints()) | |
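testReconstructPath above pins down the shape of PolyGraph._ReconstructPath: came_from maps each point to a (predecessor, connecting poly) pair, and a point with no predecessor yields an empty path. A standalone sketch of that walk, using plain lists in place of Poly objects (the function name is illustrative):

```python
def reconstruct_path(came_from, point):
    # Follow predecessor links back to the start. A point that never
    # appears as a key has no predecessor and yields an empty path,
    # matching testReconstructPath's expectation for p1.
    if point not in came_from:
        return []
    predecessor = came_from[point][0]
    path = reconstruct_path(came_from, predecessor) or [predecessor]
    return path + [point]
```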
if __name__ == '__main__': | |
unittest.main() | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
# Unit tests for the transitfeed module. | |
import datetime | |
from datetime import date | |
import dircache | |
import os.path | |
import re | |
import sys | |
import tempfile | |
import time | |
import transitfeed | |
import types | |
import unittest | |
import util | |
from util import RecordingProblemAccumulator | |
from StringIO import StringIO | |
import zipfile | |
import zlib | |
def DataPath(path): | |
here = os.path.dirname(__file__) | |
return os.path.join(here, 'data', path) | |
def GetDataPathContents(): | |
here = os.path.dirname(__file__) | |
return dircache.listdir(os.path.join(here, 'data')) | |
class ExceptionProblemReporterNoExpiration(transitfeed.ProblemReporter): | |
"""Ignores feed expiration problems. | |
Use TestFailureProblemReporter in new code because it fails more cleanly, is | |
easier to extend and does more thorough checking. | |
""" | |
def __init__(self): | |
accumulator = transitfeed.ExceptionProblemAccumulator(raise_warnings=True) | |
transitfeed.ProblemReporter.__init__(self, accumulator) | |
def ExpirationDate(self, expiration, context=None): | |
pass # We don't want to give errors about our test data | |
def GetTestFailureProblemReporter(test_case, | |
ignore_types=("ExpirationDate",)): | |
accumulator = TestFailureProblemAccumulator(test_case, ignore_types) | |
problems = transitfeed.ProblemReporter(accumulator) | |
return problems | |
class TestFailureProblemAccumulator(transitfeed.ProblemAccumulatorInterface): | |
"""Causes a test failure immediately on any problem.""" | |
def __init__(self, test_case, ignore_types=("ExpirationDate",)): | |
self.test_case = test_case | |
self._ignore_types = ignore_types or set() | |
def _Report(self, e): | |
# These should never crash | |
formatted_problem = e.FormatProblem() | |
formatted_context = e.FormatContext() | |
exception_class = e.__class__.__name__ | |
if exception_class in self._ignore_types: | |
return | |
self.test_case.fail( | |
"%s: %s\n%s" % (exception_class, formatted_problem, formatted_context)) | |
class UnrecognizedColumnRecorder(transitfeed.ProblemReporter): | |
"""Keeps track of unrecognized column errors.""" | |
def __init__(self, test_case): | |
self.accumulator = RecordingProblemAccumulator(test_case, | |
ignore_types=("ExpirationDate",)) | |
self.column_errors = [] | |
def UnrecognizedColumn(self, file_name, column_name, context=None): | |
self.column_errors.append((file_name, column_name)) | |
class RedirectStdOutTestCaseBase(util.TestCase): | |
"""Save stdout to the StringIO buffer self.this_stdout""" | |
def setUp(self): | |
self.saved_stdout = sys.stdout | |
self.this_stdout = StringIO() | |
sys.stdout = self.this_stdout | |
def tearDown(self): | |
sys.stdout = self.saved_stdout | |
self.this_stdout.close() | |
# ensure that there are no exceptions when attempting to load | |
# (so that the validator won't crash) | |
class NoExceptionTestCase(RedirectStdOutTestCaseBase): | |
def runTest(self): | |
for feed in GetDataPathContents(): | |
loader = transitfeed.Loader(DataPath(feed), | |
problems=transitfeed.ProblemReporter(), | |
extra_validation=True) | |
schedule = loader.Load() | |
schedule.Validate() | |
class EndOfLineCheckerTestCase(util.TestCase): | |
def setUp(self): | |
self.accumulator = RecordingProblemAccumulator(self) | |
self.problems = transitfeed.ProblemReporter(self.accumulator) | |
def RunEndOfLineChecker(self, end_of_line_checker): | |
# Iterating with a for loop calls end_of_line_checker.next() until
# StopIteration is raised. EndOfLineChecker performs its final check for a
# mix of CR LF and LF line ends just before raising StopIteration.
for line in end_of_line_checker: | |
pass | |
def testInvalidLineEnd(self): | |
f = transitfeed.EndOfLineChecker(StringIO("line1\r\r\nline2"), | |
"<StringIO>", | |
self.problems) | |
self.RunEndOfLineChecker(f) | |
e = self.accumulator.PopException("InvalidLineEnd") | |
self.assertEqual(e.file_name, "<StringIO>") | |
self.assertEqual(e.row_num, 1) | |
self.assertEqual(e.bad_line_end, r"\r\r\n") | |
self.accumulator.AssertNoMoreExceptions() | |
def testInvalidLineEndToo(self): | |
f = transitfeed.EndOfLineChecker( | |
StringIO("line1\nline2\r\nline3\r\r\r\n"), | |
"<StringIO>", self.problems) | |
self.RunEndOfLineChecker(f) | |
e = self.accumulator.PopException("InvalidLineEnd") | |
self.assertEqual(e.file_name, "<StringIO>") | |
self.assertEqual(e.row_num, 3) | |
self.assertEqual(e.bad_line_end, r"\r\r\r\n") | |
e = self.accumulator.PopException("OtherProblem") | |
self.assertEqual(e.file_name, "<StringIO>") | |
self.assertTrue(e.description.find("consistent line end") != -1) | |
self.accumulator.AssertNoMoreExceptions() | |
def testEmbeddedCr(self): | |
f = transitfeed.EndOfLineChecker( | |
StringIO("line1\rline1b"), | |
"<StringIO>", self.problems) | |
self.RunEndOfLineChecker(f) | |
e = self.accumulator.PopException("OtherProblem") | |
self.assertEqual(e.file_name, "<StringIO>") | |
self.assertEqual(e.row_num, 1) | |
self.assertEqual(e.FormatProblem(), | |
"Line contains ASCII Carriage Return 0x0D, \\r") | |
self.accumulator.AssertNoMoreExceptions() | |
def testEmbeddedUtf8NextLine(self): | |
f = transitfeed.EndOfLineChecker( | |
StringIO("line1b\xc2\x85"), | |
"<StringIO>", self.problems) | |
self.RunEndOfLineChecker(f) | |
e = self.accumulator.PopException("OtherProblem") | |
self.assertEqual(e.file_name, "<StringIO>") | |
self.assertEqual(e.row_num, 1) | |
self.assertEqual(e.FormatProblem(), | |
"Line contains Unicode NEXT LINE SEPARATOR U+0085") | |
self.accumulator.AssertNoMoreExceptions() | |
def testEndOfLineMix(self): | |
f = transitfeed.EndOfLineChecker( | |
StringIO("line1\nline2\r\nline3\nline4"), | |
"<StringIO>", self.problems) | |
self.RunEndOfLineChecker(f) | |
e = self.accumulator.PopException("OtherProblem") | |
self.assertEqual(e.file_name, "<StringIO>") | |
self.assertEqual(e.FormatProblem(), | |
"Found 1 CR LF \"\\r\\n\" line end (line 2) and " | |
"2 LF \"\\n\" line ends (lines 1, 3). A file must use a " | |
"consistent line end.") | |
self.accumulator.AssertNoMoreExceptions() | |
def testEndOfLineManyMix(self): | |
f = transitfeed.EndOfLineChecker( | |
StringIO("1\n2\n3\n4\n5\n6\n7\r\n8\r\n9\r\n10\r\n11\r\n"), | |
"<StringIO>", self.problems) | |
self.RunEndOfLineChecker(f) | |
e = self.accumulator.PopException("OtherProblem") | |
self.assertEqual(e.file_name, "<StringIO>") | |
self.assertEqual(e.FormatProblem(), | |
"Found 5 CR LF \"\\r\\n\" line ends (lines 7, 8, 9, 10, " | |
"11) and 6 LF \"\\n\" line ends (lines 1, 2, 3, 4, 5, " | |
"...). A file must use a consistent line end.") | |
self.accumulator.AssertNoMoreExceptions() | |
def testLoad(self): | |
loader = transitfeed.Loader( | |
DataPath("bad_eol.zip"), problems=self.problems, extra_validation=True) | |
loader.Load() | |
e = self.accumulator.PopException("OtherProblem") | |
self.assertEqual(e.file_name, "calendar.txt") | |
self.assertTrue(re.search( | |
r"Found 1 CR LF.* \(line 2\) and 2 LF .*\(lines 1, 3\)", | |
e.FormatProblem())) | |
e = self.accumulator.PopException("InvalidLineEnd") | |
self.assertEqual(e.file_name, "routes.txt") | |
self.assertEqual(e.row_num, 5) | |
self.assertTrue(e.FormatProblem().find(r"\r\r\n") != -1) | |
e = self.accumulator.PopException("OtherProblem") | |
self.assertEqual(e.file_name, "trips.txt") | |
self.assertEqual(e.row_num, 1) | |
self.assertTrue(re.search( | |
r"contains ASCII Form Feed", | |
e.FormatProblem())) | |
# TODO(Tom): avoid this duplicate error for the same issue | |
e = self.accumulator.PopException("CsvSyntax") | |
self.assertEqual(e.row_num, 1) | |
self.assertTrue(re.search( | |
r"header row should not contain any space char", | |
e.FormatProblem())) | |
self.accumulator.AssertNoMoreExceptions() | |
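The mixed-line-end messages asserted in testEndOfLineMix and testEndOfLineManyMix come down to tallying CR LF versus bare LF terminators; a minimal sketch of that count (the reporting format itself belongs to EndOfLineChecker, and this helper name is illustrative):

```python
def count_line_ends(text):
    # Count CR LF pairs first; every remaining "\n" is a bare LF line end.
    crlf = text.count('\r\n')
    lf = text.count('\n') - crlf
    return crlf, lf
```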
class LoadTestCase(util.TestCase): | |
def setUp(self): | |
self.accumulator = RecordingProblemAccumulator(self, ("ExpirationDate",)) | |
self.problems = transitfeed.ProblemReporter(self.accumulator) | |
def Load(self, feed_name): | |
loader = transitfeed.Loader( | |
DataPath(feed_name), problems=self.problems, extra_validation=True) | |
loader.Load() | |
def ExpectInvalidValue(self, feed_name, column_name): | |
self.Load(feed_name) | |
self.accumulator.PopInvalidValue(column_name) | |
self.accumulator.AssertNoMoreExceptions() | |
def ExpectMissingFile(self, feed_name, file_name): | |
self.Load(feed_name) | |
e = self.accumulator.PopException("MissingFile") | |
self.assertEqual(file_name, e.file_name) | |
# Don't call AssertNoMoreExceptions() because a missing file causes | |
# many errors. | |
class LoadFromZipTestCase(util.TestCase): | |
def runTest(self): | |
loader = transitfeed.Loader( | |
DataPath('good_feed.zip'), | |
problems = GetTestFailureProblemReporter(self), | |
extra_validation = True) | |
loader.Load() | |
# now try using Schedule.Load | |
schedule = transitfeed.Schedule( | |
problem_reporter=ExceptionProblemReporterNoExpiration()) | |
schedule.Load(DataPath('good_feed.zip'), extra_validation=True) | |
class LoadAndRewriteFromZipTestCase(util.TestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule( | |
problem_reporter=ExceptionProblemReporterNoExpiration()) | |
schedule.Load(DataPath('good_feed.zip'), extra_validation=True) | |
# Finally see if write crashes | |
schedule.WriteGoogleTransitFeed(tempfile.TemporaryFile()) | |
class LoadFromDirectoryTestCase(util.TestCase): | |
def runTest(self): | |
loader = transitfeed.Loader( | |
DataPath('good_feed'), | |
problems = GetTestFailureProblemReporter(self), | |
extra_validation = True) | |
loader.Load() | |
class LoadUnknownFeedTestCase(util.TestCase): | |
def runTest(self): | |
feed_name = DataPath('unknown_feed') | |
loader = transitfeed.Loader( | |
feed_name, | |
problems = ExceptionProblemReporterNoExpiration(), | |
extra_validation = True) | |
try: | |
loader.Load() | |
self.fail('FeedNotFound exception expected') | |
except transitfeed.FeedNotFound, e: | |
self.assertEqual(feed_name, e.feed_name) | |
class LoadUnknownFormatTestCase(util.TestCase): | |
def runTest(self): | |
feed_name = DataPath('unknown_format.zip') | |
loader = transitfeed.Loader( | |
feed_name, | |
problems = ExceptionProblemReporterNoExpiration(), | |
extra_validation = True) | |
try: | |
loader.Load() | |
self.fail('UnknownFormat exception expected') | |
except transitfeed.UnknownFormat, e: | |
self.assertEqual(feed_name, e.feed_name) | |
class LoadUnrecognizedColumnsTestCase(util.TestCase): | |
def runTest(self): | |
problems = UnrecognizedColumnRecorder(self) | |
loader = transitfeed.Loader(DataPath('unrecognized_columns'), | |
problems=problems) | |
loader.Load() | |
found_errors = set(problems.column_errors) | |
expected_errors = set([ | |
('agency.txt', 'agency_lange'), | |
('stops.txt', 'stop_uri'), | |
('routes.txt', 'Route_Text_Color'), | |
('calendar.txt', 'leap_day'), | |
('calendar_dates.txt', 'leap_day'), | |
('trips.txt', 'sharpe_id'), | |
('stop_times.txt', 'shapedisttraveled'), | |
('stop_times.txt', 'drop_off_time'), | |
('fare_attributes.txt', 'transfer_time'), | |
('fare_rules.txt', 'source_id'), | |
('frequencies.txt', 'superfluous'), | |
('transfers.txt', 'to_stop') | |
]) | |
# Now make sure we got the unrecognized column errors that we expected. | |
not_expected = found_errors.difference(expected_errors) | |
self.failIf(not_expected, 'unexpected errors: %s' % str(not_expected)) | |
not_found = expected_errors.difference(found_errors) | |
self.failIf(not_found, 'expected but not found: %s' % str(not_found)) | |
class LoadExtraCellValidationTestCase(LoadTestCase): | |
"""Check that the validation detects too many cells in a row.""" | |
def runTest(self): | |
self.Load('extra_row_cells') | |
e = self.accumulator.PopException("OtherProblem") | |
self.assertEquals("routes.txt", e.file_name) | |
self.assertEquals(4, e.row_num) | |
self.accumulator.AssertNoMoreExceptions() | |
class LoadMissingCellValidationTestCase(LoadTestCase): | |
"""Check that the validation detects missing cells in a row.""" | |
def runTest(self): | |
self.Load('missing_row_cells') | |
e = self.accumulator.PopException("OtherProblem") | |
self.assertEquals("routes.txt", e.file_name) | |
self.assertEquals(4, e.row_num) | |
self.accumulator.AssertNoMoreExceptions() | |
class LoadUnknownFileTestCase(util.TestCase): | |
"""Check that the validation detects unknown files.""" | |
def runTest(self): | |
feed_name = DataPath('unknown_file') | |
self.accumulator = RecordingProblemAccumulator(self, ("ExpirationDate",)) | |
self.problems = transitfeed.ProblemReporter(self.accumulator) | |
loader = transitfeed.Loader( | |
feed_name, | |
problems = self.problems, | |
extra_validation = True) | |
loader.Load() | |
e = self.accumulator.PopException('UnknownFile') | |
self.assertEqual('frecuencias.txt', e.file_name) | |
self.accumulator.AssertNoMoreExceptions() | |
class LoadUTF8BOMTestCase(util.TestCase): | |
def runTest(self): | |
loader = transitfeed.Loader( | |
DataPath('utf8bom'), | |
problems = GetTestFailureProblemReporter(self), | |
extra_validation = True) | |
loader.Load() | |
class LoadUTF16TestCase(util.TestCase): | |
def runTest(self): | |
# utf16 generated by `recode utf8..utf16 *' | |
accumulator = transitfeed.ExceptionProblemAccumulator() | |
problem_reporter = transitfeed.ProblemReporter(accumulator) | |
loader = transitfeed.Loader( | |
DataPath('utf16'), | |
problems = problem_reporter, | |
extra_validation = True) | |
try: | |
loader.Load() | |
# TODO: make sure processing proceeds beyond the problem | |
self.fail('FileFormat exception expected') | |
except transitfeed.FileFormat, e: | |
# make sure these don't raise an exception | |
self.assertTrue(re.search(r'encoded in utf-16', e.FormatProblem())) | |
e.FormatContext() | |
class LoadNullTestCase(util.TestCase): | |
def runTest(self): | |
accumulator = transitfeed.ExceptionProblemAccumulator() | |
problem_reporter = transitfeed.ProblemReporter(accumulator) | |
loader = transitfeed.Loader( | |
DataPath('contains_null'), | |
problems = problem_reporter, | |
extra_validation = True) | |
try: | |
loader.Load() | |
self.fail('FileFormat exception expected') | |
except transitfeed.FileFormat, e: | |
self.assertTrue(re.search(r'contains a null', e.FormatProblem())) | |
# make sure these don't raise an exception | |
e.FormatContext() | |
class ProblemReporterTestCase(RedirectStdOutTestCaseBase): | |
# Unittest for problem reporter | |
def testContextWithBadUnicodeProblem(self): | |
pr = transitfeed.ProblemReporter() | |
# Context has valid unicode values | |
pr.SetFileContext('filename.foo', 23, | |
[u'Andr\202', u'Person \uc720 foo', None], | |
[u'1\202', u'2\202', u'3\202']) | |
pr.OtherProblem('test string') | |
pr.OtherProblem(u'\xff\xfe\x80\x88') | |
# Invalid ascii and utf-8. encode('utf-8') and decode('utf-8') will fail | |
# for this value | |
pr.OtherProblem('\xff\xfe\x80\x88') | |
self.assertTrue(re.search(r"test string", self.this_stdout.getvalue())) | |
self.assertTrue(re.search(r"filename.foo:23", self.this_stdout.getvalue())) | |
def testNoContextWithBadUnicode(self): | |
pr = transitfeed.ProblemReporter() | |
pr.OtherProblem('test string') | |
pr.OtherProblem(u'\xff\xfe\x80\x88') | |
# Invalid ascii and utf-8. encode('utf-8') and decode('utf-8') will fail | |
# for this value | |
pr.OtherProblem('\xff\xfe\x80\x88') | |
self.assertTrue(re.search(r"test string", self.this_stdout.getvalue())) | |
def testBadUnicodeContext(self): | |
pr = transitfeed.ProblemReporter() | |
pr.SetFileContext('filename.foo', 23, | |
[u'Andr\202', 'Person \xff\xfe\x80\x88 foo', None], | |
[u'1\202', u'2\202', u'3\202']) | |
pr.OtherProblem("help, my context isn't utf-8!") | |
self.assertTrue(re.search(r"help, my context", self.this_stdout.getvalue())) | |
self.assertTrue(re.search(r"filename.foo:23", self.this_stdout.getvalue())) | |
def testLongWord(self): | |
# Make sure LineWrap doesn't puke | |
pr = transitfeed.ProblemReporter() | |
pr.OtherProblem('1111untheontuhoenuthoentuhntoehuontehuntoehuntoehunto' | |
'2222oheuntheounthoeunthoeunthoeuntheontuheontuhoue') | |
self.assertTrue(re.search(r"1111.+2222", self.this_stdout.getvalue())) | |
class BadProblemReporterTestCase(RedirectStdOutTestCaseBase): | |
"""Make sure ProblemReporter doesn't crash when given bad unicode data and
still reports at least one error."""
# tom.brown.code-utf8_weaknesses fixed a bug with problem reporter and bad | |
# utf-8 strings | |
def runTest(self): | |
loader = transitfeed.Loader( | |
DataPath('bad_utf8'), | |
problems = transitfeed.ProblemReporter(), | |
extra_validation = True) | |
loader.Load() | |
# raises exception if not found | |
self.this_stdout.getvalue().index('Invalid value') | |
class BadUtf8TestCase(LoadTestCase): | |
def runTest(self): | |
self.Load('bad_utf8') | |
self.accumulator.PopException("UnrecognizedColumn") | |
self.accumulator.PopInvalidValue("agency_name", "agency.txt") | |
self.accumulator.PopInvalidValue("stop_name", "stops.txt") | |
self.accumulator.PopInvalidValue("route_short_name", "routes.txt") | |
self.accumulator.PopInvalidValue("route_long_name", "routes.txt") | |
self.accumulator.PopInvalidValue("trip_headsign", "trips.txt") | |
self.accumulator.PopInvalidValue("stop_headsign", "stop_times.txt") | |
self.accumulator.AssertNoMoreExceptions() | |
class LoadMissingAgencyTestCase(LoadTestCase): | |
def runTest(self): | |
self.ExpectMissingFile('missing_agency', 'agency.txt') | |
class LoadMissingStopsTestCase(LoadTestCase): | |
def runTest(self): | |
self.ExpectMissingFile('missing_stops', 'stops.txt') | |
class LoadMissingRoutesTestCase(LoadTestCase): | |
def runTest(self): | |
self.ExpectMissingFile('missing_routes', 'routes.txt') | |
class LoadMissingTripsTestCase(LoadTestCase): | |
def runTest(self): | |
self.ExpectMissingFile('missing_trips', 'trips.txt') | |
class LoadMissingStopTimesTestCase(LoadTestCase): | |
def runTest(self): | |
self.ExpectMissingFile('missing_stop_times', 'stop_times.txt') | |
class LoadMissingCalendarTestCase(LoadTestCase): | |
def runTest(self): | |
self.ExpectMissingFile('missing_calendar', 'calendar.txt') | |
class EmptyFileTestCase(util.TestCase): | |
def runTest(self): | |
loader = transitfeed.Loader( | |
DataPath('empty_file'), | |
problems=ExceptionProblemReporterNoExpiration(), | |
extra_validation=True) | |
try: | |
loader.Load() | |
self.fail('EmptyFile exception expected') | |
except transitfeed.EmptyFile, e: | |
self.assertEqual('agency.txt', e.file_name) | |
class MissingColumnTestCase(util.TestCase): | |
def runTest(self): | |
loader = transitfeed.Loader( | |
DataPath('missing_column'), | |
problems=ExceptionProblemReporterNoExpiration(), | |
extra_validation=True) | |
try: | |
loader.Load() | |
self.fail('MissingColumn exception expected') | |
except transitfeed.MissingColumn, e: | |
self.assertEqual('agency.txt', e.file_name) | |
self.assertEqual('agency_name', e.column_name) | |
class ZeroBasedStopSequenceTestCase(LoadTestCase): | |
def runTest(self): | |
self.ExpectInvalidValue('negative_stop_sequence', 'stop_sequence') | |
class DuplicateStopTestCase(util.TestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule( | |
problem_reporter=ExceptionProblemReporterNoExpiration()) | |
try: | |
schedule.Load(DataPath('duplicate_stop'), extra_validation=True) | |
self.fail('OtherProblem exception expected') | |
except transitfeed.OtherProblem: | |
pass | |
class DuplicateStopSequenceTestCase(util.TestCase): | |
def runTest(self): | |
accumulator = RecordingProblemAccumulator(self, ("ExpirationDate", | |
"NoServiceExceptions")) | |
problems = transitfeed.ProblemReporter(accumulator) | |
schedule = transitfeed.Schedule(problem_reporter=problems) | |
schedule.Load(DataPath('duplicate_stop_sequence'), extra_validation=True) | |
e = accumulator.PopException('InvalidValue') | |
self.assertEqual('stop_sequence', e.column_name) | |
accumulator.AssertNoMoreExceptions() | |
class MissingEndpointTimesTestCase(util.TestCase): | |
def runTest(self): | |
problems = ExceptionProblemReporterNoExpiration() | |
schedule = transitfeed.Schedule(problem_reporter=problems) | |
try: | |
schedule.Load(DataPath('missing_endpoint_times'), extra_validation=True) | |
self.fail('InvalidValue exception expected') | |
except transitfeed.InvalidValue, e: | |
self.assertEqual('departure_time', e.column_name) | |
self.assertEqual('', e.value) | |
class DuplicateScheduleIDTestCase(util.TestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule( | |
problem_reporter=ExceptionProblemReporterNoExpiration()) | |
try: | |
schedule.Load(DataPath('duplicate_schedule_id'), extra_validation=True) | |
self.fail('DuplicateID exception expected') | |
except transitfeed.DuplicateID: | |
pass | |
class OverlappingBlockSchedule(transitfeed.Schedule): | |
"""Special Schedule subclass that counts the number of calls to | |
GetServicePeriod() so we can verify service period overlap calculation | |
caching""" | |
_get_service_period_call_count = 0 | |
def GetServicePeriod(self, service_id): | |
self._get_service_period_call_count += 1 | |
return transitfeed.Schedule.GetServicePeriod(self, service_id) | |
def GetServicePeriodCallCount(self): | |
return self._get_service_period_call_count | |
class OverlappingBlockTripsTestCase(util.TestCase): | |
"""Builds a simple schedule for testing of overlapping block trips""" | |
def setUp(self): | |
self.accumulator = RecordingProblemAccumulator( | |
self, ("ExpirationDate", "NoServiceExceptions")) | |
self.problems = transitfeed.ProblemReporter(self.accumulator) | |
schedule = OverlappingBlockSchedule(problem_reporter=self.problems) | |
schedule.AddAgency("Demo Transit Authority", "http://dta.org", | |
"America/Los_Angeles") | |
sp1 = transitfeed.ServicePeriod("SID1") | |
sp1.SetWeekdayService(True) | |
sp1.SetStartDate("20070605") | |
sp1.SetEndDate("20080605") | |
schedule.AddServicePeriodObject(sp1) | |
sp2 = transitfeed.ServicePeriod("SID2") | |
sp2.SetDayOfWeekHasService(0) | |
sp2.SetDayOfWeekHasService(2) | |
sp2.SetDayOfWeekHasService(4) | |
sp2.SetStartDate("20070605") | |
sp2.SetEndDate("20080605") | |
schedule.AddServicePeriodObject(sp2) | |
sp3 = transitfeed.ServicePeriod("SID3") | |
sp3.SetWeekendService(True) | |
sp3.SetStartDate("20070605") | |
sp3.SetEndDate("20080605") | |
schedule.AddServicePeriodObject(sp3) | |
self.stop1 = schedule.AddStop(lng=-116.75167, | |
lat=36.915682, | |
name="Stagecoach Hotel & Casino", | |
stop_id="S1") | |
self.stop2 = schedule.AddStop(lng=-116.76218, | |
lat=36.905697, | |
name="E Main St / S Irving St", | |
stop_id="S2") | |
self.route = schedule.AddRoute("", "City", "Bus", route_id="CITY") | |
self.schedule = schedule | |
self.sp1 = sp1 | |
self.sp2 = sp2 | |
self.sp3 = sp3 | |
def testNoOverlap(self): | |
schedule, route, sp1 = self.schedule, self.route, self.sp1 | |
trip1 = route.AddTrip(schedule, service_period=sp1, trip_id="CITY1") | |
trip1.block_id = "BLOCK" | |
trip1.AddStopTime(self.stop1, stop_time="6:00:00") | |
trip1.AddStopTime(self.stop2, stop_time="6:30:00") | |
trip2 = route.AddTrip(schedule, service_period=sp1, trip_id="CITY2") | |
trip2.block_id = "BLOCK" | |
trip2.AddStopTime(self.stop2, stop_time="6:30:00") | |
trip2.AddStopTime(self.stop1, stop_time="7:00:00") | |
schedule.Validate(self.problems) | |
self.accumulator.AssertNoMoreExceptions() | |
def testOverlapSameServicePeriod(self): | |
schedule, route, sp1 = self.schedule, self.route, self.sp1 | |
trip1 = route.AddTrip(schedule, service_period=sp1, trip_id="CITY1") | |
trip1.block_id = "BLOCK" | |
trip1.AddStopTime(self.stop1, stop_time="6:00:00") | |
trip1.AddStopTime(self.stop2, stop_time="6:30:00") | |
trip2 = route.AddTrip(schedule, service_period=sp1, trip_id="CITY2") | |
trip2.block_id = "BLOCK" | |
trip2.AddStopTime(self.stop2, stop_time="6:20:00") | |
trip2.AddStopTime(self.stop1, stop_time="6:50:00") | |
schedule.Validate(self.problems) | |
e = self.accumulator.PopException('OverlappingTripsInSameBlock') | |
self.assertEqual(e.trip_id1, 'CITY1') | |
self.assertEqual(e.trip_id2, 'CITY2') | |
self.assertEqual(e.block_id, 'BLOCK') | |
self.accumulator.AssertNoMoreExceptions() | |
def testOverlapDifferentServicePeriods(self): | |
schedule, route, sp1, sp2 = self.schedule, self.route, self.sp1, self.sp2 | |
trip1 = route.AddTrip(schedule, service_period=sp1, trip_id="CITY1") | |
trip1.block_id = "BLOCK" | |
trip1.AddStopTime(self.stop1, stop_time="6:00:00") | |
trip1.AddStopTime(self.stop2, stop_time="6:30:00") | |
trip2 = route.AddTrip(schedule, service_period=sp2, trip_id="CITY2") | |
trip2.block_id = "BLOCK" | |
trip2.AddStopTime(self.stop2, stop_time="6:20:00") | |
trip2.AddStopTime(self.stop1, stop_time="6:50:00") | |
trip3 = route.AddTrip(schedule, service_period=sp1, trip_id="CITY3") | |
trip3.block_id = "BLOCK" | |
trip3.AddStopTime(self.stop1, stop_time="7:00:00") | |
trip3.AddStopTime(self.stop2, stop_time="7:30:00") | |
trip4 = route.AddTrip(schedule, service_period=sp2, trip_id="CITY4") | |
trip4.block_id = "BLOCK" | |
trip4.AddStopTime(self.stop2, stop_time="7:20:00") | |
trip4.AddStopTime(self.stop1, stop_time="7:50:00") | |
schedule.Validate(self.problems) | |
e = self.accumulator.PopException('OverlappingTripsInSameBlock') | |
self.assertEqual(e.trip_id1, 'CITY1') | |
self.assertEqual(e.trip_id2, 'CITY2') | |
self.assertEqual(e.block_id, 'BLOCK') | |
e = self.accumulator.PopException('OverlappingTripsInSameBlock') | |
self.assertEqual(e.trip_id1, 'CITY3') | |
self.assertEqual(e.trip_id2, 'CITY4') | |
self.assertEqual(e.block_id, 'BLOCK') | |
self.accumulator.AssertNoMoreExceptions() | |
# If service period overlap calculation caching is working correctly, | |
# we expect only two calls to GetServicePeriod(), one each for sp1 and | |
# sp2, as opposed to four calls total for the four overlapping trips. | |
self.assertEquals(2, schedule.GetServicePeriodCallCount()) | |
def testNoOverlapDifferentServicePeriods(self): | |
schedule, route, sp1, sp3 = self.schedule, self.route, self.sp1, self.sp3 | |
trip1 = route.AddTrip(schedule, service_period=sp1, trip_id="CITY1") | |
trip1.block_id = "BLOCK" | |
trip1.AddStopTime(self.stop1, stop_time="6:00:00") | |
trip1.AddStopTime(self.stop2, stop_time="6:30:00") | |
trip2 = route.AddTrip(schedule, service_period=sp3, trip_id="CITY2") | |
trip2.block_id = "BLOCK" | |
trip2.AddStopTime(self.stop2, stop_time="6:20:00") | |
trip2.AddStopTime(self.stop1, stop_time="6:50:00") | |
schedule.Validate(self.problems) | |
self.accumulator.AssertNoMoreExceptions() | |
class ColorLuminanceTestCase(util.TestCase): | |
def runTest(self): | |
self.assertEqual(transitfeed.ColorLuminance('000000'), 0, | |
"ColorLuminance('000000') should be zero") | |
self.assertEqual(transitfeed.ColorLuminance('FFFFFF'), 255, | |
"ColorLuminance('FFFFFF') should be 255") | |
RGBmsg = ("ColorLuminance('RRGGBB') should be " | |
"0.299*<Red> + 0.587*<Green> + 0.114*<Blue>") | |
decimal_places_tested = 8 | |
self.assertAlmostEqual(transitfeed.ColorLuminance('640000'), 29.9, | |
decimal_places_tested, RGBmsg) | |
self.assertAlmostEqual(transitfeed.ColorLuminance('006400'), 58.7, | |
decimal_places_tested, RGBmsg) | |
self.assertAlmostEqual(transitfeed.ColorLuminance('000064'), 11.4, | |
decimal_places_tested, RGBmsg) | |
self.assertAlmostEqual(transitfeed.ColorLuminance('1171B3'), | |
0.299*17 + 0.587*113 + 0.114*179, | |
decimal_places_tested, RGBmsg) | |
INVALID_VALUE = Exception() | |
class ValidationTestCase(util.TestCase): | |
def setUp(self): | |
self.accumulator = RecordingProblemAccumulator( | |
self, ("ExpirationDate", "NoServiceExceptions")) | |
self.problems = transitfeed.ProblemReporter(self.accumulator) | |
def tearDown(self): | |
self.accumulator.TearDownAssertNoMoreExceptions() | |
def ExpectNoProblems(self, object): | |
self.accumulator.AssertNoMoreExceptions() | |
object.Validate(self.problems) | |
self.accumulator.AssertNoMoreExceptions() | |
# TODO: Get rid of Expect*Closure methods. With the | |
# RecordingProblemAccumulator it is now possible to replace | |
# self.ExpectMissingValueInClosure(lambda: o.method(...), foo) | |
# with | |
# o.method(...) | |
# self.ExpectMissingValueInClosure(foo) | |
# because problems don't raise an exception. This has the advantage of | |
# making it easy and clear to test the return value of o.method(...) and | |
# easier to test for a sequence of problems caused by one call. | |
def ExpectMissingValue(self, object, column_name): | |
self.ExpectMissingValueInClosure(column_name, | |
lambda: object.Validate(self.problems)) | |
def ExpectMissingValueInClosure(self, column_name, c): | |
self.accumulator.AssertNoMoreExceptions() | |
rv = c() | |
e = self.accumulator.PopException('MissingValue') | |
self.assertEqual(column_name, e.column_name) | |
# these should not throw any exceptions | |
e.FormatProblem() | |
e.FormatContext() | |
self.accumulator.AssertNoMoreExceptions() | |
def ExpectInvalidValue(self, object, column_name, value=INVALID_VALUE): | |
self.ExpectInvalidValueInClosure(column_name, value, | |
lambda: object.Validate(self.problems)) | |
def ExpectInvalidValueInClosure(self, column_name, value=INVALID_VALUE, | |
c=None): | |
self.accumulator.AssertNoMoreExceptions() | |
rv = c() | |
e = self.accumulator.PopException('InvalidValue') | |
self.assertEqual(column_name, e.column_name) | |
if value != INVALID_VALUE: | |
self.assertEqual(value, e.value) | |
# these should not throw any exceptions | |
e.FormatProblem() | |
e.FormatContext() | |
self.accumulator.AssertNoMoreExceptions() | |
def ExpectInvalidFloatValue(self, object, value): | |
self.ExpectInvalidFloatValueInClosure(value, | |
lambda: object.Validate(self.problems)) | |
def ExpectInvalidFloatValueInClosure(self, value, | |
c=None): | |
self.accumulator.AssertNoMoreExceptions() | |
rv = c() | |
e = self.accumulator.PopException('InvalidFloatValue') | |
self.assertEqual(value, e.value) | |
# these should not throw any exceptions | |
e.FormatProblem() | |
e.FormatContext() | |
self.accumulator.AssertNoMoreExceptions() | |
def ExpectOtherProblem(self, object): | |
self.ExpectOtherProblemInClosure(lambda: object.Validate(self.problems)) | |
def ExpectOtherProblemInClosure(self, c): | |
self.accumulator.AssertNoMoreExceptions() | |
rv = c() | |
e = self.accumulator.PopException('OtherProblem') | |
# these should not throw any exceptions | |
e.FormatProblem() | |
e.FormatContext() | |
self.accumulator.AssertNoMoreExceptions() | |
def SimpleSchedule(self): | |
"""Return a minimum schedule that will load without warnings.""" | |
schedule = transitfeed.Schedule(problem_reporter=self.problems) | |
schedule.AddAgency("Fly Agency", "http://iflyagency.com", | |
"America/Los_Angeles") | |
service_period = transitfeed.ServicePeriod("WEEK") | |
service_period.SetWeekdayService(True) | |
service_period.SetStartDate("20091203") | |
service_period.SetEndDate("20111203") | |
service_period.SetDateHasService("20091203") | |
schedule.AddServicePeriodObject(service_period) | |
stop1 = schedule.AddStop(lng=1.00, lat=48.2, name="Stop 1", stop_id="stop1") | |
stop2 = schedule.AddStop(lng=1.01, lat=48.2, name="Stop 2", stop_id="stop2") | |
stop3 = schedule.AddStop(lng=1.03, lat=48.2, name="Stop 3", stop_id="stop3") | |
route = schedule.AddRoute("54C", "", "Bus", route_id="054C") | |
trip = route.AddTrip(schedule, "bus trip", trip_id="CITY1") | |
trip.AddStopTime(stop1, stop_time="12:00:00") | |
trip.AddStopTime(stop2, stop_time="12:00:45") | |
trip.AddStopTime(stop3, stop_time="12:02:30") | |
return schedule | |
class AgencyValidationTestCase(ValidationTestCase): | |
def runTest(self): | |
# success case | |
agency = transitfeed.Agency(name='Test Agency', url='http://example.com', | |
timezone='America/Los_Angeles', id='TA', | |
lang='xh') | |
self.ExpectNoProblems(agency) | |
# bad agency | |
agency = transitfeed.Agency(name=' ', url='http://example.com', | |
timezone='America/Los_Angeles', id='TA') | |
self.ExpectMissingValue(agency, 'agency_name') | |
# missing url | |
agency = transitfeed.Agency(name='Test Agency', | |
timezone='America/Los_Angeles', id='TA') | |
self.ExpectMissingValue(agency, 'agency_url') | |
# bad url | |
agency = transitfeed.Agency(name='Test Agency', url='www.example.com', | |
timezone='America/Los_Angeles', id='TA') | |
self.ExpectInvalidValue(agency, 'agency_url') | |
# bad time zone | |
agency = transitfeed.Agency(name='Test Agency', url='http://example.com', | |
timezone='America/Alviso', id='TA') | |
agency.Validate(self.problems) | |
e = self.accumulator.PopInvalidValue('agency_timezone') | |
self.assertMatchesRegex('"America/Alviso" is not a common timezone', | |
e.FormatProblem()) | |
self.accumulator.AssertNoMoreExceptions() | |
# bad language code | |
agency = transitfeed.Agency(name='Test Agency', url='http://example.com', | |
timezone='America/Los_Angeles', id='TA', | |
lang='English') | |
self.ExpectInvalidValue(agency, 'agency_lang') | |
# bad 2-letter language code | |
agency = transitfeed.Agency(name='Test Agency', url='http://example.com', | |
timezone='America/Los_Angeles', id='TA', | |
lang='xx') | |
self.ExpectInvalidValue(agency, 'agency_lang') | |
# capitalized language code is OK | |
agency = transitfeed.Agency(name='Test Agency', url='http://example.com', | |
timezone='America/Los_Angeles', id='TA', | |
lang='EN') | |
self.ExpectNoProblems(agency) | |
# extra attribute in constructor is fine, only checked when loading a file | |
agency = transitfeed.Agency(name='Test Agency', url='http://example.com', | |
timezone='America/Los_Angeles', | |
agency_mission='monorail you there') | |
self.ExpectNoProblems(agency) | |
# an extra attribute assigned later is also fine | |
agency = transitfeed.Agency(name='Test Agency', url='http://example.com', | |
timezone='America/Los_Angeles') | |
agency.agency_mission='monorail you there' | |
self.ExpectNoProblems(agency) | |
# Multiple problems | |
agency = transitfeed.Agency(name='Test Agency', url='www.example.com', | |
timezone='America/West Coast', id='TA') | |
self.assertEquals(False, agency.Validate(self.problems)) | |
e = self.accumulator.PopException('InvalidValue') | |
self.assertEqual(e.column_name, 'agency_url') | |
e = self.accumulator.PopException('InvalidValue') | |
self.assertEqual(e.column_name, 'agency_timezone') | |
self.accumulator.AssertNoMoreExceptions() | |
class AgencyAttributesTestCase(ValidationTestCase): | |
def testCopy(self): | |
agency = transitfeed.Agency(field_dict={'agency_name': 'Test Agency', | |
'agency_url': 'http://example.com', | |
'timezone': 'America/Los_Angeles', | |
'agency_mission': 'get you there'}) | |
self.assertEquals(agency.agency_mission, 'get you there') | |
agency_copy = transitfeed.Agency(field_dict=agency) | |
self.assertEquals(agency_copy.agency_mission, 'get you there') | |
self.assertEquals(agency_copy['agency_mission'], 'get you there') | |
def testEq(self): | |
agency1 = transitfeed.Agency("Test Agency", "http://example.com", | |
"America/Los_Angeles") | |
agency2 = transitfeed.Agency("Test Agency", "http://example.com", | |
"America/Los_Angeles") | |
# Unknown columns, such as agency_mission, do affect equality | |
self.assertEquals(agency1, agency2) | |
agency1.agency_mission = "Get you there" | |
self.assertNotEquals(agency1, agency2) | |
agency2.agency_mission = "Move you" | |
self.assertNotEquals(agency1, agency2) | |
agency1.agency_mission = "Move you" | |
self.assertEquals(agency1, agency2) | |
# Private attributes don't affect equality | |
agency1._private_attr = "My private message" | |
self.assertEquals(agency1, agency2) | |
agency2._private_attr = "Another private thing" | |
self.assertEquals(agency1, agency2) | |
def testDict(self): | |
agency = transitfeed.Agency("Test Agency", "http://example.com", | |
"America/Los_Angeles") | |
agency._private_attribute = "blah" | |
# Private attributes don't appear when iterating through an agency as a | |
# dict but can be directly accessed. | |
self.assertEquals("blah", agency._private_attribute) | |
self.assertEquals("blah", agency["_private_attribute"]) | |
self.assertEquals( | |
set("agency_name agency_url agency_timezone".split()), | |
set(agency.keys())) | |
self.assertEquals({"agency_name": "Test Agency", | |
"agency_url": "http://example.com", | |
"agency_timezone": "America/Los_Angeles"}, | |
dict(agency.iteritems())) | |
class StopValidationTestCase(ValidationTestCase): | |
def runTest(self): | |
# success case | |
stop = transitfeed.Stop() | |
stop.stop_id = '45' | |
stop.stop_name = 'Couch AT End Table' | |
stop.stop_lat = 50.0 | |
stop.stop_lon = 50.0 | |
stop.stop_desc = 'Edge of the Couch' | |
stop.zone_id = 'A' | |
stop.stop_url = 'http://example.com' | |
stop.Validate(self.problems) | |
# latitude too large | |
stop.stop_lat = 100.0 | |
self.ExpectInvalidValue(stop, 'stop_lat') | |
stop.stop_lat = 50.0 | |
# latitude as a string works when it is valid | |
stop.stop_lat = '50.0' | |
stop.Validate(self.problems) | |
self.accumulator.AssertNoMoreExceptions() | |
stop.stop_lat = '10f' | |
self.ExpectInvalidValue(stop, 'stop_lat') | |
stop.stop_lat = 50.0 | |
# longitude too large | |
stop.stop_lon = 200.0 | |
self.ExpectInvalidValue(stop, 'stop_lon') | |
stop.stop_lon = 50.0 | |
# lat, lon too close to 0, 0 | |
stop.stop_lat = 0.0 | |
stop.stop_lon = 0.0 | |
self.ExpectInvalidValue(stop, 'stop_lat') | |
stop.stop_lat = 50.0 | |
stop.stop_lon = 50.0 | |
# invalid stop_url | |
stop.stop_url = 'www.example.com' | |
self.ExpectInvalidValue(stop, 'stop_url') | |
stop.stop_url = 'http://example.com' | |
stop.stop_id = ' ' | |
self.ExpectMissingValue(stop, 'stop_id') | |
stop.stop_id = '45' | |
stop.stop_name = '' | |
self.ExpectMissingValue(stop, 'stop_name') | |
stop.stop_name = 'Couch AT End Table' | |
# description same as name | |
stop.stop_desc = 'Couch AT End Table' | |
self.ExpectInvalidValue(stop, 'stop_desc') | |
stop.stop_desc = 'Edge of the Couch' | |
self.accumulator.AssertNoMoreExceptions() | |
class StopAttributes(ValidationTestCase): | |
def testWithoutSchedule(self): | |
stop = transitfeed.Stop() | |
stop.Validate(self.problems) | |
for name in "stop_id stop_name stop_lat stop_lon".split(): | |
e = self.accumulator.PopException('MissingValue') | |
self.assertEquals(name, e.column_name) | |
self.accumulator.AssertNoMoreExceptions() | |
stop = transitfeed.Stop() | |
# Test behaviour for unset and unknown attribute | |
self.assertEquals(stop['new_column'], '') | |
try: | |
t = stop.new_column | |
self.fail('Expecting AttributeError') | |
except AttributeError, e: | |
pass # Expected | |
stop.stop_id = 'a' | |
stop.stop_name = 'my stop' | |
stop.new_column = 'val' | |
stop.stop_lat = 5.909 | |
stop.stop_lon = '40.02' | |
self.assertEquals(stop.new_column, 'val') | |
self.assertEquals(stop['new_column'], 'val') | |
self.assertTrue(isinstance(stop['stop_lat'], basestring)) | |
self.assertAlmostEqual(float(stop['stop_lat']), 5.909) | |
self.assertTrue(isinstance(stop['stop_lon'], basestring)) | |
self.assertAlmostEqual(float(stop['stop_lon']), 40.02) | |
stop.Validate(self.problems) | |
self.accumulator.AssertNoMoreExceptions() | |
# After validation stop.stop_lon has been converted to a float | |
self.assertAlmostEqual(stop.stop_lat, 5.909) | |
self.assertAlmostEqual(stop.stop_lon, 40.02) | |
self.assertEquals(stop.new_column, 'val') | |
self.assertEquals(stop['new_column'], 'val') | |
def testBlankAttributeName(self): | |
stop1 = transitfeed.Stop(field_dict={"": "a"}) | |
stop2 = transitfeed.Stop(field_dict=stop1) | |
self.assertEquals("a", getattr(stop1, "")) | |
# The attribute "" is treated as private and not copied | |
self.assertRaises(AttributeError, getattr, stop2, "") | |
self.assertEquals(set(), set(stop1.keys())) | |
self.assertEquals(set(), set(stop2.keys())) | |
def testWithSchedule(self): | |
schedule = transitfeed.Schedule(problem_reporter=self.problems) | |
stop = transitfeed.Stop(field_dict={}) | |
# AddStopObject silently fails for Stop objects without stop_id | |
schedule.AddStopObject(stop) | |
self.assertFalse(schedule.GetStopList()) | |
self.assertFalse(stop._schedule) | |
# Okay to add a stop with only stop_id | |
stop = transitfeed.Stop(field_dict={"stop_id": "b"}) | |
schedule.AddStopObject(stop) | |
stop.Validate(self.problems) | |
for name in "stop_name stop_lat stop_lon".split(): | |
e = self.accumulator.PopException("MissingValue") | |
self.assertEquals(name, e.column_name) | |
self.accumulator.AssertNoMoreExceptions() | |
stop.new_column = "val" | |
self.assertTrue("new_column" in schedule.GetTableColumns("stops")) | |
# Adding a duplicate stop_id fails | |
schedule.AddStopObject(transitfeed.Stop(field_dict={"stop_id": "b"})) | |
self.accumulator.PopException("DuplicateID") | |
self.accumulator.AssertNoMoreExceptions() | |
class StopTimeValidationTestCase(ValidationTestCase): | |
def runTest(self): | |
stop = transitfeed.Stop() | |
self.ExpectInvalidValueInClosure('arrival_time', '1a:00:00', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="1a:00:00")) | |
self.ExpectInvalidValueInClosure('departure_time', '1a:00:00', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="10:00:00", | |
departure_time='1a:00:00')) | |
self.ExpectInvalidValueInClosure('pickup_type', '7.8', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="10:00:00", | |
departure_time='10:05:00', | |
pickup_type='7.8', | |
drop_off_type='0')) | |
self.ExpectInvalidValueInClosure('drop_off_type', 'a', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="10:00:00", | |
departure_time='10:05:00', | |
pickup_type='3', | |
drop_off_type='a')) | |
self.ExpectInvalidValueInClosure('shape_dist_traveled', '$', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="10:00:00", | |
departure_time='10:05:00', | |
pickup_type='3', | |
drop_off_type='0', | |
shape_dist_traveled='$')) | |
self.ExpectInvalidValueInClosure('shape_dist_traveled', '0,53', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="10:00:00", | |
departure_time='10:05:00', | |
pickup_type='3', | |
drop_off_type='0', | |
shape_dist_traveled='0,53')) | |
self.ExpectOtherProblemInClosure( | |
lambda: transitfeed.StopTime(self.problems, stop, | |
pickup_type='1', drop_off_type='1')) | |
self.ExpectInvalidValueInClosure('departure_time', '10:00:00', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="11:00:00", | |
departure_time="10:00:00")) | |
self.ExpectMissingValueInClosure('arrival_time', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
departure_time="10:00:00")) | |
self.ExpectMissingValueInClosure('arrival_time', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
departure_time="10:00:00", | |
arrival_time="")) | |
self.ExpectMissingValueInClosure('departure_time', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="10:00:00")) | |
self.ExpectMissingValueInClosure('departure_time', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="10:00:00", | |
departure_time="")) | |
self.ExpectInvalidValueInClosure('departure_time', '10:70:00', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="10:00:00", | |
departure_time="10:70:00")) | |
self.ExpectInvalidValueInClosure('departure_time', '10:00:62', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="10:00:00", | |
departure_time="10:00:62")) | |
self.ExpectInvalidValueInClosure('arrival_time', '10:00:63', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="10:00:63", | |
departure_time="10:10:00")) | |
self.ExpectInvalidValueInClosure('arrival_time', '10:60:00', | |
lambda: transitfeed.StopTime(self.problems, stop, | |
arrival_time="10:60:00", | |
departure_time="11:02:00")) | |
self.ExpectInvalidValueInClosure('stop', "id", | |
lambda: transitfeed.StopTime(self.problems, "id", | |
arrival_time="10:00:00", | |
departure_time="11:02:00")) | |
self.ExpectInvalidValueInClosure('stop', "3", | |
lambda: transitfeed.StopTime(self.problems, "3", | |
arrival_time="10:00:00", | |
departure_time="11:02:00")) | |
self.ExpectInvalidValueInClosure('stop', None, | |
lambda: transitfeed.StopTime(self.problems, None, | |
arrival_time="10:00:00", | |
departure_time="11:02:00")) | |
# The following should work | |
transitfeed.StopTime(self.problems, stop, arrival_time="10:00:00", | |
departure_time="10:05:00", pickup_type='1', drop_off_type='1') | |
transitfeed.StopTime(self.problems, stop, arrival_time="10:00:00", | |
departure_time="10:05:00", pickup_type='1', drop_off_type='1') | |
transitfeed.StopTime(self.problems, stop, arrival_time="1:00:00", | |
departure_time="1:05:00") | |
transitfeed.StopTime(self.problems, stop, arrival_time="24:59:00", | |
departure_time="25:05:00") | |
transitfeed.StopTime(self.problems, stop, arrival_time="101:01:00", | |
departure_time="101:21:00") | |
transitfeed.StopTime(self.problems, stop) | |
self.accumulator.AssertNoMoreExceptions() | |
class TooFastTravelTestCase(ValidationTestCase): | |
def setUp(self): | |
super(TooFastTravelTestCase, self).setUp() | |
self.schedule = self.SimpleSchedule() | |
self.route = self.schedule.GetRoute("054C") | |
self.trip = self.route.AddTrip() | |
def AddStopDistanceTime(self, dist_time_list): | |
# latitude at which each 0.01 degrees of longitude is 1 km | |
magic_lat = 26.062468289 | |
stop = self.schedule.AddStop(magic_lat, 0, "Demo Stop 0") | |
time = 0 | |
self.trip.AddStopTime(stop, arrival_secs=time, departure_secs=time) | |
for i, (dist_delta, time_delta) in enumerate(dist_time_list): | |
stop = self.schedule.AddStop( | |
magic_lat, stop.stop_lon + dist_delta * 0.00001, | |
"Demo Stop %d" % (i + 1)) | |
time += time_delta | |
self.trip.AddStopTime(stop, arrival_secs=time, departure_secs=time) | |
def testMovingTooFast(self): | |
self.AddStopDistanceTime([(1691, 60), | |
(1616, 60)]) | |
self.trip.Validate(self.problems) | |
e = self.accumulator.PopException('TooFastTravel') | |
self.assertMatchesRegex(r'High speed travel detected', e.FormatProblem()) | |
self.assertMatchesRegex(r'Stop 0 to Demo Stop 1', e.FormatProblem()) | |
self.assertMatchesRegex(r'1691 meters in 60 seconds', e.FormatProblem()) | |
self.assertMatchesRegex(r'\(101 km/h\)', e.FormatProblem()) | |
self.assertEqual(e.type, transitfeed.TYPE_WARNING) | |
self.accumulator.AssertNoMoreExceptions() | |
self.route.route_type = 4 # Ferry with max_speed 80 | |
self.trip.Validate(self.problems) | |
e = self.accumulator.PopException('TooFastTravel') | |
self.assertMatchesRegex(r'High speed travel detected', e.FormatProblem()) | |
self.assertMatchesRegex(r'Stop 0 to Demo Stop 1', e.FormatProblem()) | |
self.assertMatchesRegex(r'1691 meters in 60 seconds', e.FormatProblem()) | |
self.assertMatchesRegex(r'\(101 km/h\)', e.FormatProblem()) | |
self.assertEqual(e.type, transitfeed.TYPE_WARNING) | |
e = self.accumulator.PopException('TooFastTravel') | |
self.assertMatchesRegex(r'High speed travel detected', e.FormatProblem()) | |
self.assertMatchesRegex(r'Stop 1 to Demo Stop 2', e.FormatProblem()) | |
self.assertMatchesRegex(r'1616 meters in 60 seconds', e.FormatProblem()) | |
self.assertMatchesRegex(r'97 km/h', e.FormatProblem()) | |
self.assertEqual(e.type, transitfeed.TYPE_WARNING) | |
self.accumulator.AssertNoMoreExceptions() | |
# Run test without a route_type | |
self.route.route_type = None | |
self.trip.Validate(self.problems) | |
e = self.accumulator.PopException('TooFastTravel') | |
self.assertMatchesRegex(r'High speed travel detected', e.FormatProblem()) | |
self.assertMatchesRegex(r'Stop 0 to Demo Stop 1', e.FormatProblem()) | |
self.assertMatchesRegex(r'1691 meters in 60 seconds', e.FormatProblem()) | |
self.assertMatchesRegex(r'101 km/h', e.FormatProblem()) | |
self.assertEqual(e.type, transitfeed.TYPE_WARNING) | |
self.accumulator.AssertNoMoreExceptions() | |
def testNoTimeDelta(self): | |
# See comments where TooFastTravel is called in transitfeed.py to | |
# understand why this check was added. | |
# Movement more than max_speed in 1 minute with no time change is a warning. | |
self.AddStopDistanceTime([(1616, 0), | |
(1000, 120), | |
(1691, 0)]) | |
self.trip.Validate(self.problems) | |
e = self.accumulator.PopException('TooFastTravel') | |
self.assertMatchesRegex('High speed travel detected', e.FormatProblem()) | |
self.assertMatchesRegex('Stop 2 to Demo Stop 3', e.FormatProblem()) | |
self.assertMatchesRegex('1691 meters in 0 seconds', e.FormatProblem()) | |
self.assertEqual(e.type, transitfeed.TYPE_WARNING) | |
self.accumulator.AssertNoMoreExceptions() | |
self.route.route_type = 4 # Ferry with max_speed 80 | |
self.trip.Validate(self.problems) | |
e = self.accumulator.PopException('TooFastTravel') | |
self.assertMatchesRegex('High speed travel detected', e.FormatProblem()) | |
self.assertMatchesRegex('Stop 0 to Demo Stop 1', e.FormatProblem()) | |
self.assertMatchesRegex('1616 meters in 0 seconds', e.FormatProblem()) | |
self.assertEqual(e.type, transitfeed.TYPE_WARNING) | |
e = self.accumulator.PopException('TooFastTravel') | |
self.assertMatchesRegex('High speed travel detected', e.FormatProblem()) | |
self.assertMatchesRegex('Stop 2 to Demo Stop 3', e.FormatProblem()) | |
self.assertMatchesRegex('1691 meters in 0 seconds', e.FormatProblem()) | |
self.assertEqual(e.type, transitfeed.TYPE_WARNING) | |
self.accumulator.AssertNoMoreExceptions() | |
# Run test without a route_type | |
self.route.route_type = None | |
self.trip.Validate(self.problems) | |
e = self.accumulator.PopException('TooFastTravel') | |
self.assertMatchesRegex('High speed travel detected', e.FormatProblem()) | |
self.assertMatchesRegex('Stop 2 to Demo Stop 3', e.FormatProblem()) | |
self.assertMatchesRegex('1691 meters in 0 seconds', e.FormatProblem()) | |
self.assertEqual(e.type, transitfeed.TYPE_WARNING) | |
self.accumulator.AssertNoMoreExceptions() | |
def testNoTimeDeltaNotRounded(self): | |
# See the comments where TooFastTravel is called in transitfeed.py to
# understand why this test was added.
# Any movement with no time change and times not rounded to the nearest | |
# minute causes a warning. | |
self.AddStopDistanceTime([(500, 62), | |
(10, 0)]) | |
self.trip.Validate(self.problems) | |
e = self.accumulator.PopException('TooFastTravel') | |
self.assertMatchesRegex('High speed travel detected', e.FormatProblem()) | |
self.assertMatchesRegex('Stop 1 to Demo Stop 2', e.FormatProblem()) | |
self.assertMatchesRegex('10 meters in 0 seconds', e.FormatProblem()) | |
self.assertEqual(e.type, transitfeed.TYPE_WARNING) | |
self.accumulator.AssertNoMoreExceptions() | |
class MemoryZipTestCase(util.TestCase): | |
"""Base for TestCase classes which read from an in-memory zip file. | |
A test that loads data from this zip file exercises almost all the code used | |
when the feedvalidator runs, but does not touch disk. Unfortunately it is very | |
difficult to add new stops to the default stops.txt because a new stop will | |
break tests in StopHierarchyTestCase and StopsNearEachOther.""" | |
def setUp(self): | |
self.accumulator = RecordingProblemAccumulator(self, ("ExpirationDate",)) | |
self.problems = transitfeed.ProblemReporter(self.accumulator) | |
self.zip_contents = {} | |
self.SetArchiveContents( | |
"agency.txt", | |
"agency_id,agency_name,agency_url,agency_timezone\n" | |
"DTA,Demo Agency,http://google.com,America/Los_Angeles\n") | |
self.SetArchiveContents( | |
"calendar.txt", | |
"service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday," | |
"start_date,end_date\n" | |
"FULLW,1,1,1,1,1,1,1,20070101,20101231\n" | |
"WE,0,0,0,0,0,1,1,20070101,20101231\n") | |
self.SetArchiveContents( | |
"calendar_dates.txt", | |
"service_id,date,exception_type\n" | |
"FULLW,20070101,1\n") | |
self.SetArchiveContents( | |
"routes.txt", | |
"route_id,agency_id,route_short_name,route_long_name,route_type\n" | |
"AB,DTA,,Airport Bullfrog,3\n") | |
self.SetArchiveContents( | |
"trips.txt", | |
"route_id,service_id,trip_id\n" | |
"AB,FULLW,AB1\n") | |
self.SetArchiveContents( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon\n" | |
"BEATTY_AIRPORT,Airport,36.868446,-116.784582\n" | |
"BULLFROG,Bullfrog,36.88108,-116.81797\n" | |
"STAGECOACH,Stagecoach Hotel,36.915682,-116.751677\n") | |
self.SetArchiveContents( | |
"stop_times.txt", | |
"trip_id,arrival_time,departure_time,stop_id,stop_sequence\n" | |
"AB1,10:00:00,10:00:00,BEATTY_AIRPORT,1\n" | |
"AB1,10:20:00,10:20:00,BULLFROG,2\n" | |
"AB1,10:25:00,10:25:00,STAGECOACH,3\n") | |
def MakeLoaderAndLoad(self, | |
problems=None, | |
extra_validation=True): | |
"""Returns a Schedule loaded with the contents of the file dict.""" | |
if problems is None: | |
problems = self.problems | |
self.CreateZip() | |
self.loader = transitfeed.Loader( | |
problems=problems, | |
extra_validation=extra_validation, | |
zip=self.zip) | |
return self.loader.Load() | |
def AppendToArchiveContents(self, arcname, s): | |
"""Append string s to file arcname in the file dict. | |
All calls to this function, if any, should be made before calling | |
MakeLoaderAndLoad.""" | |
current_contents = self.zip_contents[arcname] | |
self.zip_contents[arcname] = current_contents + s | |
def SetArchiveContents(self, arcname, contents): | |
"""Set the contents of file arcname in the file dict. | |
All calls to this function, if any, should be made before calling | |
MakeLoaderAndLoad.""" | |
self.zip_contents[arcname] = contents | |
def GetArchiveContents(self, arcname): | |
"""Get the contents of file arcname in the file dict.""" | |
return self.zip_contents[arcname] | |
def RemoveArchive(self, arcname): | |
"""Remove file arcname from the file dict. | |
All calls to this function, if any, should be made before calling | |
MakeLoaderAndLoad.""" | |
del self.zip_contents[arcname] | |
def GetArchiveNames(self): | |
"""Get a list of all the archive names in the file dict.""" | |
return self.zip_contents.keys() | |
def CreateZip(self): | |
"""Create an in-memory GTFS zipfile from the contents of the file dict.""" | |
self.zipfile = StringIO() | |
self.zip = zipfile.ZipFile(self.zipfile, 'a') | |
for (arcname, contents) in self.zip_contents.items(): | |
self.zip.writestr(arcname, contents) | |
def DumpZipFile(self, zf): | |
"""Print the contents of something zipfile can open, such as a StringIO.""" | |
# Handy for debugging | |
z = zipfile.ZipFile(zf) | |
for n in z.namelist(): | |
print "--\n%s\n%s" % (n, z.read(n)) | |
class CsvDictTestCase(util.TestCase): | |
def setUp(self): | |
self.accumulator = RecordingProblemAccumulator(self) | |
self.problems = transitfeed.ProblemReporter(self.accumulator) | |
self.zip = zipfile.ZipFile(StringIO(), 'a') | |
self.loader = transitfeed.Loader( | |
problems=self.problems, | |
zip=self.zip) | |
def tearDown(self): | |
self.accumulator.TearDownAssertNoMoreExceptions() | |
def testEmptyFile(self): | |
self.zip.writestr("test.txt", "") | |
results = list(self.loader._ReadCsvDict("test.txt", [], [])) | |
self.assertEquals([], results) | |
self.accumulator.PopException("EmptyFile") | |
self.accumulator.AssertNoMoreExceptions() | |
def testHeaderOnly(self): | |
self.zip.writestr("test.txt", "test_id,test_name") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([], results) | |
self.accumulator.AssertNoMoreExceptions() | |
def testHeaderAndNewLineOnly(self): | |
self.zip.writestr("test.txt", "test_id,test_name\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([], results) | |
self.accumulator.AssertNoMoreExceptions() | |
def testHeaderWithSpaceBefore(self): | |
self.zip.writestr("test.txt", " test_id, test_name\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([], results) | |
self.accumulator.AssertNoMoreExceptions() | |
def testHeaderWithSpaceBeforeAfter(self): | |
self.zip.writestr("test.txt", "test_id , test_name\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([], results) | |
e = self.accumulator.PopException("CsvSyntax") | |
self.accumulator.AssertNoMoreExceptions() | |
def testHeaderQuoted(self): | |
self.zip.writestr("test.txt", "\"test_id\", \"test_name\"\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([], results) | |
self.accumulator.AssertNoMoreExceptions() | |
def testHeaderSpaceAfterQuoted(self): | |
self.zip.writestr("test.txt", "\"test_id\" , \"test_name\"\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([], results) | |
e = self.accumulator.PopException("CsvSyntax") | |
self.accumulator.AssertNoMoreExceptions() | |
def testHeaderSpaceInQuotesAfterValue(self): | |
self.zip.writestr("test.txt", "\"test_id \",\"test_name\"\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([], results) | |
e = self.accumulator.PopException("CsvSyntax") | |
self.accumulator.AssertNoMoreExceptions() | |
def testHeaderSpaceInQuotesBeforeValue(self): | |
self.zip.writestr("test.txt", "\"test_id\",\" test_name\"\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([], results) | |
e = self.accumulator.PopException("CsvSyntax") | |
self.accumulator.AssertNoMoreExceptions() | |
def testHeaderEmptyColumnName(self): | |
self.zip.writestr("test.txt", 'test_id,test_name,\n') | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([], results) | |
e = self.accumulator.PopException("CsvSyntax") | |
self.accumulator.AssertNoMoreExceptions() | |
def testHeaderAllUnknownColumnNames(self): | |
self.zip.writestr("test.txt", 'id,nam\n') | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([], results) | |
e = self.accumulator.PopException("CsvSyntax") | |
self.assertTrue(e.FormatProblem().find("missing the header") != -1) | |
self.accumulator.AssertNoMoreExceptions() | |
def testFieldWithSpaces(self): | |
self.zip.writestr("test.txt", | |
"test_id,test_name\n" | |
"id1 , my name\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([({"test_id": "id1 ", "test_name": "my name"}, 2, | |
["test_id", "test_name"], ["id1 ","my name"])], results) | |
self.accumulator.AssertNoMoreExceptions() | |
def testFieldWithOnlySpaces(self): | |
self.zip.writestr("test.txt", | |
"test_id,test_name\n" | |
"id1, \n") # spaces are skipped to yield empty field | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([({"test_id": "id1", "test_name": ""}, 2, | |
["test_id", "test_name"], ["id1",""])], results) | |
self.accumulator.AssertNoMoreExceptions() | |
def testQuotedFieldWithSpaces(self): | |
self.zip.writestr("test.txt", | |
'test_id,"test_name",test_size\n' | |
'"id1" , "my name" , "234 "\n') | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name", | |
"test_size"], [])) | |
self.assertEquals( | |
[({"test_id": "id1 ", "test_name": "my name ", "test_size": "234 "}, 2, | |
["test_id", "test_name", "test_size"], ["id1 ", "my name ", "234 "])], | |
results) | |
self.accumulator.AssertNoMoreExceptions() | |
def testQuotedFieldWithCommas(self): | |
self.zip.writestr("test.txt", | |
'id,name1,name2\n' | |
'"1", "brown, tom", "brown, ""tom"""\n') | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["id", "name1", "name2"], [])) | |
self.assertEquals( | |
[({"id": "1", "name1": "brown, tom", "name2": "brown, \"tom\""}, 2, | |
["id", "name1", "name2"], ["1", "brown, tom", "brown, \"tom\""])], | |
results) | |
self.accumulator.AssertNoMoreExceptions() | |
def testUnknownColumn(self): | |
# A small typo (omitting '_' in a header name) is detected | |
self.zip.writestr("test.txt", "test_id,testname\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([], results) | |
e = self.accumulator.PopException("UnrecognizedColumn") | |
self.assertEquals("testname", e.column_name) | |
self.accumulator.AssertNoMoreExceptions() | |
def testMissingRequiredColumn(self): | |
self.zip.writestr("test.txt", "test_id,test_size\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_size"], | |
["test_name"])) | |
self.assertEquals([], results) | |
e = self.accumulator.PopException("MissingColumn") | |
self.assertEquals("test_name", e.column_name) | |
self.accumulator.AssertNoMoreExceptions() | |
def testRequiredNotInAllCols(self): | |
self.zip.writestr("test.txt", "test_id,test_name,test_size\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_size"], | |
["test_name"])) | |
self.assertEquals([], results) | |
e = self.accumulator.PopException("UnrecognizedColumn") | |
self.assertEquals("test_name", e.column_name) | |
self.accumulator.AssertNoMoreExceptions() | |
def testBlankLine(self): | |
# line_num is increased for an empty line | |
self.zip.writestr("test.txt", | |
"test_id,test_name\n" | |
"\n" | |
"id1,my name\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([({"test_id": "id1", "test_name": "my name"}, 3, | |
["test_id", "test_name"], ["id1","my name"])], results) | |
self.accumulator.AssertNoMoreExceptions() | |
def testExtraComma(self): | |
self.zip.writestr("test.txt", | |
"test_id,test_name\n" | |
"id1,my name,\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([({"test_id": "id1", "test_name": "my name"}, 2, | |
["test_id", "test_name"], ["id1","my name"])], | |
results) | |
e = self.accumulator.PopException("OtherProblem") | |
self.assertTrue(e.FormatProblem().find("too many cells") != -1) | |
self.accumulator.AssertNoMoreExceptions() | |
def testMissingComma(self): | |
self.zip.writestr("test.txt", | |
"test_id,test_name\n" | |
"id1 my name\n") | |
results = list(self.loader._ReadCsvDict("test.txt", | |
["test_id", "test_name"], [])) | |
self.assertEquals([({"test_id": "id1 my name"}, 2, | |
["test_id", "test_name"], ["id1 my name"])], results) | |
e = self.accumulator.PopException("OtherProblem") | |
self.assertTrue(e.FormatProblem().find("missing cells") != -1) | |
self.accumulator.AssertNoMoreExceptions() | |
def testDetectsDuplicateHeaders(self): | |
self.zip.writestr( | |
"transfers.txt", | |
"from_stop_id,from_stop_id,to_stop_id,transfer_type,min_transfer_time," | |
"min_transfer_time,min_transfer_time,min_transfer_time,unknown," | |
"unknown\n" | |
"BEATTY_AIRPORT,BEATTY_AIRPORT,BULLFROG,3,,2,,,,\n" | |
"BULLFROG,BULLFROG,BEATTY_AIRPORT,2,1200,1,,,,\n") | |
list(self.loader._ReadCsvDict("transfers.txt", | |
transitfeed.Transfer._FIELD_NAMES, | |
transitfeed.Transfer._REQUIRED_FIELD_NAMES)) | |
self.accumulator.PopDuplicateColumn("transfers.txt", "min_transfer_time", 4)
self.accumulator.PopDuplicateColumn("transfers.txt", "from_stop_id", 2)
self.accumulator.PopDuplicateColumn("transfers.txt", "unknown", 2)
e = self.accumulator.PopException("UnrecognizedColumn") | |
self.assertEquals("unknown", e.column_name) | |
self.accumulator.AssertNoMoreExceptions() | |
class ReadCsvTestCase(util.TestCase): | |
def setUp(self): | |
self.accumulator = RecordingProblemAccumulator(self) | |
self.problems = transitfeed.ProblemReporter(self.accumulator) | |
self.zip = zipfile.ZipFile(StringIO(), 'a') | |
self.loader = transitfeed.Loader( | |
problems=self.problems, | |
zip=self.zip) | |
def tearDown(self): | |
self.accumulator.TearDownAssertNoMoreExceptions() | |
def testDetectsDuplicateHeaders(self): | |
self.zip.writestr( | |
"calendar.txt", | |
"service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday," | |
"start_date,end_date,end_date,end_date,tuesday,unknown,unknown\n" | |
"FULLW,1,1,1,1,1,1,1,20070101,20101231,,,,,\n") | |
list(self.loader._ReadCSV("calendar.txt", | |
transitfeed.ServicePeriod._FIELD_NAMES, | |
transitfeed.ServicePeriod._FIELD_NAMES_REQUIRED)) | |
self.accumulator.PopDuplicateColumn("calendar.txt", "end_date", 3)
self.accumulator.PopDuplicateColumn("calendar.txt", "unknown", 2)
self.accumulator.PopDuplicateColumn("calendar.txt", "tuesday", 2)
e = self.accumulator.PopException("UnrecognizedColumn") | |
self.assertEquals("unknown", e.column_name) | |
self.accumulator.AssertNoMoreExceptions() | |
class BasicMemoryZipTestCase(MemoryZipTestCase): | |
def runTest(self): | |
self.MakeLoaderAndLoad() | |
self.accumulator.AssertNoMoreExceptions() | |
class ZipCompressionTestCase(MemoryZipTestCase): | |
def runTest(self): | |
schedule = self.MakeLoaderAndLoad() | |
self.zip.close() | |
write_output = StringIO() | |
schedule.WriteGoogleTransitFeed(write_output) | |
recompressedzip = zlib.compress(write_output.getvalue()) | |
write_size = len(write_output.getvalue()) | |
recompressedzip_size = len(recompressedzip) | |
# If zlib can compress write_output significantly, it probably wasn't already compressed
self.assertFalse( | |
recompressedzip_size < write_size * 0.60, | |
"Are you sure WriteGoogleTransitFeed wrote a compressed zip? " | |
"Original size: %d recompressed: %d" %
(write_size, recompressedzip_size)) | |
class StopHierarchyTestCase(MemoryZipTestCase): | |
def testParentAtSameLatLon(self): | |
self.SetArchiveContents( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
"BEATTY_AIRPORT,Airport,36.868446,-116.784582,,STATION\n" | |
"STATION,Airport,36.868446,-116.784582,1,\n" | |
"BULLFROG,Bullfrog,36.88108,-116.81797,,\n" | |
"STAGECOACH,Stagecoach Hotel,36.915682,-116.751677,,\n") | |
schedule = self.MakeLoaderAndLoad() | |
self.assertEquals(1, schedule.stops["STATION"].location_type) | |
self.assertEquals(0, schedule.stops["BEATTY_AIRPORT"].location_type) | |
self.accumulator.AssertNoMoreExceptions() | |
def testBadLocationType(self): | |
self.SetArchiveContents( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type\n" | |
"BEATTY_AIRPORT,Airport,36.868446,-116.784582,2\n" | |
"BULLFROG,Bullfrog,36.88108,-116.81797,notvalid\n" | |
"STAGECOACH,Stagecoach Hotel,36.915682,-116.751677,\n") | |
schedule = self.MakeLoaderAndLoad() | |
e = self.accumulator.PopException("InvalidValue") | |
self.assertEquals("location_type", e.column_name) | |
self.assertEquals(2, e.row_num) | |
self.assertEquals(1, e.type) | |
e = self.accumulator.PopException("InvalidValue") | |
self.assertEquals("location_type", e.column_name) | |
self.assertEquals(3, e.row_num) | |
self.assertEquals(0, e.type) | |
self.accumulator.AssertNoMoreExceptions() | |
def testBadLocationTypeAtSameLatLon(self): | |
self.SetArchiveContents( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
"BEATTY_AIRPORT,Airport,36.868446,-116.784582,,STATION\n" | |
"STATION,Airport,36.868446,-116.784582,2,\n" | |
"BULLFROG,Bullfrog,36.88108,-116.81797,,\n" | |
"STAGECOACH,Stagecoach Hotel,36.915682,-116.751677,,\n") | |
schedule = self.MakeLoaderAndLoad() | |
e = self.accumulator.PopException("InvalidValue") | |
self.assertEquals("location_type", e.column_name) | |
self.assertEquals(3, e.row_num) | |
e = self.accumulator.PopException("InvalidValue") | |
self.assertEquals("parent_station", e.column_name) | |
self.accumulator.AssertNoMoreExceptions() | |
def testStationUsed(self): | |
self.SetArchiveContents( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type\n" | |
"BEATTY_AIRPORT,Airport,36.868446,-116.784582,1\n" | |
"BULLFROG,Bullfrog,36.88108,-116.81797,\n" | |
"STAGECOACH,Stagecoach Hotel,36.915682,-116.751677,\n") | |
schedule = self.MakeLoaderAndLoad() | |
self.accumulator.PopException("UsedStation") | |
self.accumulator.AssertNoMoreExceptions() | |
def testParentNotFound(self): | |
self.SetArchiveContents( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
"BEATTY_AIRPORT,Airport,36.868446,-116.784582,,STATION\n" | |
"BULLFROG,Bullfrog,36.88108,-116.81797,,\n" | |
"STAGECOACH,Stagecoach Hotel,36.915682,-116.751677,,\n") | |
schedule = self.MakeLoaderAndLoad() | |
e = self.accumulator.PopException("InvalidValue") | |
self.assertEquals("parent_station", e.column_name) | |
self.accumulator.AssertNoMoreExceptions() | |
def testParentIsStop(self): | |
self.SetArchiveContents( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
"BEATTY_AIRPORT,Airport,36.868446,-116.784582,,BULLFROG\n" | |
"BULLFROG,Bullfrog,36.88108,-116.81797,,\n" | |
"STAGECOACH,Stagecoach Hotel,36.915682,-116.751677,,\n") | |
schedule = self.MakeLoaderAndLoad() | |
e = self.accumulator.PopException("InvalidValue") | |
self.assertEquals("parent_station", e.column_name) | |
self.accumulator.AssertNoMoreExceptions() | |
def testParentOfEntranceIsStop(self): | |
self.SetArchiveContents( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
"BEATTY_AIRPORT,Airport,36.868446,-116.784582,2,BULLFROG\n" | |
"BULLFROG,Bullfrog,36.88108,-116.81797,,\n" | |
"STAGECOACH,Stagecoach Hotel,36.915682,-116.751677,,\n") | |
schedule = self.MakeLoaderAndLoad() | |
e = self.accumulator.PopException("InvalidValue") | |
self.assertEquals("location_type", e.column_name) | |
e = self.accumulator.PopException("InvalidValue") | |
self.assertEquals("parent_station", e.column_name) | |
self.assertTrue(e.FormatProblem().find("location_type=1") != -1) | |
self.accumulator.AssertNoMoreExceptions() | |
def testStationWithParent(self): | |
self.SetArchiveContents( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
"BEATTY_AIRPORT,Airport,36.868446,-116.784582,,STATION\n" | |
"STATION,Airport,36.868446,-116.784582,1,STATION2\n" | |
"STATION2,Airport 2,36.868000,-116.784000,1,\n" | |
"BULLFROG,Bullfrog,36.868088,-116.784797,,STATION2\n" | |
"STAGECOACH,Stagecoach Hotel,36.915682,-116.751677,,\n") | |
schedule = self.MakeLoaderAndLoad() | |
e = self.accumulator.PopException("InvalidValue") | |
self.assertEquals("parent_station", e.column_name) | |
self.assertEquals(3, e.row_num) | |
self.accumulator.AssertNoMoreExceptions() | |
def testStationWithSelfParent(self): | |
self.SetArchiveContents( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
"BEATTY_AIRPORT,Airport,36.868446,-116.784582,,STATION\n" | |
"STATION,Airport,36.868446,-116.784582,1,STATION\n" | |
"BULLFROG,Bullfrog,36.88108,-116.81797,,\n" | |
"STAGECOACH,Stagecoach Hotel,36.915682,-116.751677,,\n") | |
schedule = self.MakeLoaderAndLoad() | |
e = self.accumulator.PopException("InvalidValue") | |
self.assertEquals("parent_station", e.column_name) | |
self.assertEquals(3, e.row_num) | |
self.accumulator.AssertNoMoreExceptions() | |
def testStopNearToNonParentStation(self): | |
self.SetArchiveContents( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
"BEATTY_AIRPORT,Airport,36.868446,-116.784582,,\n" | |
"BULLFROG,Bullfrog,36.868446,-116.784582,,\n" | |
"BULLFROG_ST,Bullfrog,36.868446,-116.784582,1,\n" | |
"STAGECOACH,Stagecoach Hotel,36.915682,-116.751677,,\n") | |
schedule = self.MakeLoaderAndLoad() | |
e = self.accumulator.PopException("DifferentStationTooClose") | |
self.assertMatchesRegex( | |
"The parent_station of stop \"Bullfrog\"", e.FormatProblem()) | |
e = self.accumulator.PopException("StopsTooClose") | |
self.assertMatchesRegex("BEATTY_AIRPORT", e.FormatProblem()) | |
self.assertMatchesRegex("BULLFROG", e.FormatProblem()) | |
self.assertMatchesRegex("are 0.00m apart", e.FormatProblem()) | |
e = self.accumulator.PopException("DifferentStationTooClose") | |
self.assertMatchesRegex( | |
"The parent_station of stop \"Airport\"", e.FormatProblem()) | |
self.accumulator.AssertNoMoreExceptions() | |
def testStopTooFarFromParentStation(self): | |
self.SetArchiveContents( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
"BULLFROG_ST,Bullfrog,36.880,-116.817,1,\n" # Parent station of all. | |
"BEATTY_AIRPORT,Airport,36.880,-116.816,,BULLFROG_ST\n" # ~ 90m far | |
"BULLFROG,Bullfrog,36.881,-116.818,,BULLFROG_ST\n" # ~ 150m far | |
"STAGECOACH,Stagecoach,36.915,-116.751,,BULLFROG_ST\n") # > 3km far | |
schedule = self.MakeLoaderAndLoad() | |
e = self.accumulator.PopException("StopTooFarFromParentStation") | |
self.assertEqual(1, e.type) # Warning | |
self.assertTrue(e.FormatProblem().find( | |
"Bullfrog (ID BULLFROG) is too far from its parent" | |
" station Bullfrog (ID BULLFROG_ST)") != -1) | |
e = self.accumulator.PopException("StopTooFarFromParentStation") | |
self.assertEqual(0, e.type) # Error | |
self.assertTrue(e.FormatProblem().find( | |
"Stagecoach (ID STAGECOACH) is too far from its parent" | |
" station Bullfrog (ID BULLFROG_ST)") != -1) | |
self.accumulator.AssertNoMoreExceptions() | |
# Uncomment once validation is implemented
#def testStationWithoutReference(self): | |
# self.SetArchiveContents( | |
# "stops.txt", | |
# "stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
# "BEATTY_AIRPORT,Airport,36.868446,-116.784582,,\n" | |
# "STATION,Airport,36.868446,-116.784582,1,\n" | |
# "BULLFROG,Bullfrog,36.88108,-116.81797,,\n" | |
# "STAGECOACH,Stagecoach Hotel,36.915682,-116.751677,,\n") | |
# schedule = self.MakeLoaderAndLoad() | |
# e = self.accumulator.PopException("OtherProblem") | |
# self.assertEquals("parent_station", e.column_name) | |
# self.assertEquals(2, e.row_num) | |
# self.accumulator.AssertNoMoreExceptions() | |
class StopSpacesTestCase(MemoryZipTestCase): | |
def testFieldsWithSpace(self): | |
self.SetArchiveContents( | |
"stops.txt", | |
"stop_id,stop_code,stop_name,stop_lat,stop_lon,stop_url,location_type," | |
"parent_station\n" | |
"BEATTY_AIRPORT, ,Airport,36.868446,-116.784582, , ,\n" | |
"BULLFROG,,Bullfrog,36.88108,-116.81797,,,\n" | |
"STAGECOACH,,Stagecoach Hotel,36.915682,-116.751677,,,\n") | |
schedule = self.MakeLoaderAndLoad() | |
self.accumulator.AssertNoMoreExceptions() | |
class StopBlankHeaders(MemoryZipTestCase): | |
def testBlankHeaderValueAtEnd(self): | |
# Modify the stops.txt added by MemoryZipTestCase.setUp. This allows the | |
# original stops.txt to be changed without modifying anything in this test. | |
# Add a column to the end of every row, leaving the header name blank. | |
new = [] | |
for i, row in enumerate( | |
self.GetArchiveContents("stops.txt").split("\n")): | |
if i == 0: | |
new.append(row + ",") | |
elif row: | |
new.append(row + "," + str(i)) # Put a junk value in data rows | |
self.SetArchiveContents("stops.txt", "\n".join(new)) | |
schedule = self.MakeLoaderAndLoad() | |
e = self.accumulator.PopException("CsvSyntax") | |
self.assertTrue(e.FormatProblem(). | |
find("header row should not contain any blank") != -1) | |
self.accumulator.AssertNoMoreExceptions() | |
def testBlankHeaderValueAtStart(self): | |
# Modify the stops.txt added by MemoryZipTestCase.setUp. This allows the | |
# original stops.txt to be changed without modifying anything in this test. | |
# Add a column to the start of every row, leaving the header name blank. | |
new = [] | |
for i, row in enumerate( | |
self.GetArchiveContents("stops.txt").split("\n")): | |
if i == 0: | |
new.append("," + row) | |
elif row: | |
new.append(str(i) + "," + row) # Put a junk value in data rows | |
self.SetArchiveContents("stops.txt", "\n".join(new)) | |
schedule = self.MakeLoaderAndLoad() | |
e = self.accumulator.PopException("CsvSyntax") | |
self.assertTrue(e.FormatProblem(). | |
find("header row should not contain any blank") != -1) | |
self.accumulator.AssertNoMoreExceptions() | |
def testBlankHeaderValueInMiddle(self): | |
# Modify the stops.txt added by MemoryZipTestCase.setUp. This allows the | |
# original stops.txt to be changed without modifying anything in this test. | |
# Add two columns to the start of every row, leaving the second header name | |
# blank. | |
new = [] | |
for i, row in enumerate( | |
self.GetArchiveContents("stops.txt").split("\n")): | |
if i == 0: | |
new.append("test_name,," + row) | |
elif row: | |
# Put a junk value in data rows | |
new.append(str(i) + "," + str(i) + "," + row) | |
self.SetArchiveContents("stops.txt", "\n".join(new)) | |
schedule = self.MakeLoaderAndLoad() | |
e = self.accumulator.PopException("CsvSyntax") | |
self.assertTrue(e.FormatProblem(). | |
find("header row should not contain any blank") != -1) | |
e = self.accumulator.PopException("UnrecognizedColumn") | |
self.assertEquals("test_name", e.column_name) | |
self.accumulator.AssertNoMoreExceptions() | |
class StopsNearEachOther(MemoryZipTestCase): | |
def testTooNear(self): | |
self.SetArchiveContents( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon\n" | |
"BEATTY_AIRPORT,Airport,48.20000,140\n" | |
"BULLFROG,Bullfrog,48.20001,140\n" | |
"STAGECOACH,Stagecoach Hotel,48.20016,140\n") | |
schedule = self.MakeLoaderAndLoad() | |
e = self.accumulator.PopException('StopsTooClose') | |
self.assertTrue(e.FormatProblem().find("1.11m apart") != -1) | |
self.accumulator.AssertNoMoreExceptions() | |
def testJustFarEnough(self): | |
self.SetArchiveContents( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon\n" | |
"BEATTY_AIRPORT,Airport,48.20000,140\n" | |
"BULLFROG,Bullfrog,48.20002,140\n" | |
"STAGECOACH,Stagecoach Hotel,48.20016,140\n") | |
schedule = self.MakeLoaderAndLoad() | |
# Stops are 2.2m apart | |
self.accumulator.AssertNoMoreExceptions() | |
def testSameLocation(self): | |
self.SetArchiveContents( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon\n" | |
"BEATTY_AIRPORT,Airport,48.2,140\n" | |
"BULLFROG,Bullfrog,48.2,140\n" | |
"STAGECOACH,Stagecoach Hotel,48.20016,140\n") | |
schedule = self.MakeLoaderAndLoad() | |
e = self.accumulator.PopException('StopsTooClose') | |
self.assertTrue(e.FormatProblem().find("0.00m apart") != -1) | |
self.accumulator.AssertNoMoreExceptions() | |
def testStationsTooNear(self): | |
self.SetArchiveContents( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
"BEATTY_AIRPORT,Airport,48.20000,140,,BEATTY_AIRPORT_STATION\n" | |
"BULLFROG,Bullfrog,48.20003,140,,BULLFROG_STATION\n" | |
"BEATTY_AIRPORT_STATION,Airport,48.20001,140,1,\n" | |
"BULLFROG_STATION,Bullfrog,48.20002,140,1,\n" | |
"STAGECOACH,Stagecoach Hotel,48.20016,140,,\n") | |
schedule = self.MakeLoaderAndLoad() | |
e = self.accumulator.PopException('StationsTooClose') | |
self.assertTrue(e.FormatProblem().find("1.11m apart") != -1) | |
self.assertTrue(e.FormatProblem().find("BEATTY_AIRPORT_STATION") != -1) | |
self.accumulator.AssertNoMoreExceptions() | |
def testStopNearNonParentStation(self): | |
self.SetArchiveContents( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
"BEATTY_AIRPORT,Airport,48.20000,140,,\n" | |
"BULLFROG,Bullfrog,48.20005,140,,\n" | |
"BULLFROG_STATION,Bullfrog,48.20006,140,1,\n" | |
"STAGECOACH,Stagecoach Hotel,48.20016,140,,\n") | |
schedule = self.MakeLoaderAndLoad() | |
e = self.accumulator.PopException('DifferentStationTooClose') | |
fmt = e.FormatProblem() | |
self.assertTrue(re.search( | |
r"parent_station of.*BULLFROG.*station.*BULLFROG_STATION.* 1.11m apart", | |
fmt), fmt) | |
self.accumulator.AssertNoMoreExceptions() | |
class BadLatLonInStopUnitTest(ValidationTestCase): | |
def runTest(self): | |
stop = transitfeed.Stop(field_dict={"stop_id": "STOP1", | |
"stop_name": "Stop one", | |
"stop_lat": "0x20", | |
"stop_lon": "140.01"}) | |
self.ExpectInvalidValue(stop, "stop_lat") | |
stop = transitfeed.Stop(field_dict={"stop_id": "STOP1", | |
"stop_name": "Stop one", | |
"stop_lat": "13.0", | |
"stop_lon": "1e2"}) | |
self.ExpectInvalidFloatValue(stop, "1e2") | |
class BadLatLonInFileUnitTest(MemoryZipTestCase): | |
def runTest(self): | |
self.SetArchiveContents( | |
"stops.txt", | |
"stop_id,stop_name,stop_lat,stop_lon\n" | |
"BEATTY_AIRPORT,Airport,0x20,140.00\n" | |
"BULLFROG,Bullfrog,48.20001,140.0123\n" | |
"STAGECOACH,Stagecoach Hotel,48.002,bogus\n") | |
schedule = self.MakeLoaderAndLoad() | |
e = self.accumulator.PopException('InvalidValue') | |
self.assertEquals(2, e.row_num) | |
self.assertEquals("stop_lat", e.column_name) | |
e = self.accumulator.PopException('InvalidValue') | |
self.assertEquals(4, e.row_num) | |
self.assertEquals("stop_lon", e.column_name) | |
self.accumulator.AssertNoMoreExceptions() | |
class LoadUnknownFileInZipTestCase(MemoryZipTestCase): | |
def runTest(self): | |
self.SetArchiveContents( | |
"stpos.txt", | |
"stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station\n" | |
"BEATTY_AIRPORT,Airport,36.868446,-116.784582,,STATION\n" | |
"STATION,Airport,36.868446,-116.784582,1,\n" | |
"BULLFROG,Bullfrog,36.88108,-116.81797,,\n" | |
"STAGECOACH,Stagecoach Hotel,36.915682,-116.751677,,\n") | |
schedule = self.MakeLoaderAndLoad() | |
e = self.accumulator.PopException('UnknownFile') | |
self.assertEquals('stpos.txt', e.file_name) | |
self.accumulator.AssertNoMoreExceptions() | |


class TabDelimitedTestCase(MemoryZipTestCase):
  def runTest(self):
    # Create an extremely corrupt file by replacing each comma with a tab,
    # ignoring csv quoting.
    for arcname in self.GetArchiveNames():
      contents = self.GetArchiveContents(arcname)
      self.SetArchiveContents(arcname, contents.replace(",", "\t"))
    schedule = self.MakeLoaderAndLoad()
    # Don't call self.accumulator.AssertNoMoreExceptions() because there are
    # lots of problems but I only care that the validator doesn't crash. In the
    # magical future the validator will stop when the csv is obviously hosed.


class RouteMemoryZipTestCase(MemoryZipTestCase):
  def assertLoadAndCheckExtraValues(self, schedule_file):
    """Load file-like schedule_file and check for extra route columns."""
    load_problems = GetTestFailureProblemReporter(
        self, ("ExpirationDate", "UnrecognizedColumn"))
    loaded_schedule = transitfeed.Loader(schedule_file,
                                         problems=load_problems,
                                         extra_validation=True).Load()
    self.assertEqual("foo", loaded_schedule.GetRoute("t")["t_foo"])
    self.assertEqual("", loaded_schedule.GetRoute("AB")["t_foo"])
    self.assertEqual("bar", loaded_schedule.GetRoute("n")["n_foo"])
    self.assertEqual("", loaded_schedule.GetRoute("AB")["n_foo"])
    # Uncomment the following lines to print the string in testExtraFileColumn
    # print repr(zipfile.ZipFile(schedule_file).read("routes.txt"))
    # self.fail()

  def testExtraObjectAttribute(self):
    """Extra columns added to an object are preserved when writing."""
    schedule = self.MakeLoaderAndLoad()
    # Add an attribute after AddRouteObject
    route_t = transitfeed.Route(short_name="T", route_type="Bus", route_id="t")
    schedule.AddRouteObject(route_t)
    route_t.t_foo = "foo"
    # Add an attribute before AddRouteObject
    route_n = transitfeed.Route(short_name="N", route_type="Bus", route_id="n")
    route_n.n_foo = "bar"
    schedule.AddRouteObject(route_n)
    saved_schedule_file = StringIO()
    schedule.WriteGoogleTransitFeed(saved_schedule_file)
    self.accumulator.AssertNoMoreExceptions()
    self.assertLoadAndCheckExtraValues(saved_schedule_file)

  def testExtraFileColumn(self):
    """Extra columns loaded from a file are preserved when writing."""
    # Uncomment the code in assertLoadAndCheckExtraValues to generate this
    # string.
    self.SetArchiveContents(
        "routes.txt",
        "route_id,agency_id,route_short_name,route_long_name,route_type,"
        "t_foo,n_foo\n"
        "AB,DTA,,Airport Bullfrog,3,,\n"
        "t,DTA,T,,3,foo,\n"
        "n,DTA,N,,3,,bar\n")
    load1_problems = GetTestFailureProblemReporter(
        self, ("ExpirationDate", "UnrecognizedColumn"))
    schedule = self.MakeLoaderAndLoad(problems=load1_problems)
    saved_schedule_file = StringIO()
    schedule.WriteGoogleTransitFeed(saved_schedule_file)
    self.assertLoadAndCheckExtraValues(saved_schedule_file)


class RouteConstructorTestCase(util.TestCase):
  def setUp(self):
    self.accumulator = RecordingProblemAccumulator(self)
    self.problems = transitfeed.ProblemReporter(self.accumulator)

  def tearDown(self):
    self.accumulator.TearDownAssertNoMoreExceptions()

  def testDefault(self):
    route = transitfeed.Route()
    repr(route)
    self.assertEqual({}, dict(route))
    route.Validate(self.problems)
    repr(route)
    self.assertEqual({}, dict(route))
    e = self.accumulator.PopException('MissingValue')
    self.assertEqual('route_id', e.column_name)
    e = self.accumulator.PopException('MissingValue')
    self.assertEqual('route_type', e.column_name)
    e = self.accumulator.PopException('InvalidValue')
    self.assertEqual('route_short_name', e.column_name)
    self.accumulator.AssertNoMoreExceptions()

  def testInitArgs(self):
    # route_type as a name
    route = transitfeed.Route(route_id='id1', short_name='22', route_type='Bus')
    repr(route)
    route.Validate(self.problems)
    self.accumulator.AssertNoMoreExceptions()
    self.assertEquals(3, route.route_type)  # converted to an int
    self.assertEquals({'route_id': 'id1', 'route_short_name': '22',
                       'route_type': '3'}, dict(route))
    # route_type as an int
    route = transitfeed.Route(route_id='i1', long_name='Twenty 2', route_type=1)
    repr(route)
    route.Validate(self.problems)
    self.accumulator.AssertNoMoreExceptions()
    self.assertEquals(1, route.route_type)  # kept as an int
    self.assertEquals({'route_id': 'i1', 'route_long_name': 'Twenty 2',
                       'route_type': '1'}, dict(route))
    # route_type as a string
    route = transitfeed.Route(route_id='id1', short_name='22', route_type='1')
    repr(route)
    route.Validate(self.problems)
    self.accumulator.AssertNoMoreExceptions()
    self.assertEquals(1, route.route_type)  # converted to an int
    self.assertEquals({'route_id': 'id1', 'route_short_name': '22',
                       'route_type': '1'}, dict(route))
    # route_type has an undefined int value
    route = transitfeed.Route(route_id='id1', short_name='22',
                              route_type='8')
    repr(route)
    route.Validate(self.problems)
    e = self.accumulator.PopException('InvalidValue')
    self.assertEqual('route_type', e.column_name)
    self.assertEqual(1, e.type)
    self.accumulator.AssertNoMoreExceptions()
    self.assertEquals({'route_id': 'id1', 'route_short_name': '22',
                       'route_type': '8'}, dict(route))
    # route_type that doesn't parse
    route = transitfeed.Route(route_id='id1', short_name='22',
                              route_type='1foo')
    repr(route)
    route.Validate(self.problems)
    e = self.accumulator.PopException('InvalidValue')
    self.assertEqual('route_type', e.column_name)
    self.accumulator.AssertNoMoreExceptions()
    self.assertEquals({'route_id': 'id1', 'route_short_name': '22',
                       'route_type': '1foo'}, dict(route))
    # agency_id
    route = transitfeed.Route(route_id='id1', short_name='22', route_type=1,
                              agency_id='myage')
    repr(route)
    route.Validate(self.problems)
    self.accumulator.AssertNoMoreExceptions()
    self.assertEquals({'route_id': 'id1', 'route_short_name': '22',
                       'route_type': '1', 'agency_id': 'myage'}, dict(route))

  def testInitArgOrder(self):
    """Call Route.__init__ without any names so a change in order is noticed."""
    route = transitfeed.Route('short', 'long name', 'Bus', 'r1', 'a1')
    self.assertEquals({'route_id': 'r1', 'route_short_name': 'short',
                       'route_long_name': 'long name',
                       'route_type': '3', 'agency_id': 'a1'}, dict(route))

  def testFieldDict(self):
    route = transitfeed.Route(field_dict={})
    self.assertEquals({}, dict(route))
    route = transitfeed.Route(field_dict={
        'route_id': 'id1', 'route_short_name': '22', 'agency_id': 'myage',
        'route_type': '1'})
    route.Validate(self.problems)
    self.accumulator.AssertNoMoreExceptions()
    self.assertEquals({'route_id': 'id1', 'route_short_name': '22',
                       'agency_id': 'myage', 'route_type': '1'}, dict(route))
    route = transitfeed.Route(field_dict={
        'route_id': 'id1', 'route_short_name': '22', 'agency_id': 'myage',
        'route_type': '1', 'my_column': 'v'})
    route.Validate(self.problems)
    self.accumulator.AssertNoMoreExceptions()
    self.assertEquals({'route_id': 'id1', 'route_short_name': '22',
                       'agency_id': 'myage', 'route_type': '1',
                       'my_column': 'v'}, dict(route))
    route._private = 0.3  # isn't copied
    route_copy = transitfeed.Route(field_dict=route)
    self.assertEquals({'route_id': 'id1', 'route_short_name': '22',
                       'agency_id': 'myage', 'route_type': '1',
                       'my_column': 'v'}, dict(route_copy))


class RouteValidationTestCase(ValidationTestCase):
  def runTest(self):
    # success case
    route = transitfeed.Route()
    route.route_id = '054C'
    route.route_short_name = '54C'
    route.route_long_name = 'South Side - North Side'
    route.route_type = 7
    route.Validate(self.problems)

    # blank short & long names
    route.route_short_name = ''
    route.route_long_name = ' '
    self.ExpectInvalidValue(route, 'route_short_name')

    # short name too long
    route.route_short_name = 'South Side'
    route.route_long_name = ''
    self.ExpectInvalidValue(route, 'route_short_name')
    route.route_short_name = 'M7bis'  # 5 characters is OK
    route.Validate(self.problems)

    # long name contains short name
    route.route_short_name = '54C'
    route.route_long_name = '54C South Side - North Side'
    self.ExpectInvalidValue(route, 'route_long_name')
    route.route_long_name = '54C(South Side - North Side)'
    self.ExpectInvalidValue(route, 'route_long_name')
    route.route_long_name = '54C-South Side - North Side'
    self.ExpectInvalidValue(route, 'route_long_name')

    # long name is same as short name
    route.route_short_name = '54C'
    route.route_long_name = '54C'
    self.ExpectInvalidValue(route, 'route_long_name')

    # route description is same as short name
    route.route_desc = '54C'
    route.route_short_name = '54C'
    route.route_long_name = ''
    self.ExpectInvalidValue(route, 'route_desc')
    route.route_desc = None

    # route description is same as long name
    route.route_desc = 'South Side - North Side'
    route.route_long_name = 'South Side - North Side'
    self.ExpectInvalidValue(route, 'route_desc')
    route.route_desc = None

    # invalid route types
    route.route_type = 8
    self.ExpectInvalidValue(route, 'route_type')
    route.route_type = -1
    self.ExpectInvalidValue(route, 'route_type')
    route.route_type = 7

    # invalid route URL
    route.route_url = 'www.example.com'
    self.ExpectInvalidValue(route, 'route_url')
    route.route_url = None

    # invalid route color
    route.route_color = 'orange'
    self.ExpectInvalidValue(route, 'route_color')
    route.route_color = None

    # invalid route text color
    route.route_text_color = 'orange'
    self.ExpectInvalidValue(route, 'route_text_color')
    route.route_text_color = None

    # missing route ID
    route.route_id = None
    self.ExpectMissingValue(route, 'route_id')
    route.route_id = '054C'

    # bad color contrast
    route.route_text_color = None  # black
    route.route_color = '0000FF'  # Bad
    self.ExpectInvalidValue(route, 'route_color')
    route.route_color = '00BF00'  # OK
    route.Validate(self.problems)
    route.route_color = '005F00'  # Bad
    self.ExpectInvalidValue(route, 'route_color')
    route.route_color = 'FF00FF'  # OK
    route.Validate(self.problems)
    route.route_text_color = 'FFFFFF'  # OK too
    route.Validate(self.problems)
    route.route_text_color = '00FF00'  # think of color-blind people!
    self.ExpectInvalidValue(route, 'route_color')
    route.route_text_color = '007F00'
    route.route_color = 'FF0000'
    self.ExpectInvalidValue(route, 'route_color')
    route.route_color = '00FFFF'  # OK
    route.Validate(self.problems)
    route.route_text_color = None  # black
    route.route_color = None  # white
    route.Validate(self.problems)
    self.accumulator.AssertNoMoreExceptions()


class ShapeValidationTestCase(ValidationTestCase):
  def ExpectFailedAdd(self, shape, lat, lon, dist, column_name, value):
    self.ExpectInvalidValueInClosure(
        column_name, value,
        lambda: shape.AddPoint(lat, lon, dist, self.problems))

  def runTest(self):
    shape = transitfeed.Shape('TEST')
    repr(shape)  # shouldn't crash
    self.ExpectOtherProblem(shape)  # no points!

    self.ExpectFailedAdd(shape, 36.905019, -116.763207, -1,
                         'shape_dist_traveled', -1)

    shape.AddPoint(36.915760, -116.751709, 0, self.problems)
    shape.AddPoint(36.905018, -116.763206, 5, self.problems)
    shape.Validate(self.problems)

    shape.shape_id = None
    self.ExpectMissingValue(shape, 'shape_id')
    shape.shape_id = 'TEST'

    self.ExpectFailedAdd(shape, 91, -116.751709, 6, 'shape_pt_lat', 91)
    self.ExpectFailedAdd(shape, -91, -116.751709, 6, 'shape_pt_lat', -91)
    self.ExpectFailedAdd(shape, 36.915760, -181, 6, 'shape_pt_lon', -181)
    self.ExpectFailedAdd(shape, 36.915760, 181, 6, 'shape_pt_lon', 181)
    self.ExpectFailedAdd(shape, 0.5, -0.5, 6, 'shape_pt_lat', 0.5)
    self.ExpectFailedAdd(shape, 0, 0, 6, 'shape_pt_lat', 0)

    # distance decreasing is bad, but staying the same is OK
    shape.AddPoint(36.905019, -116.763206, 4, self.problems)
    e = self.accumulator.PopException('InvalidValue')
    self.assertMatchesRegex('Each subsequent point', e.FormatProblem())
    self.assertMatchesRegex('distance was 5.000000.', e.FormatProblem())
    self.accumulator.AssertNoMoreExceptions()
    shape.AddPoint(36.925019, -116.764206, 6, self.problems)
    self.accumulator.AssertNoMoreExceptions()

    shapepoint = transitfeed.ShapePoint('TEST', 36.915760, -116.7156, 6, 8)
    shape.AddShapePointObjectUnsorted(shapepoint, self.problems)
    shapepoint = transitfeed.ShapePoint('TEST', 36.915760, -116.7156, 5, 10)
    shape.AddShapePointObjectUnsorted(shapepoint, self.problems)
    e = self.accumulator.PopException('InvalidValue')
    self.assertMatchesRegex('Each subsequent point', e.FormatProblem())
    self.assertMatchesRegex('distance was 8.000000.', e.FormatProblem())
    self.accumulator.AssertNoMoreExceptions()

    shapepoint = transitfeed.ShapePoint('TEST', 36.915760, -116.7156, 6, 11)
    shape.AddShapePointObjectUnsorted(shapepoint, self.problems)
    e = self.accumulator.PopException('InvalidValue')
    self.assertMatchesRegex('The sequence number 6 occurs ', e.FormatProblem())
    self.assertMatchesRegex('once in shape TEST.', e.FormatProblem())
    self.accumulator.AssertNoMoreExceptions()


class ShapePointValidationTestCase(ValidationTestCase):
  def runTest(self):
    shapepoint = transitfeed.ShapePoint('', 36.915720, -116.7156, 0, 0)
    self.ExpectMissingValueInClosure(
        'shape_id',
        lambda: shapepoint.ParseAttributes(self.problems))

    shapepoint = transitfeed.ShapePoint('T', '36.9151', '-116.7611', '00', '0')
    shapepoint.ParseAttributes(self.problems)
    e = self.accumulator.PopException('InvalidNonNegativeIntegerValue')
    self.assertMatchesRegex('not have a leading zero', e.FormatProblem())
    self.accumulator.AssertNoMoreExceptions()

    shapepoint = transitfeed.ShapePoint('T', '36.9151', '-116.7611', -1, '0')
    shapepoint.ParseAttributes(self.problems)
    e = self.accumulator.PopException('InvalidValue')
    self.assertMatchesRegex('Value should be a number', e.FormatProblem())
    self.accumulator.AssertNoMoreExceptions()

    shapepoint = transitfeed.ShapePoint('T', '0.1', '0.1', '1', '0')
    shapepoint.ParseAttributes(self.problems)
    e = self.accumulator.PopException('InvalidValue')
    self.assertMatchesRegex('too close to 0, 0,', e.FormatProblem())
    self.accumulator.AssertNoMoreExceptions()

    shapepoint = transitfeed.ShapePoint('T', '36.9151', '-116.7611', '0', '')
    shapepoint.ParseAttributes(self.problems)
    shapepoint = transitfeed.ShapePoint('T', '36.9151', '-116.7611', '0', '-1')
    shapepoint.ParseAttributes(self.problems)
    e = self.accumulator.PopException('InvalidValue')
    self.assertMatchesRegex('Invalid value -1.0', e.FormatProblem())
    self.assertMatchesRegex('should be a positive number', e.FormatProblem())
    self.accumulator.AssertNoMoreExceptions()


class FareAttributeValidationTestCase(ValidationTestCase):
  def runTest(self):
    fare = transitfeed.FareAttribute()
    fare.fare_id = "normal"
    fare.price = 1.50
    fare.currency_type = "USD"
    fare.payment_method = 0
    fare.transfers = 1
    fare.transfer_duration = 7200
    fare.Validate(self.problems)

    fare.fare_id = None
    self.ExpectMissingValue(fare, "fare_id")
    fare.fare_id = ''
    self.ExpectMissingValue(fare, "fare_id")
    fare.fare_id = "normal"

    fare.price = "1.50"
    self.ExpectInvalidValue(fare, "price")
    fare.price = 1
    fare.Validate(self.problems)
    fare.price = None
    self.ExpectMissingValue(fare, "price")
    fare.price = 0.0
    fare.Validate(self.problems)
    fare.price = -1.50
    self.ExpectInvalidValue(fare, "price")
    fare.price = 1.50

    fare.currency_type = ""
    self.ExpectMissingValue(fare, "currency_type")
    fare.currency_type = None
    self.ExpectMissingValue(fare, "currency_type")
    fare.currency_type = "usd"
    self.ExpectInvalidValue(fare, "currency_type")
    fare.currency_type = "KML"
    self.ExpectInvalidValue(fare, "currency_type")
    fare.currency_type = "USD"

    fare.payment_method = "0"
    self.ExpectInvalidValue(fare, "payment_method")
    fare.payment_method = -1
    self.ExpectInvalidValue(fare, "payment_method")
    fare.payment_method = 1
    fare.Validate(self.problems)
    fare.payment_method = 2
    self.ExpectInvalidValue(fare, "payment_method")
    fare.payment_method = None
    self.ExpectMissingValue(fare, "payment_method")
    fare.payment_method = ""
    self.ExpectMissingValue(fare, "payment_method")
    fare.payment_method = 0

    fare.transfers = "1"
    self.ExpectInvalidValue(fare, "transfers")
    fare.transfers = -1
    self.ExpectInvalidValue(fare, "transfers")
    fare.transfers = 2
    fare.Validate(self.problems)
    fare.transfers = 3
    self.ExpectInvalidValue(fare, "transfers")
    fare.transfers = None
    fare.Validate(self.problems)
    fare.transfers = 1

    fare.transfer_duration = 0
    fare.Validate(self.problems)
    fare.transfer_duration = None
    fare.Validate(self.problems)
    fare.transfer_duration = -3600
    self.ExpectInvalidValue(fare, "transfer_duration")
    fare.transfers = 0  # no transfers allowed and duration specified!
    fare.transfer_duration = 3600
    fare.Validate(self.problems)
    fare.transfers = 1
    fare.transfer_duration = "3600"
    self.ExpectInvalidValue(fare, "transfer_duration")
    fare.transfer_duration = 7200
    self.accumulator.AssertNoMoreExceptions()


class TransferObjectTestCase(ValidationTestCase):
  def testValidation(self):
    # Totally bogus data shouldn't cause a crash
    transfer = transitfeed.Transfer(field_dict={"ignored": "foo"})
    self.assertEquals(0, transfer.transfer_type)

    transfer = transitfeed.Transfer(from_stop_id="S1", to_stop_id="S2",
                                    transfer_type="1")
    self.assertEquals("S1", transfer.from_stop_id)
    self.assertEquals("S2", transfer.to_stop_id)
    self.assertEquals(1, transfer.transfer_type)
    self.assertEquals(None, transfer.min_transfer_time)
    # References to other tables aren't checked without a schedule, so this
    # validates even though from_stop_id and to_stop_id are invalid.
    transfer.Validate(self.problems)
    self.accumulator.AssertNoMoreExceptions()
    self.assertEquals("S1", transfer.from_stop_id)
    self.assertEquals("S2", transfer.to_stop_id)
    self.assertEquals(1, transfer.transfer_type)
    self.assertEquals(None, transfer.min_transfer_time)
    self.accumulator.AssertNoMoreExceptions()

    transfer = transitfeed.Transfer(field_dict={"from_stop_id": "S1",
                                                "to_stop_id": "S2",
                                                "transfer_type": "2",
                                                "min_transfer_time": "2"})
    self.assertEquals("S1", transfer.from_stop_id)
    self.assertEquals("S2", transfer.to_stop_id)
    self.assertEquals(2, transfer.transfer_type)
    self.assertEquals(2, transfer.min_transfer_time)
    transfer.Validate(self.problems)
    self.assertEquals("S1", transfer.from_stop_id)
    self.assertEquals("S2", transfer.to_stop_id)
    self.assertEquals(2, transfer.transfer_type)
    self.assertEquals(2, transfer.min_transfer_time)
    self.accumulator.AssertNoMoreExceptions()

    transfer = transitfeed.Transfer(field_dict={"from_stop_id": "S1",
                                                "to_stop_id": "S2",
                                                "transfer_type": "-4",
                                                "min_transfer_time": "2"})
    self.assertEquals("S1", transfer.from_stop_id)
    self.assertEquals("S2", transfer.to_stop_id)
    self.assertEquals("-4", transfer.transfer_type)
    self.assertEquals(2, transfer.min_transfer_time)
    transfer.Validate(self.problems)
    e = self.accumulator.PopInvalidValue("transfer_type")
    e = self.accumulator.PopException(
        "MinimumTransferTimeSetWithInvalidTransferType")
    self.assertEquals("S1", transfer.from_stop_id)
    self.assertEquals("S2", transfer.to_stop_id)
    self.assertEquals("-4", transfer.transfer_type)
    self.assertEquals(2, transfer.min_transfer_time)

    transfer = transitfeed.Transfer(field_dict={"from_stop_id": "S1",
                                                "to_stop_id": "S2",
                                                "transfer_type": "",
                                                "min_transfer_time": "-1"})
    self.assertEquals(0, transfer.transfer_type)
    transfer.Validate(self.problems)
    # It's negative *and* transfer_type is not 2
    e = self.accumulator.PopException(
        "MinimumTransferTimeSetWithInvalidTransferType")
    e = self.accumulator.PopInvalidValue("min_transfer_time")

    # Non-integer min_transfer_time with transfer_type == 2
    transfer = transitfeed.Transfer(field_dict={"from_stop_id": "S1",
                                                "to_stop_id": "S2",
                                                "transfer_type": "2",
                                                "min_transfer_time": "foo"})
    self.assertEquals("foo", transfer.min_transfer_time)
    transfer.Validate(self.problems)
    e = self.accumulator.PopInvalidValue("min_transfer_time")

    # Non-integer min_transfer_time with transfer_type != 2
    transfer = transitfeed.Transfer(field_dict={"from_stop_id": "S1",
                                                "to_stop_id": "S2",
                                                "transfer_type": "1",
                                                "min_transfer_time": "foo"})
    self.assertEquals("foo", transfer.min_transfer_time)
    transfer.Validate(self.problems)
    # It's not an integer *and* transfer_type is not 2
    e = self.accumulator.PopException(
        "MinimumTransferTimeSetWithInvalidTransferType")
    e = self.accumulator.PopInvalidValue("min_transfer_time")

    # Fractional min_transfer_time with transfer_type == 2
    transfer = transitfeed.Transfer(field_dict={"from_stop_id": "S1",
                                                "to_stop_id": "S2",
                                                "transfer_type": "2",
                                                "min_transfer_time": "2.5"})
    self.assertEquals("2.5", transfer.min_transfer_time)
    transfer.Validate(self.problems)
    e = self.accumulator.PopInvalidValue("min_transfer_time")

    # Fractional min_transfer_time with transfer_type != 2
    transfer = transitfeed.Transfer(field_dict={"from_stop_id": "S1",
                                                "to_stop_id": "S2",
                                                "transfer_type": "1",
                                                "min_transfer_time": "2.5"})
    self.assertEquals("2.5", transfer.min_transfer_time)
    transfer.Validate(self.problems)
    # It's not an integer *and* transfer_type is not 2
    e = self.accumulator.PopException(
        "MinimumTransferTimeSetWithInvalidTransferType")
    e = self.accumulator.PopInvalidValue("min_transfer_time")

    # simple successes
    transfer = transitfeed.Transfer()
    transfer.from_stop_id = "S1"
    transfer.to_stop_id = "S2"
    transfer.transfer_type = 0
    repr(transfer)  # shouldn't crash
    transfer.Validate(self.problems)
    transfer.transfer_type = 3
    transfer.Validate(self.problems)
    self.accumulator.AssertNoMoreExceptions()

    # transfer_type is out of range
    transfer.transfer_type = 4
    self.ExpectInvalidValue(transfer, "transfer_type")
    transfer.transfer_type = -1
    self.ExpectInvalidValue(transfer, "transfer_type")
    transfer.transfer_type = "text"
    self.ExpectInvalidValue(transfer, "transfer_type")
    transfer.transfer_type = 2

    # invalid min_transfer_time
    transfer.min_transfer_time = -1
    self.ExpectInvalidValue(transfer, "min_transfer_time")
    transfer.min_transfer_time = "text"
    self.ExpectInvalidValue(transfer, "min_transfer_time")
    transfer.min_transfer_time = 4 * 3600
    transfer.Validate(self.problems)
    e = self.accumulator.PopInvalidValue("min_transfer_time")
    self.assertEquals(e.type, transitfeed.TYPE_WARNING)
    transfer.min_transfer_time = 25 * 3600
    transfer.Validate(self.problems)
    e = self.accumulator.PopInvalidValue("min_transfer_time")
    self.assertEquals(e.type, transitfeed.TYPE_ERROR)
    transfer.min_transfer_time = 250
    transfer.Validate(self.problems)
    self.accumulator.AssertNoMoreExceptions()

    # missing stop ids
    transfer.from_stop_id = ""
    self.ExpectMissingValue(transfer, 'from_stop_id')
    transfer.from_stop_id = "S1"
    transfer.to_stop_id = None
    self.ExpectMissingValue(transfer, 'to_stop_id')
    transfer.to_stop_id = "S2"

    # from_stop_id and to_stop_id are present in schedule
    schedule = transitfeed.Schedule()
    # 597m apart
    stop1 = schedule.AddStop(57.5, 30.2, "stop 1")
    stop2 = schedule.AddStop(57.5, 30.21, "stop 2")
    transfer = transitfeed.Transfer(schedule=schedule)
    transfer.from_stop_id = stop1.stop_id
    transfer.to_stop_id = stop2.stop_id
    transfer.transfer_type = 2
    transfer.min_transfer_time = 600
    repr(transfer)  # shouldn't crash
    transfer.Validate(self.problems)
    self.accumulator.AssertNoMoreExceptions()

    # only from_stop_id is present in schedule
    schedule = transitfeed.Schedule()
    stop1 = schedule.AddStop(57.5, 30.2, "stop 1")
    transfer = transitfeed.Transfer(schedule=schedule)
    transfer.from_stop_id = stop1.stop_id
    transfer.to_stop_id = "unexist"
    transfer.transfer_type = 2
    transfer.min_transfer_time = 250
    self.ExpectInvalidValue(transfer, 'to_stop_id')
    transfer.from_stop_id = "unexist"
    transfer.to_stop_id = stop1.stop_id
    self.ExpectInvalidValue(transfer, "from_stop_id")
    self.accumulator.AssertNoMoreExceptions()

    # A Transfer can only be added to a schedule once because _schedule is set
    transfer = transitfeed.Transfer()
    transfer.from_stop_id = stop1.stop_id
    transfer.to_stop_id = stop1.stop_id
    schedule.AddTransferObject(transfer)
    self.assertRaises(AssertionError, schedule.AddTransferObject, transfer)

  def testValidationSpeedDistanceAllTransferTypes(self):
    schedule = transitfeed.Schedule()
    stop1 = schedule.AddStop(1, 0, "stop 1")
    stop2 = schedule.AddStop(0, 1, "stop 2")
    transfer = transitfeed.Transfer(schedule=schedule)
    transfer.from_stop_id = stop1.stop_id
    transfer.to_stop_id = stop2.stop_id
    for transfer_type in [0, 1, 2, 3]:
      transfer.transfer_type = transfer_type

      # from_stop_id and to_stop_id are present in schedule
      # and a bit far away (should be warning)
      # 2303m apart
      stop1.stop_lat = 57.5
      stop1.stop_lon = 30.32
      stop2.stop_lat = 57.52
      stop2.stop_lon = 30.33
      transfer.min_transfer_time = 2500
      repr(transfer)  # shouldn't crash
      transfer.Validate(self.problems)
      if transfer_type != 2:
        e = self.accumulator.PopException(
            "MinimumTransferTimeSetWithInvalidTransferType")
        self.assertEquals(e.transfer_type, transfer.transfer_type)
      e = self.accumulator.PopException('TransferDistanceTooBig')
      self.assertEquals(e.type, transitfeed.TYPE_WARNING)
      self.assertEquals(e.from_stop_id, stop1.stop_id)
      self.assertEquals(e.to_stop_id, stop2.stop_id)
      self.accumulator.AssertNoMoreExceptions()

      # from_stop_id and to_stop_id are present in schedule
      # and too far away (should be error)
      # 11140m apart
      stop1.stop_lat = 57.5
      stop1.stop_lon = 30.32
      stop2.stop_lat = 57.4
      stop2.stop_lon = 30.33
      transfer.min_transfer_time = 3600
      repr(transfer)  # shouldn't crash
      transfer.Validate(self.problems)
      if transfer_type != 2:
        e = self.accumulator.PopException(
            "MinimumTransferTimeSetWithInvalidTransferType")
        self.assertEquals(e.transfer_type, transfer.transfer_type)
      e = self.accumulator.PopException('TransferDistanceTooBig')
      self.assertEquals(e.type, transitfeed.TYPE_ERROR)
      self.assertEquals(e.from_stop_id, stop1.stop_id)
      self.assertEquals(e.to_stop_id, stop2.stop_id)
      e = self.accumulator.PopException('TransferWalkingSpeedTooFast')
      self.assertEquals(e.type, transitfeed.TYPE_WARNING)
      self.assertEquals(e.from_stop_id, stop1.stop_id)
      self.assertEquals(e.to_stop_id, stop2.stop_id)
      self.accumulator.AssertNoMoreExceptions()

  def testSmallTransferTimeTriggersWarning(self):
    # from_stop_id and to_stop_id are present in schedule
    # and transfer time is too small
    schedule = transitfeed.Schedule()
    # 298m apart
    stop1 = schedule.AddStop(57.5, 30.2, "stop 1")
    stop2 = schedule.AddStop(57.5, 30.205, "stop 2")
    transfer = transitfeed.Transfer(schedule=schedule)
    transfer.from_stop_id = stop1.stop_id
    transfer.to_stop_id = stop2.stop_id
    transfer.transfer_type = 2
    transfer.min_transfer_time = 1
    repr(transfer)  # shouldn't crash
    transfer.Validate(self.problems)
    e = self.accumulator.PopException('TransferWalkingSpeedTooFast')
    self.assertEquals(e.type, transitfeed.TYPE_WARNING)
    self.assertEquals(e.from_stop_id, stop1.stop_id)
    self.assertEquals(e.to_stop_id, stop2.stop_id)
    self.accumulator.AssertNoMoreExceptions()

  def testVeryCloseStationsDoNotTriggerWarning(self):
    # from_stop_id and to_stop_id are present in schedule
    # and transfer time is too small, but the stations
    # are very close together.
    schedule = transitfeed.Schedule()
    # 239m apart
    stop1 = schedule.AddStop(57.5, 30.2, "stop 1")
    stop2 = schedule.AddStop(57.5, 30.204, "stop 2")
    transfer = transitfeed.Transfer(schedule=schedule)
    transfer.from_stop_id = stop1.stop_id
    transfer.to_stop_id = stop2.stop_id
    transfer.transfer_type = 2
    transfer.min_transfer_time = 1
    repr(transfer)  # shouldn't crash
    transfer.Validate(self.problems)
    self.accumulator.AssertNoMoreExceptions()

  def testCustomAttribute(self):
    """Add unknown attributes to a Transfer and make sure they are saved."""
    transfer = transitfeed.Transfer()
    transfer.attr1 = "foo1"
    schedule = self.SimpleSchedule()
    transfer.to_stop_id = "stop1"
    transfer.from_stop_id = "stop1"
    schedule.AddTransferObject(transfer)
    transfer.attr2 = "foo2"
    saved_schedule_file = StringIO()
    schedule.WriteGoogleTransitFeed(saved_schedule_file)
    self.accumulator.AssertNoMoreExceptions()

    # Ignore NoServiceExceptions error to keep the test simple
    load_problems = GetTestFailureProblemReporter(
        self, ("ExpirationDate", "UnrecognizedColumn", "NoServiceExceptions"))
    loaded_schedule = transitfeed.Loader(saved_schedule_file,
                                         problems=load_problems,
                                         extra_validation=True).Load()
    transfers = loaded_schedule.GetTransferList()
    self.assertEquals(1, len(transfers))
    self.assertEquals("foo1", transfers[0].attr1)
    self.assertEquals("foo1", transfers[0]["attr1"])
    self.assertEquals("foo2", transfers[0].attr2)
    self.assertEquals("foo2", transfers[0]["attr2"])

  def testDuplicateId(self):
    schedule = self.SimpleSchedule()
    transfer1 = transitfeed.Transfer(from_stop_id="stop1", to_stop_id="stop2")
    schedule.AddTransferObject(transfer1)
    transfer2 = transitfeed.Transfer(field_dict=transfer1)
    transfer2.transfer_type = 3
    schedule.AddTransferObject(transfer2)
    transfer2.Validate()
    e = self.accumulator.PopException('DuplicateID')
    self.assertEquals('(from_stop_id, to_stop_id)', e.column_name)
    self.assertEquals('(stop1, stop2)', e.value)
    self.assertTrue(e.IsWarning())
    self.accumulator.AssertNoMoreExceptions()
    # Check that both transfers were kept
    self.assertEquals(transfer1, schedule.GetTransferList()[0])
    self.assertEquals(transfer2, schedule.GetTransferList()[1])

    # Adding a transfer with a different ID shouldn't cause a problem report.
    transfer3 = transitfeed.Transfer(from_stop_id="stop1", to_stop_id="stop3")
    schedule.AddTransferObject(transfer3)
    self.assertEquals(3, len(schedule.GetTransferList()))
    self.accumulator.AssertNoMoreExceptions()

    # GetTransferIter should return all Transfers
    transfer4 = transitfeed.Transfer(from_stop_id="stop1")
    schedule.AddTransferObject(transfer4)
    self.assertEquals(
        ",stop2,stop2,stop3",
        ",".join(sorted(t["to_stop_id"] for t in schedule.GetTransferIter())))
    self.accumulator.AssertNoMoreExceptions()


class TransferValidationTestCase(MemoryZipTestCase):
  """Integration test for transfers."""

  def testInvalidStopIds(self):
    self.SetArchiveContents(
        "transfers.txt",
        "from_stop_id,to_stop_id,transfer_type\n"
        "DOESNOTEXIST,BULLFROG,2\n"
        ",BULLFROG,2\n"
        "BULLFROG,,2\n"
        "BULLFROG,DOESNOTEXISTEITHER,2\n"
        "DOESNOTEXIT,DOESNOTEXISTEITHER,2\n"
        ",,2\n")
    schedule = self.MakeLoaderAndLoad()
    # First row
    e = self.accumulator.PopInvalidValue('from_stop_id')
    # Second row
    e = self.accumulator.PopMissingValue('from_stop_id')
    # Third row
    e = self.accumulator.PopMissingValue('to_stop_id')
    # Fourth row
    e = self.accumulator.PopInvalidValue('to_stop_id')
    # Fifth row
    e = self.accumulator.PopInvalidValue('from_stop_id')
    e = self.accumulator.PopInvalidValue('to_stop_id')
    # Sixth row
    e = self.accumulator.PopMissingValue('from_stop_id')
    e = self.accumulator.PopMissingValue('to_stop_id')
    self.accumulator.AssertNoMoreExceptions()

  def testDuplicateTransfer(self):
    self.AppendToArchiveContents(
        "stops.txt",
        "BEATTY_AIRPORT_HANGER,Airport Hanger,36.868178,-116.784915\n"
        "BEATTY_AIRPORT_34,Runway 34,36.85352,-116.786316\n")
    self.AppendToArchiveContents(
        "trips.txt",
        "AB,FULLW,AIR1\n")
    self.AppendToArchiveContents(
        "stop_times.txt",
        "AIR1,7:00:00,7:00:00,BEATTY_AIRPORT_HANGER,1\n"
        "AIR1,7:05:00,7:05:00,BEATTY_AIRPORT_34,2\n"
        "AIR1,7:10:00,7:10:00,BEATTY_AIRPORT_HANGER,3\n")
    self.SetArchiveContents(
        "transfers.txt",
        "from_stop_id,to_stop_id,transfer_type\n"
        "BEATTY_AIRPORT,BEATTY_AIRPORT_HANGER,0\n"
        "BEATTY_AIRPORT,BEATTY_AIRPORT_HANGER,3")
    schedule = self.MakeLoaderAndLoad()
    e = self.accumulator.PopException('DuplicateID')
    self.assertEquals('(from_stop_id, to_stop_id)', e.column_name)
    self.assertEquals('(BEATTY_AIRPORT, BEATTY_AIRPORT_HANGER)', e.value)
    self.assertTrue(e.IsWarning())
    self.assertEquals('transfers.txt', e.file_name)
    self.assertEquals(3, e.row_num)
    self.accumulator.AssertNoMoreExceptions()

    saved_schedule_file = StringIO()
    schedule.WriteGoogleTransitFeed(saved_schedule_file)
    self.accumulator.AssertNoMoreExceptions()
    load_problems = GetTestFailureProblemReporter(
        self, ("ExpirationDate", "DuplicateID"))
    loaded_schedule = transitfeed.Loader(saved_schedule_file,
                                         problems=load_problems,
                                         extra_validation=True).Load()
    self.assertEquals(
        [0, 3],
        [int(t.transfer_type) for t in loaded_schedule.GetTransferIter()])

class ServicePeriodValidationTestCase(ValidationTestCase):
  def runTest(self):
    # success case
    period = transitfeed.ServicePeriod()
    repr(period)  # shouldn't crash
    period.service_id = 'WEEKDAY'
    period.start_date = '20070101'
    period.end_date = '20071231'
    period.day_of_week[0] = True
    repr(period)  # shouldn't crash
    period.Validate(self.problems)

    # missing start_date. If one of start_date or end_date is None then
    # ServicePeriod.Validate assumes the required column is missing and already
    # generated an error. Instead set it to an empty string, such as when the
    # csv cell is empty. See also comment in ServicePeriod.Validate.
    period.start_date = ''
    self.ExpectMissingValue(period, 'start_date')
    period.start_date = '20070101'

    # missing end_date
    period.end_date = ''
    self.ExpectMissingValue(period, 'end_date')
    period.end_date = '20071231'

    # invalid start_date
    period.start_date = '2007-01-01'
    self.ExpectInvalidValue(period, 'start_date')
    period.start_date = '20070101'

    # impossible start_date
    period.start_date = '20070229'
    self.ExpectInvalidValue(period, 'start_date')
    period.start_date = '20070101'

    # invalid end_date
    period.end_date = '2007/12/31'
    self.ExpectInvalidValue(period, 'end_date')
    period.end_date = '20071231'

    # start & end dates out of order
    period.end_date = '20060101'
    self.ExpectInvalidValue(period, 'end_date')
    period.end_date = '20071231'

    # no service in period
    period.day_of_week[0] = False
    self.ExpectOtherProblem(period)
    period.day_of_week[0] = True

    # invalid exception date
    period.SetDateHasService('2007', False)
    self.ExpectInvalidValue(period, 'date', '2007')
    period.ResetDateToNormalService('2007')

    period2 = transitfeed.ServicePeriod(
        field_list=['serviceid1', '20060101', '20071231', '1', '0', 'h', '1',
                    '1', '1', '1'])
    self.ExpectInvalidValue(period2, 'wednesday', 'h')
    repr(period)  # shouldn't crash

  def testHasExceptions(self):
    # A new ServicePeriod object has no exceptions
    period = transitfeed.ServicePeriod()
    self.assertFalse(period.HasExceptions())

    # Only regular service, no exceptions
    period.service_id = 'WEEKDAY'
    period.start_date = '20070101'
    period.end_date = '20071231'
    period.day_of_week[0] = True
    self.assertFalse(period.HasExceptions())

    # Regular service + removed service exception
    period.SetDateHasService('20070101', False)
    self.assertTrue(period.HasExceptions())

    # Regular service + added service exception
    period.SetDateHasService('20070101', True)
    self.assertTrue(period.HasExceptions())

    # Only added service exception
    period = transitfeed.ServicePeriod()
    period.SetDateHasService('20070101', True)
    self.assertTrue(period.HasExceptions())

    # Only removed service exception
    period = transitfeed.ServicePeriod()
    period.SetDateHasService('20070101', False)
    self.assertTrue(period.HasExceptions())

class ServicePeriodDateRangeTestCase(ValidationTestCase):
  def runTest(self):
    period = transitfeed.ServicePeriod()
    period.service_id = 'WEEKDAY'
    period.start_date = '20070101'
    period.end_date = '20071231'
    period.SetWeekdayService(True)
    period.SetDateHasService('20071231', False)
    period.Validate(self.problems)
    self.assertEqual(('20070101', '20071231'), period.GetDateRange())

    period2 = transitfeed.ServicePeriod()
    period2.service_id = 'HOLIDAY'
    period2.SetDateHasService('20071225', True)
    period2.SetDateHasService('20080101', True)
    period2.SetDateHasService('20080102', False)
    period2.Validate(self.problems)
    self.assertEqual(('20071225', '20080101'), period2.GetDateRange())

    period2.start_date = '20071201'
    period2.end_date = '20071225'
    period2.Validate(self.problems)
    self.assertEqual(('20071201', '20080101'), period2.GetDateRange())

    period3 = transitfeed.ServicePeriod()
    self.assertEqual((None, None), period3.GetDateRange())

    period4 = transitfeed.ServicePeriod()
    period4.service_id = 'halloween'
    period4.SetDateHasService('20051031', True)
    self.assertEqual(('20051031', '20051031'), period4.GetDateRange())
    period4.Validate(self.problems)

    schedule = transitfeed.Schedule(problem_reporter=self.problems)
    self.assertEqual((None, None), schedule.GetDateRange())
    schedule.AddServicePeriodObject(period)
    self.assertEqual(('20070101', '20071231'), schedule.GetDateRange())
    schedule.AddServicePeriodObject(period2)
    self.assertEqual(('20070101', '20080101'), schedule.GetDateRange())
    schedule.AddServicePeriodObject(period4)
    self.assertEqual(('20051031', '20080101'), schedule.GetDateRange())
    self.accumulator.AssertNoMoreExceptions()

class NoServiceExceptionsTestCase(MemoryZipTestCase):

  def testNoCalendarDates(self):
    self.RemoveArchive("calendar_dates.txt")
    self.MakeLoaderAndLoad()
    e = self.accumulator.PopException("NoServiceExceptions")
    self.accumulator.AssertNoMoreExceptions()

  def testNoExceptionsWhenFeedActiveForShortPeriodOfTime(self):
    self.SetArchiveContents(
        "calendar.txt",
        "service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday,"
        "start_date,end_date\n"
        "FULLW,1,1,1,1,1,1,1,20070101,20070630\n"
        "WE,0,0,0,0,0,1,1,20070101,20070331\n")
    self.RemoveArchive("calendar_dates.txt")
    self.MakeLoaderAndLoad()
    self.accumulator.AssertNoMoreExceptions()

  def testEmptyCalendarDates(self):
    self.SetArchiveContents(
        "calendar_dates.txt",
        "")
    self.MakeLoaderAndLoad()
    e = self.accumulator.PopException("EmptyFile")
    e = self.accumulator.PopException("NoServiceExceptions")
    self.accumulator.AssertNoMoreExceptions()

  def testCalendarDatesWithHeaderOnly(self):
    self.SetArchiveContents(
        "calendar_dates.txt",
        "service_id,date,exception_type\n")
    self.MakeLoaderAndLoad()
    e = self.accumulator.PopException("NoServiceExceptions")
    self.accumulator.AssertNoMoreExceptions()

  def testCalendarDatesWithAddedServiceException(self):
    self.SetArchiveContents(
        "calendar_dates.txt",
        "service_id,date,exception_type\n"
        "FULLW,20070101,1\n")
    self.MakeLoaderAndLoad()
    self.accumulator.AssertNoMoreExceptions()

  def testCalendarDatesWithRemovedServiceException(self):
    self.SetArchiveContents(
        "calendar_dates.txt",
        "service_id,date,exception_type\n"
        "FULLW,20070101,2\n")
    self.MakeLoaderAndLoad()
    self.accumulator.AssertNoMoreExceptions()

class ServicePeriodTestCase(util.TestCase):
  def testActive(self):
    """Test IsActiveOn and ActiveDates"""
    period = transitfeed.ServicePeriod()
    period.service_id = 'WEEKDAY'
    period.start_date = '20071226'
    period.end_date = '20071231'
    period.SetWeekdayService(True)
    period.SetDateHasService('20071230', True)
    period.SetDateHasService('20071231', False)
    period.SetDateHasService('20080102', True)
    #      December 2007
    #  Su Mo Tu We Th Fr Sa
    #  23 24 25 26 27 28 29
    #  30 31

    # Some tests have named arguments and others do not to ensure that any
    # (possibly unwanted) changes to the API get caught

    # calendar_date exceptions near start date
    self.assertFalse(period.IsActiveOn(date='20071225'))
    self.assertFalse(period.IsActiveOn(date='20071225',
                                       date_object=date(2007, 12, 25)))
    self.assertTrue(period.IsActiveOn(date='20071226'))
    self.assertTrue(period.IsActiveOn(date='20071226',
                                      date_object=date(2007, 12, 26)))

    # calendar_date exceptions near end date
    self.assertTrue(period.IsActiveOn('20071230'))
    self.assertTrue(period.IsActiveOn('20071230', date(2007, 12, 30)))
    self.assertFalse(period.IsActiveOn('20071231'))
    self.assertFalse(period.IsActiveOn('20071231', date(2007, 12, 31)))

    # date just outside range, both weekday and an exception
    self.assertFalse(period.IsActiveOn('20080101'))
    self.assertFalse(period.IsActiveOn('20080101', date(2008, 1, 1)))
    self.assertTrue(period.IsActiveOn('20080102'))
    self.assertTrue(period.IsActiveOn('20080102', date(2008, 1, 2)))

    self.assertEquals(period.ActiveDates(),
                      ['20071226', '20071227', '20071228', '20071230',
                       '20080102'])

    # Test of period without start_date, end_date
    period_dates = transitfeed.ServicePeriod()
    period_dates.SetDateHasService('20071230', True)
    period_dates.SetDateHasService('20071231', False)

    self.assertFalse(period_dates.IsActiveOn(date='20071229'))
    self.assertFalse(period_dates.IsActiveOn(date='20071229',
                                             date_object=date(2007, 12, 29)))
    self.assertTrue(period_dates.IsActiveOn('20071230'))
    self.assertTrue(period_dates.IsActiveOn('20071230', date(2007, 12, 30)))
    self.assertFalse(period_dates.IsActiveOn('20071231'))
    self.assertFalse(period_dates.IsActiveOn('20071231', date(2007, 12, 31)))
    self.assertEquals(period_dates.ActiveDates(), ['20071230'])

    # Test with an invalid ServicePeriod; only one of start_date and end_date
    # is set
    period_no_end = transitfeed.ServicePeriod()
    period_no_end.start_date = '20071226'
    self.assertFalse(period_no_end.IsActiveOn(date='20071231'))
    self.assertFalse(period_no_end.IsActiveOn(date='20071231',
                                              date_object=date(2007, 12, 31)))
    self.assertEquals(period_no_end.ActiveDates(), [])

    period_no_start = transitfeed.ServicePeriod()
    period_no_start.end_date = '20071230'
    self.assertFalse(period_no_start.IsActiveOn('20071229'))
    self.assertFalse(period_no_start.IsActiveOn('20071229', date(2007, 12, 29)))
    self.assertEquals(period_no_start.ActiveDates(), [])

    period_empty = transitfeed.ServicePeriod()
    self.assertFalse(period_empty.IsActiveOn('20071231'))
    self.assertFalse(period_empty.IsActiveOn('20071231', date(2007, 12, 31)))
    self.assertEquals(period_empty.ActiveDates(), [])
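The activity rule exercised above — a date is served when it falls within [start_date, end_date] with its weekday flag set, unless a calendar_dates exception overrides it — can be restated independently of the transitfeed API. `is_active` below is a hypothetical sketch, not transitfeed's implementation:

```python
from datetime import date

def is_active(d, start, end, weekdays, exceptions):
  """Hypothetical restatement of the GTFS calendar rule.

  d, start, end: datetime.date; weekdays: set of ints (Mon=0 .. Sun=6);
  exceptions: dict mapping date -> True (service added) or False (removed).
  """
  if d in exceptions:               # calendar_dates.txt always wins
    return exceptions[d]
  if start is None or end is None:  # no calendar row: exceptions only
    return False
  return start <= d <= end and d.weekday() in weekdays

# Mirror the December 2007 fixture: weekday service Dec 26-31, with
# Dec 30 (a Sunday) added, Dec 31 removed, and Jan 2 added.
start, end = date(2007, 12, 26), date(2007, 12, 31)
weekdays = set(range(5))
exceptions = {date(2007, 12, 30): True,
              date(2007, 12, 31): False,
              date(2008, 1, 2): True}

assert not is_active(date(2007, 12, 25), start, end, weekdays, exceptions)
assert is_active(date(2007, 12, 30), start, end, weekdays, exceptions)
assert not is_active(date(2007, 12, 31), start, end, weekdays, exceptions)
assert is_active(date(2008, 1, 2), start, end, weekdays, exceptions)
```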

class GetServicePeriodsActiveEachDateTestCase(util.TestCase):
  def testEmpty(self):
    schedule = transitfeed.Schedule()
    self.assertEquals(
        [],
        schedule.GetServicePeriodsActiveEachDate(date(2009, 1, 1),
                                                 date(2009, 1, 1)))
    self.assertEquals(
        [(date(2008, 12, 31), []), (date(2009, 1, 1), [])],
        schedule.GetServicePeriodsActiveEachDate(date(2008, 12, 31),
                                                 date(2009, 1, 2)))

  def testOneService(self):
    schedule = transitfeed.Schedule()
    sp1 = transitfeed.ServicePeriod()
    sp1.service_id = "sp1"
    sp1.SetDateHasService("20090101")
    sp1.SetDateHasService("20090102")
    schedule.AddServicePeriodObject(sp1)
    self.assertEquals(
        [],
        schedule.GetServicePeriodsActiveEachDate(date(2009, 1, 1),
                                                 date(2009, 1, 1)))
    self.assertEquals(
        [(date(2008, 12, 31), []), (date(2009, 1, 1), [sp1])],
        schedule.GetServicePeriodsActiveEachDate(date(2008, 12, 31),
                                                 date(2009, 1, 2)))

  def testTwoService(self):
    schedule = transitfeed.Schedule()
    sp1 = transitfeed.ServicePeriod()
    sp1.service_id = "sp1"
    sp1.SetDateHasService("20081231")
    sp1.SetDateHasService("20090101")
    schedule.AddServicePeriodObject(sp1)
    sp2 = transitfeed.ServicePeriod()
    sp2.service_id = "sp2"
    sp2.SetStartDate("20081201")
    sp2.SetEndDate("20081231")
    sp2.SetWeekendService()
    sp2.SetWeekdayService()
    schedule.AddServicePeriodObject(sp2)
    self.assertEquals(
        [],
        schedule.GetServicePeriodsActiveEachDate(date(2009, 1, 1),
                                                 date(2009, 1, 1)))
    date_services = schedule.GetServicePeriodsActiveEachDate(
        date(2008, 12, 31), date(2009, 1, 2))
    self.assertEquals(
        [date(2008, 12, 31), date(2009, 1, 1)], [d for d, _ in date_services])
    self.assertEquals(set([sp1, sp2]), set(date_services[0][1]))
    self.assertEquals([sp1], date_services[1][1])
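These tests pin down a half-open date range: the end date itself is excluded, so a range of one day yields an empty list. The expansion can be sketched without the transitfeed API (`periods_active_each_date` and the `is_active_fn` callables are hypothetical stand-ins for `Schedule.GetServicePeriodsActiveEachDate` and `ServicePeriod.IsActiveOn`):

```python
from datetime import date, timedelta

def periods_active_each_date(start, end, periods):
  """For each date d with start <= d < end, list the periods active on d.

  periods: iterable of (name, is_active_fn) pairs. Hypothetical sketch.
  """
  out = []
  d = start
  while d < end:  # half-open range: end itself is excluded
    out.append((d, [name for name, active in periods if active(d)]))
    d += timedelta(days=1)
  return out

# Mirror testTwoService: sp1 runs Dec 31 and Jan 1; sp2 runs all of December.
sp1 = ('sp1', lambda d: d in (date(2008, 12, 31), date(2009, 1, 1)))
sp2 = ('sp2', lambda d: date(2008, 12, 1) <= d <= date(2008, 12, 31))

assert periods_active_each_date(date(2009, 1, 1), date(2009, 1, 1),
                                [sp1, sp2]) == []
assert periods_active_each_date(date(2008, 12, 31), date(2009, 1, 2),
                                [sp1, sp2]) == [
    (date(2008, 12, 31), ['sp1', 'sp2']),
    (date(2009, 1, 1), ['sp1'])]
```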

class TripMemoryZipTestCase(MemoryZipTestCase):
  def assertLoadAndCheckExtraValues(self, schedule_file):
    """Load file-like schedule_file and check for extra trip columns."""
    load_problems = GetTestFailureProblemReporter(
        self, ("ExpirationDate", "UnrecognizedColumn"))
    loaded_schedule = transitfeed.Loader(schedule_file,
                                         problems=load_problems,
                                         extra_validation=True).Load()
    self.assertEqual("foo", loaded_schedule.GetTrip("AB1")["t_foo"])
    self.assertEqual("", loaded_schedule.GetTrip("AB2")["t_foo"])
    self.assertEqual("", loaded_schedule.GetTrip("AB1")["n_foo"])
    self.assertEqual("bar", loaded_schedule.GetTrip("AB2")["n_foo"])
    # Uncomment the following lines to print the string in testExtraFileColumn
    # print repr(zipfile.ZipFile(schedule_file).read("trips.txt"))
    # self.fail()

  def testExtraObjectAttribute(self):
    """Extra columns added to an object are preserved when writing."""
    schedule = self.MakeLoaderAndLoad()
    # Add an attribute to an existing trip
    trip1 = schedule.GetTrip("AB1")
    trip1.t_foo = "foo"
    # Make a copy of trip_id=AB1 and add an attribute before AddTripObject
    trip2 = transitfeed.Trip(field_dict=trip1)
    trip2.trip_id = "AB2"
    trip2.t_foo = ""
    trip2.n_foo = "bar"
    schedule.AddTripObject(trip2)
    trip2.AddStopTime(stop=schedule.GetStop("BULLFROG"), stop_time="09:00:00")
    trip2.AddStopTime(stop=schedule.GetStop("STAGECOACH"), stop_time="09:30:00")
    saved_schedule_file = StringIO()
    schedule.WriteGoogleTransitFeed(saved_schedule_file)
    self.accumulator.AssertNoMoreExceptions()

    self.assertLoadAndCheckExtraValues(saved_schedule_file)

  def testExtraFileColumn(self):
    """Extra columns loaded from a file are preserved when writing."""
    # Uncomment the code in assertLoadAndCheckExtraValues to generate this
    # string.
    self.SetArchiveContents(
        "trips.txt",
        "route_id,service_id,trip_id,t_foo,n_foo\n"
        "AB,FULLW,AB1,foo,\n"
        "AB,FULLW,AB2,,bar\n")
    self.AppendToArchiveContents(
        "stop_times.txt",
        "AB2,09:00:00,09:00:00,BULLFROG,1\n"
        "AB2,09:30:00,09:30:00,STAGECOACH,2\n")
    load1_problems = GetTestFailureProblemReporter(
        self, ("ExpirationDate", "UnrecognizedColumn"))
    schedule = self.MakeLoaderAndLoad(problems=load1_problems)
    saved_schedule_file = StringIO()
    schedule.WriteGoogleTransitFeed(saved_schedule_file)
    self.assertLoadAndCheckExtraValues(saved_schedule_file)

class TripValidationTestCase(ValidationTestCase):
  def runTest(self):
    trip = transitfeed.Trip()
    repr(trip)  # shouldn't crash

    schedule = self.SimpleSchedule()
    trip = transitfeed.Trip()
    repr(trip)  # shouldn't crash

    trip = transitfeed.Trip()
    trip.trip_headsign = '\xBA\xDF\x0D'  # Not valid ascii or utf8
    repr(trip)  # shouldn't crash

    trip.route_id = '054C'
    trip.service_id = 'WEEK'
    trip.trip_id = '054C-00'
    trip.trip_headsign = 'via Polish Hill'
    trip.direction_id = '0'
    trip.block_id = None
    trip.shape_id = None
    trip.Validate(self.problems)
    self.accumulator.AssertNoMoreExceptions()
    repr(trip)  # shouldn't crash

    # missing route ID
    trip.route_id = None
    self.ExpectMissingValue(trip, 'route_id')
    trip.route_id = '054C'

    # missing service ID
    trip.service_id = None
    self.ExpectMissingValue(trip, 'service_id')
    trip.service_id = 'WEEK'

    # missing trip ID
    trip.trip_id = None
    self.ExpectMissingValue(trip, 'trip_id')
    trip.trip_id = '054C-00'

    # invalid direction ID
    trip.direction_id = 'NORTH'
    self.ExpectInvalidValue(trip, 'direction_id')
    trip.direction_id = '0'

    # AddTripObject validates that route_id, service_id, .... are found in the
    # schedule. The Validate calls made by self.Expect... above can't make this
    # check because trip is not in a schedule.
    trip.route_id = '054C-notfound'
    schedule.AddTripObject(trip, self.problems, True)
    e = self.accumulator.PopException('InvalidValue')
    self.assertEqual('route_id', e.column_name)
    self.accumulator.AssertNoMoreExceptions()
    trip.route_id = '054C'

    # Make sure calling Trip.Validate validates that route_id and service_id
    # are found in the schedule.
    trip.service_id = 'WEEK-notfound'
    trip.Validate(self.problems)
    e = self.accumulator.PopException('InvalidValue')
    self.assertEqual('service_id', e.column_name)
    self.accumulator.AssertNoMoreExceptions()
    trip.service_id = 'WEEK'
    trip.Validate(self.problems)
    self.accumulator.AssertNoMoreExceptions()

    # expect no problems for non-overlapping periods
    trip.AddFrequency("06:00:00", "12:00:00", 600)
    trip.AddFrequency("01:00:00", "02:00:00", 1200)
    trip.AddFrequency("04:00:00", "05:00:00", 1000)
    trip.AddFrequency("12:00:00", "19:00:00", 700)
    trip.Validate(self.problems)
    self.accumulator.AssertNoMoreExceptions()
    trip.ClearFrequencies()

    # overlapping headway periods
    trip.AddFrequency("00:00:00", "12:00:00", 600)
    trip.AddFrequency("06:00:00", "18:00:00", 1200)
    self.ExpectOtherProblem(trip)
    trip.ClearFrequencies()
    trip.AddFrequency("12:00:00", "20:00:00", 600)
    trip.AddFrequency("06:00:00", "18:00:00", 1200)
    self.ExpectOtherProblem(trip)
    trip.ClearFrequencies()
    trip.AddFrequency("06:00:00", "12:00:00", 600)
    trip.AddFrequency("00:00:00", "25:00:00", 1200)
    self.ExpectOtherProblem(trip)
    trip.ClearFrequencies()
    trip.AddFrequency("00:00:00", "20:00:00", 600)
    trip.AddFrequency("06:00:00", "18:00:00", 1200)
    self.ExpectOtherProblem(trip)
    trip.ClearFrequencies()
    self.accumulator.AssertNoMoreExceptions()
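The overlap cases above treat back-to-back periods (one ending exactly when the next starts) as valid, which makes the ranges effectively half-open. Detecting any collision can be sketched with a sort plus adjacent-pair comparison; if any two intervals overlap, some adjacent pair in sorted order does too (`find_overlaps` is a hypothetical stand-in for the check Trip.Validate performs):

```python
def find_overlaps(periods):
  """Return adjacent pairs of [start, end) second-ranges that overlap.

  periods: list of (start_secs, end_secs) tuples. Hypothetical sketch of
  the frequency-overlap check, not the transitfeed implementation.
  """
  overlaps = []
  ordered = sorted(periods)
  for (s1, e1), (s2, e2) in zip(ordered, ordered[1:]):
    if s2 < e1:  # next period starts before the previous one ends
      overlaps.append(((s1, e1), (s2, e2)))
  return overlaps

HOUR = 3600
# Back-to-back periods (12:00 ends exactly as the next begins) are fine...
assert find_overlaps([(6 * HOUR, 12 * HOUR), (12 * HOUR, 19 * HOUR)]) == []
# ...but 00:00-12:00 and 06:00-18:00 collide.
assert find_overlaps([(0, 12 * HOUR), (6 * HOUR, 18 * HOUR)]) != []
```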

class FrequencyValidationTestCase(ValidationTestCase):
  def setUp(self):
    ValidationTestCase.setUp(self)
    self.schedule = self.SimpleSchedule()
    trip = transitfeed.Trip()
    trip.route_id = '054C'
    trip.service_id = 'WEEK'
    trip.trip_id = '054C-00'
    trip.trip_headsign = 'via Polish Hill'
    trip.direction_id = '0'
    trip.block_id = None
    trip.shape_id = None
    self.schedule.AddTripObject(trip, self.problems, True)
    self.trip = trip

  def testNonOverlappingPeriods(self):
    headway_period1 = transitfeed.Frequency({'trip_id': '054C-00',
                                             'start_time': '06:00:00',
                                             'end_time': '12:00:00',
                                             'headway_secs': 600,
                                             })
    headway_period2 = transitfeed.Frequency({'trip_id': '054C-00',
                                             'start_time': '01:00:00',
                                             'end_time': '02:00:00',
                                             'headway_secs': 1200,
                                             })
    headway_period3 = transitfeed.Frequency({'trip_id': '054C-00',
                                             'start_time': '04:00:00',
                                             'end_time': '05:00:00',
                                             'headway_secs': 1000,
                                             })
    headway_period4 = transitfeed.Frequency({'trip_id': '054C-00',
                                             'start_time': '12:00:00',
                                             'end_time': '19:00:00',
                                             'headway_secs': 700,
                                             })

    # expect no problems for non-overlapping periods
    headway_period1.AddToSchedule(self.schedule, self.problems)
    headway_period2.AddToSchedule(self.schedule, self.problems)
    headway_period3.AddToSchedule(self.schedule, self.problems)
    headway_period4.AddToSchedule(self.schedule, self.problems)
    self.trip.Validate(self.problems)
    self.accumulator.AssertNoMoreExceptions()
    self.trip.ClearFrequencies()

  def testOverlappingPeriods(self):
    # overlapping headway periods
    headway_period1 = transitfeed.Frequency({'trip_id': '054C-00',
                                             'start_time': '00:00:00',
                                             'end_time': '12:00:00',
                                             'headway_secs': 600,
                                             })
    headway_period2 = transitfeed.Frequency({'trip_id': '054C-00',
                                             'start_time': '06:00:00',
                                             'end_time': '18:00:00',
                                             'headway_secs': 1200,
                                             })
    headway_period1.AddToSchedule(self.schedule, self.problems)
    headway_period2.AddToSchedule(self.schedule, self.problems)
    self.ExpectOtherProblem(self.trip)
    self.trip.ClearFrequencies()
    self.accumulator.AssertNoMoreExceptions()

  def testPeriodWithInvalidTripId(self):
    headway_period1 = transitfeed.Frequency({'trip_id': 'foo',
                                             'start_time': '00:00:00',
                                             'end_time': '12:00:00',
                                             'headway_secs': 600,
                                             })
    headway_period1.AddToSchedule(self.schedule, self.problems)
    e = self.accumulator.PopException('InvalidValue')
    self.assertEqual('trip_id', e.column_name)
    self.trip.ClearFrequencies()

class TripSequenceValidationTestCase(ValidationTestCase):
  def runTest(self):
    schedule = self.SimpleSchedule()
    # Make a new trip without any stop times
    trip = schedule.GetRoute("054C").AddTrip(trip_id="054C-00")
    stop1 = schedule.GetStop('stop1')
    stop2 = schedule.GetStop('stop2')
    stop3 = schedule.GetStop('stop3')
    stoptime1 = transitfeed.StopTime(self.problems, stop1,
                                     stop_time='12:00:00', stop_sequence=1)
    stoptime2 = transitfeed.StopTime(self.problems, stop2,
                                     stop_time='11:30:00', stop_sequence=2)
    stoptime3 = transitfeed.StopTime(self.problems, stop3,
                                     stop_time='12:15:00', stop_sequence=3)
    trip._AddStopTimeObjectUnordered(stoptime1, schedule)
    trip._AddStopTimeObjectUnordered(stoptime2, schedule)
    trip._AddStopTimeObjectUnordered(stoptime3, schedule)
    trip.Validate(self.problems)
    e = self.accumulator.PopException('OtherProblem')
    self.assertTrue(e.FormatProblem().find('Timetravel detected') != -1)
    self.assertTrue(e.FormatProblem().find('number 2 in trip 054C-00') != -1)
    self.accumulator.AssertNoMoreExceptions()

class TripServiceIDValidationTestCase(ValidationTestCase):
  def runTest(self):
    schedule = self.SimpleSchedule()
    trip1 = transitfeed.Trip()
    trip1.route_id = "054C"
    trip1.service_id = "WEEKDAY"
    trip1.trip_id = "054C_WEEK"
    self.ExpectInvalidValueInClosure(column_name="service_id",
                                     value="WEEKDAY",
                                     c=lambda: schedule.AddTripObject(
                                         trip1, validate=True))

class TripHasStopTimeValidationTestCase(ValidationTestCase):
  def runTest(self):
    schedule = self.SimpleSchedule()
    trip = schedule.GetRoute("054C").AddTrip(trip_id="054C-00")

    # We should get an OtherProblem here because the trip has no stops.
    self.ExpectOtherProblem(schedule)

    # It should trigger a TYPE_ERROR if there are frequencies for the trip
    # but no stops
    trip.AddFrequency("01:00:00", "12:00:00", 600)
    schedule.Validate(self.problems)
    self.accumulator.PopException('OtherProblem')  # pop first warning
    e = self.accumulator.PopException('OtherProblem')  # pop frequency error
    self.assertTrue(e.FormatProblem().find('Frequencies defined, but') != -1)
    self.assertTrue(e.FormatProblem().find('given in trip 054C-00') != -1)
    self.assertEquals(transitfeed.TYPE_ERROR, e.type)
    self.accumulator.AssertNoMoreExceptions()
    trip.ClearFrequencies()

    # Add a stop, but with only one stop passengers have nowhere to exit!
    stop = transitfeed.Stop(36.425288, -117.133162, "Demo Stop 1", "STOP1")
    schedule.AddStopObject(stop)
    trip.AddStopTime(stop, arrival_time="5:11:00", departure_time="5:12:00")
    self.ExpectOtherProblem(schedule)

    # Add another stop, and then validation should be happy.
    stop = transitfeed.Stop(36.424288, -117.133142, "Demo Stop 2", "STOP2")
    schedule.AddStopObject(stop)
    trip.AddStopTime(stop, arrival_time="5:15:00", departure_time="5:16:00")
    schedule.Validate(self.problems)

    trip.AddStopTime(stop, stop_time="05:20:00")
    trip.AddStopTime(stop, stop_time="05:22:00")

    # Last stop must always have a time
    trip.AddStopTime(stop, arrival_secs=None, departure_secs=None)
    self.ExpectInvalidValueInClosure(
        'arrival_time', c=lambda: trip.GetEndTime(problems=self.problems))

class ShapeDistTraveledOfStopTimeValidationTestCase(ValidationTestCase):
  def runTest(self):
    schedule = self.SimpleSchedule()

    shape = transitfeed.Shape("shape_1")
    shape.AddPoint(36.425288, -117.133162, 0)
    shape.AddPoint(36.424288, -117.133142, 1)
    schedule.AddShapeObject(shape)

    trip = schedule.GetRoute("054C").AddTrip(trip_id="054C-00")
    trip.shape_id = "shape_1"

    stop = transitfeed.Stop(36.425288, -117.133162, "Demo Stop 1", "STOP1")
    schedule.AddStopObject(stop)
    trip.AddStopTime(stop, arrival_time="5:11:00", departure_time="5:12:00",
                     stop_sequence=0, shape_dist_traveled=0)
    stop = transitfeed.Stop(36.424288, -117.133142, "Demo Stop 2", "STOP2")
    schedule.AddStopObject(stop)
    trip.AddStopTime(stop, arrival_time="5:15:00", departure_time="5:16:00",
                     stop_sequence=1, shape_dist_traveled=1)
    stop = transitfeed.Stop(36.423288, -117.133122, "Demo Stop 3", "STOP3")
    schedule.AddStopObject(stop)
    trip.AddStopTime(stop, arrival_time="5:18:00", departure_time="5:19:00",
                     stop_sequence=2, shape_dist_traveled=2)

    self.accumulator.AssertNoMoreExceptions()
    schedule.Validate(self.problems)
    e = self.accumulator.PopException('OtherProblem')
    self.assertMatchesRegex('shape_dist_traveled=2', e.FormatProblem())
    self.accumulator.AssertNoMoreExceptions()

    # Error if the distance decreases.
    shape.AddPoint(36.421288, -117.133132, 2)
    stop = transitfeed.Stop(36.421288, -117.133122, "Demo Stop 4", "STOP4")
    schedule.AddStopObject(stop)
    stoptime = transitfeed.StopTime(self.problems, stop,
                                    arrival_time="5:29:00",
                                    departure_time="5:29:00", stop_sequence=3,
                                    shape_dist_traveled=1.7)
    trip.AddStopTimeObject(stoptime, schedule=schedule)
    self.accumulator.AssertNoMoreExceptions()
    schedule.Validate(self.problems)
    e = self.accumulator.PopException('InvalidValue')
    self.assertMatchesRegex('stop STOP4 has', e.FormatProblem())
    self.assertMatchesRegex('shape_dist_traveled=1.7', e.FormatProblem())
    self.assertMatchesRegex('distance was 2.0.', e.FormatProblem())
    self.assertEqual(e.type, transitfeed.TYPE_ERROR)
    self.accumulator.AssertNoMoreExceptions()

    # Warning if distance remains the same between two stop_times
    stoptime.shape_dist_traveled = 2.0
    trip.ReplaceStopTimeObject(stoptime, schedule=schedule)
    schedule.Validate(self.problems)
    e = self.accumulator.PopException('InvalidValue')
    self.assertMatchesRegex('stop STOP4 has', e.FormatProblem())
    self.assertMatchesRegex('shape_dist_traveled=2.0', e.FormatProblem())
    self.assertMatchesRegex('distance was 2.0.', e.FormatProblem())
    self.assertEqual(e.type, transitfeed.TYPE_WARNING)
    self.accumulator.AssertNoMoreExceptions()
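The severity split exercised here — strictly decreasing shape_dist_traveled is an error, a repeated value only a warning — can be sketched as a simple scan. `check_shape_dist` is a hypothetical stand-in, not the transitfeed check itself:

```python
def check_shape_dist(stop_times):
  """Classify shape_dist_traveled values along a trip.

  stop_times: list of (stop_id, shape_dist_traveled) in stop_sequence
  order. Returns (stop_id, 'error' | 'warning') entries mirroring the
  distinction the tests assert. Hypothetical sketch.
  """
  problems = []
  max_so_far = None
  for stop_id, dist in stop_times:
    if max_so_far is not None:
      if dist < max_so_far:
        problems.append((stop_id, 'error'))    # distance went backwards
      elif dist == max_so_far:
        problems.append((stop_id, 'warning'))  # distance did not advance
    max_so_far = dist if max_so_far is None else max(dist, max_so_far)
  return problems

# Mirror the fixture: 0, 1, 2, then a decrease (error) or a repeat (warning).
assert check_shape_dist([('STOP1', 0), ('STOP2', 1), ('STOP3', 2),
                         ('STOP4', 1.7)]) == [('STOP4', 'error')]
assert check_shape_dist([('STOP1', 0), ('STOP2', 1), ('STOP3', 2),
                         ('STOP4', 2.0)]) == [('STOP4', 'warning')]
```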

class StopMatchWithShapeTestCase(ValidationTestCase):
  def runTest(self):
    schedule = self.SimpleSchedule()

    shape = transitfeed.Shape("shape_1")
    shape.AddPoint(36.425288, -117.133162, 0)
    shape.AddPoint(36.424288, -117.143142, 1)
    schedule.AddShapeObject(shape)

    trip = schedule.GetRoute("054C").AddTrip(trip_id="054C-00")
    trip.shape_id = "shape_1"

    # Stop 1 is only 600 meters away from shape, which is allowed.
    stop = transitfeed.Stop(36.425288, -117.139162, "Demo Stop 1", "STOP1")
    schedule.AddStopObject(stop)
    trip.AddStopTime(stop, arrival_time="5:11:00", departure_time="5:12:00",
                     stop_sequence=0, shape_dist_traveled=0)
    # Stop 2 is more than 1000 meters away from shape, which is not allowed.
    stop = transitfeed.Stop(36.424288, -117.158142, "Demo Stop 2", "STOP2")
    schedule.AddStopObject(stop)
    trip.AddStopTime(stop, arrival_time="5:15:00", departure_time="5:16:00",
                     stop_sequence=1, shape_dist_traveled=1)

    schedule.Validate(self.problems)
    e = self.accumulator.PopException('StopTooFarFromShapeWithDistTraveled')
    self.assertTrue(e.FormatProblem().find('Demo Stop 2') != -1)
    self.assertTrue(e.FormatProblem().find('1344 meters away') != -1)
    self.accumulator.AssertNoMoreExceptions()
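The 1344-meter figure follows from the geometry of the fixture: Demo Stop 2 sits 0.015 degrees of longitude west of the nearest shape point at roughly 36.4°N latitude. It can be reproduced with an equirectangular approximation (a sketch; transitfeed's own distance code may differ in formula and constants):

```python
import math

def approx_distance_m(lat1, lng1, lat2, lng2):
  """Equirectangular approximation, adequate over these short ranges."""
  earth_radius_m = 6371000
  mean_lat = math.radians((lat1 + lat2) / 2.0)
  dx = math.radians(lng2 - lng1) * math.cos(mean_lat)  # east-west, radians
  dy = math.radians(lat2 - lat1)                       # north-south, radians
  return earth_radius_m * math.hypot(dx, dy)

# Demo Stop 2 vs. the nearest shape point used in the test above:
d = approx_distance_m(36.424288, -117.158142, 36.424288, -117.143142)
assert 1300 < d < 1400  # consistent with the asserted '1344 meters away'
```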

class TripAddStopTimeObjectTestCase(ValidationTestCase):
  def runTest(self):
    schedule = transitfeed.Schedule(problem_reporter=self.problems)
    schedule.AddAgency("\xc8\x8b Fly Agency", "http://iflyagency.com",
                       "America/Los_Angeles")
    service_period = schedule.GetDefaultServicePeriod().SetDateHasService('20070101')
    stop1 = schedule.AddStop(lng=140, lat=48.2, name="Stop 1")
    stop2 = schedule.AddStop(lng=140.001, lat=48.201, name="Stop 2")
    route = schedule.AddRoute("B", "Beta", "Bus")
    trip = route.AddTrip(schedule, "bus trip")
    trip.AddStopTimeObject(transitfeed.StopTime(self.problems, stop1,
                                                arrival_secs=10,
                                                departure_secs=10),
                           schedule=schedule, problems=self.problems)
    trip.AddStopTimeObject(transitfeed.StopTime(self.problems, stop2,
                                                arrival_secs=20,
                                                departure_secs=20),
                           schedule=schedule, problems=self.problems)
    # TODO: Factor out checks or use mock problems object
    self.ExpectOtherProblemInClosure(lambda:
        trip.AddStopTimeObject(transitfeed.StopTime(self.problems, stop1,
                                                    arrival_secs=15,
                                                    departure_secs=15),
                               schedule=schedule, problems=self.problems))
    trip.AddStopTimeObject(transitfeed.StopTime(self.problems, stop1),
                           schedule=schedule, problems=self.problems)
    self.ExpectOtherProblemInClosure(lambda:
        trip.AddStopTimeObject(transitfeed.StopTime(self.problems, stop1,
                                                    arrival_secs=15,
                                                    departure_secs=15),
                               schedule=schedule, problems=self.problems))
    trip.AddStopTimeObject(transitfeed.StopTime(self.problems, stop1,
                                                arrival_secs=30,
                                                departure_secs=30),
                           schedule=schedule, problems=self.problems)
    self.accumulator.AssertNoMoreExceptions()

class DuplicateTripTestCase(ValidationTestCase):
  def runTest(self):
    schedule = transitfeed.Schedule(self.problems)
    schedule._check_duplicate_trips = True

    agency = transitfeed.Agency('Demo agency', 'http://google.com',
                                'America/Los_Angeles', 'agency1')
    schedule.AddAgencyObject(agency)

    service = schedule.GetDefaultServicePeriod()
    service.SetDateHasService('20070101')

    route1 = transitfeed.Route('Route1', 'route 1', 3, 'route_1', 'agency1')
    schedule.AddRouteObject(route1)
    route2 = transitfeed.Route('Route2', 'route 2', 3, 'route_2', 'agency1')
    schedule.AddRouteObject(route2)

    trip1 = transitfeed.Trip()
    trip1.route_id = 'route_1'
    trip1.trip_id = 't1'
    trip1.trip_headsign = 'via Polish Hill'
    trip1.direction_id = '0'
    trip1.service_id = service.service_id
    schedule.AddTripObject(trip1)

    trip2 = transitfeed.Trip()
    trip2.route_id = 'route_2'
    trip2.trip_id = 't2'
    trip2.trip_headsign = 'New'
    trip2.direction_id = '0'
    trip2.service_id = service.service_id
    schedule.AddTripObject(trip2)

    trip3 = transitfeed.Trip()
    trip3.route_id = 'route_1'
    trip3.trip_id = 't3'
    trip3.trip_headsign = 'New Demo'
    trip3.direction_id = '0'
    trip3.service_id = service.service_id
    schedule.AddTripObject(trip3)

    stop1 = transitfeed.Stop(36.425288, -117.139162, "Demo Stop 1", "STOP1")
    schedule.AddStopObject(stop1)
    trip1.AddStopTime(stop1, arrival_time="5:11:00", departure_time="5:12:00",
                      stop_sequence=0, shape_dist_traveled=0)
    trip2.AddStopTime(stop1, arrival_time="5:11:00", departure_time="5:12:00",
                      stop_sequence=0, shape_dist_traveled=0)
    trip3.AddStopTime(stop1, arrival_time="6:11:00", departure_time="6:12:00",
                      stop_sequence=0, shape_dist_traveled=0)

    stop2 = transitfeed.Stop(36.424288, -117.158142, "Demo Stop 2", "STOP2")
    schedule.AddStopObject(stop2)
    trip1.AddStopTime(stop2, arrival_time="5:15:00", departure_time="5:16:00",
                      stop_sequence=1, shape_dist_traveled=1)
    trip2.AddStopTime(stop2, arrival_time="5:25:00", departure_time="5:26:00",
                      stop_sequence=1, shape_dist_traveled=1)
    trip3.AddStopTime(stop2, arrival_time="6:15:00", departure_time="6:16:00",
                      stop_sequence=1, shape_dist_traveled=1)

    schedule.Validate(self.problems)
    e = self.accumulator.PopException('DuplicateTrip')
    self.assertTrue(e.FormatProblem().find('t1 of route') != -1)
    self.assertTrue(e.FormatProblem().find('t2 of route') != -1)
    self.accumulator.AssertNoMoreExceptions()
class StopBelongsToBothSubwayAndBusTestCase(ValidationTestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule(self.problems) | |
schedule.AddAgency("Demo Agency", "http://example.com", | |
"America/Los_Angeles") | |
route1 = schedule.AddRoute(short_name="route1", long_name="route_1", | |
route_type=3) | |
route2 = schedule.AddRoute(short_name="route2", long_name="route_2", | |
route_type=1) | |
service = schedule.GetDefaultServicePeriod() | |
service.SetDateHasService("20070101") | |
trip1 = route1.AddTrip(schedule, "trip1", service, "t1") | |
trip2 = route2.AddTrip(schedule, "trip2", service, "t2") | |
stop1 = schedule.AddStop(36.425288, -117.133162, "stop1") | |
stop2 = schedule.AddStop(36.424288, -117.133142, "stop2") | |
stop3 = schedule.AddStop(36.423288, -117.134142, "stop3") | |
trip1.AddStopTime(stop1, arrival_time="5:11:00", departure_time="5:12:00") | |
trip1.AddStopTime(stop2, arrival_time="5:21:00", departure_time="5:22:00") | |
trip2.AddStopTime(stop1, arrival_time="6:11:00", departure_time="6:12:00") | |
trip2.AddStopTime(stop3, arrival_time="6:21:00", departure_time="6:22:00") | |
schedule.Validate(self.problems) | |
e = self.accumulator.PopException("StopWithMultipleRouteTypes") | |
self.assertTrue(e.FormatProblem().find("Stop stop1") != -1) | |
self.assertTrue(e.FormatProblem().find("subway (ID=1)") != -1) | |
self.assertTrue(e.FormatProblem().find("bus line (ID=0)") != -1) | |
self.accumulator.AssertNoMoreExceptions() | |
class TripReplaceStopTimeObjectTestCase(util.TestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule() | |
schedule.AddAgency("\xc8\x8b Fly Agency", "http://iflyagency.com", | |
"America/Los_Angeles") | |
service_period = \ | |
schedule.GetDefaultServicePeriod().SetDateHasService('20070101') | |
stop1 = schedule.AddStop(lng=140, lat=48.2, name="Stop 1") | |
route = schedule.AddRoute("B", "Beta", "Bus") | |
trip = route.AddTrip(schedule, "bus trip") | |
stoptime = transitfeed.StopTime(transitfeed.default_problem_reporter, stop1, | |
arrival_secs=10, | |
departure_secs=10) | |
trip.AddStopTimeObject(stoptime, schedule=schedule) | |
stoptimes = trip.GetStopTimes() | |
stoptime.departure_secs = 20 | |
trip.ReplaceStopTimeObject(stoptime, schedule=schedule) | |
stoptimes = trip.GetStopTimes() | |
self.assertEqual(len(stoptimes), 1) | |
self.assertEqual(stoptimes[0].departure_secs, 20) | |
unknown_stop = schedule.AddStop(lng=140, lat=48.2, name="unknown") | |
unknown_stoptime = transitfeed.StopTime( | |
transitfeed.default_problem_reporter, unknown_stop, | |
arrival_secs=10, | |
departure_secs=10) | |
unknown_stoptime.stop_sequence = 5 | |
# Attempting to replace a non-existent StopTime raises an error | |
self.assertRaises(transitfeed.Error, trip.ReplaceStopTimeObject, | |
unknown_stoptime, schedule=schedule) | |
class TripStopTimeAccessorsTestCase(util.TestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule( | |
problem_reporter=ExceptionProblemReporterNoExpiration()) | |
schedule.NewDefaultAgency(agency_name="Test Agency", | |
agency_url="http://example.com", | |
agency_timezone="America/Los_Angeles") | |
route = schedule.AddRoute(short_name="54C", long_name="Polish Hill", route_type=3) | |
service_period = schedule.GetDefaultServicePeriod() | |
service_period.SetDateHasService("20070101") | |
trip = route.AddTrip(schedule, 'via Polish Hill') | |
stop1 = schedule.AddStop(36.425288, -117.133162, "Demo Stop 1") | |
stop2 = schedule.AddStop(36.424288, -117.133142, "Demo Stop 2") | |
trip.AddStopTime(stop1, arrival_time="5:11:00", departure_time="5:12:00") | |
trip.AddStopTime(stop2, arrival_time="5:15:00", departure_time="5:16:00") | |
# Add some more stop times and test GetEndTime does the correct thing | |
self.assertEqual(transitfeed.FormatSecondsSinceMidnight(trip.GetStartTime()), | |
"05:11:00") | |
self.assertEqual(transitfeed.FormatSecondsSinceMidnight(trip.GetEndTime()), | |
"05:16:00") | |
trip.AddStopTime(stop1, stop_time="05:20:00") | |
self.assertEqual(transitfeed.FormatSecondsSinceMidnight(trip.GetEndTime()), | |
"05:20:00") | |
trip.AddStopTime(stop2, stop_time="05:22:00") | |
self.assertEqual(transitfeed.FormatSecondsSinceMidnight(trip.GetEndTime()), | |
"05:22:00") | |
self.assertEqual(len(trip.GetStopTimesTuples()), 4) | |
self.assertEqual(trip.GetStopTimesTuples()[0], (trip.trip_id, "05:11:00", | |
"05:12:00", stop1.stop_id, | |
1, '', '', '', '')) | |
self.assertEqual(trip.GetStopTimesTuples()[3], (trip.trip_id, "05:22:00", | |
"05:22:00", stop2.stop_id, | |
4, '', '', '', '')) | |
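The accessors exercised above depend on the mapping between GTFS "HH:MM:SS" strings and seconds since midnight (e.g. "05:11:00" is 18660 seconds). A minimal standalone sketch of that conversion, which mirrors but is not transitfeed's own `FormatSecondsSinceMidnight` code:

```python
# Illustrative sketch of the GTFS time mapping the assertions above rely on.
def time_to_seconds(hms):
    """Parse "HH:MM:SS" into seconds since midnight; GTFS allows hours >= 24
    for trips that run past midnight."""
    hours, minutes, seconds = (int(part) for part in hms.split(":"))
    return hours * 3600 + minutes * 60 + seconds

def format_seconds(secs):
    """Format seconds since midnight as zero-padded "HH:MM:SS"."""
    return "%02d:%02d:%02d" % (secs // 3600, (secs % 3600) // 60, secs % 60)
```

The round trip is lossless, which is why the tests can assert on exact formatted strings after adding stop times by either string or seconds.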
class TripClearStopTimesTestCase(util.TestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule( | |
problem_reporter=ExceptionProblemReporterNoExpiration()) | |
schedule.NewDefaultAgency(agency_name="Test Agency", | |
agency_timezone="America/Los_Angeles") | |
route = schedule.AddRoute(short_name="54C", long_name="Hill", route_type=3) | |
schedule.GetDefaultServicePeriod().SetDateHasService("20070101") | |
stop1 = schedule.AddStop(36, -117.1, "Demo Stop 1") | |
stop2 = schedule.AddStop(36, -117.2, "Demo Stop 2") | |
stop3 = schedule.AddStop(36, -117.3, "Demo Stop 3") | |
trip = route.AddTrip(schedule, "via Polish Hill") | |
trip.ClearStopTimes() | |
self.assertFalse(trip.GetStopTimes()) | |
trip.AddStopTime(stop1, stop_time="5:11:00") | |
self.assertTrue(trip.GetStopTimes()) | |
trip.ClearStopTimes() | |
self.assertFalse(trip.GetStopTimes()) | |
trip.AddStopTime(stop3, stop_time="4:00:00") # Can insert earlier time | |
trip.AddStopTime(stop2, stop_time="4:15:00") | |
trip.AddStopTime(stop1, stop_time="4:21:00") | |
old_stop_times = trip.GetStopTimes() | |
self.assertTrue(old_stop_times) | |
trip.ClearStopTimes() | |
self.assertFalse(trip.GetStopTimes()) | |
for st in old_stop_times: | |
trip.AddStopTimeObject(st) | |
self.assertEqual(trip.GetStartTime(), 4 * 3600) | |
self.assertEqual(trip.GetEndTime(), 4 * 3600 + 21 * 60) | |
class BasicParsingTestCase(util.TestCase): | |
"""Checks that we're getting the number of child objects that we expect.""" | |
def assertLoadedCorrectly(self, schedule): | |
"""Check that the good_feed looks correct""" | |
self.assertEqual(1, len(schedule._agencies)) | |
self.assertEqual(5, len(schedule.routes)) | |
self.assertEqual(2, len(schedule.service_periods)) | |
self.assertEqual(10, len(schedule.stops)) | |
self.assertEqual(11, len(schedule.trips)) | |
self.assertEqual(0, len(schedule.fare_zones)) | |
def assertLoadedStopTimesCorrectly(self, schedule): | |
self.assertEqual(5, len(schedule.GetTrip('CITY1').GetStopTimes())) | |
self.assertEqual('to airport', schedule.GetTrip('STBA').GetStopTimes()[0].stop_headsign) | |
self.assertEqual(2, schedule.GetTrip('CITY1').GetStopTimes()[1].pickup_type) | |
self.assertEqual(3, schedule.GetTrip('CITY1').GetStopTimes()[1].drop_off_type) | |
def test_MemoryDb(self): | |
loader = transitfeed.Loader( | |
DataPath('good_feed.zip'), | |
problems=GetTestFailureProblemReporter(self), | |
extra_validation=True, | |
memory_db=True) | |
schedule = loader.Load() | |
self.assertLoadedCorrectly(schedule) | |
self.assertLoadedStopTimesCorrectly(schedule) | |
def test_TemporaryFile(self): | |
loader = transitfeed.Loader( | |
DataPath('good_feed.zip'), | |
problems=GetTestFailureProblemReporter(self), | |
extra_validation=True, | |
memory_db=False) | |
schedule = loader.Load() | |
self.assertLoadedCorrectly(schedule) | |
self.assertLoadedStopTimesCorrectly(schedule) | |
def test_NoLoadStopTimes(self): | |
problems = GetTestFailureProblemReporter( | |
self, ignore_types=("ExpirationDate", "UnusedStop", "OtherProblem")) | |
loader = transitfeed.Loader( | |
DataPath('good_feed.zip'), | |
problems=problems, | |
extra_validation=True, | |
load_stop_times=False) | |
schedule = loader.Load() | |
self.assertLoadedCorrectly(schedule) | |
self.assertEqual(0, len(schedule.GetTrip('CITY1').GetStopTimes())) | |
class RepeatedRouteNameTestCase(LoadTestCase): | |
def runTest(self): | |
self.ExpectInvalidValue('repeated_route_name', 'route_long_name') | |
class InvalidRouteAgencyTestCase(LoadTestCase): | |
def runTest(self): | |
self.Load('invalid_route_agency') | |
self.accumulator.PopInvalidValue("agency_id", "routes.txt") | |
self.accumulator.PopInvalidValue("route_id", "trips.txt") | |
self.accumulator.AssertNoMoreExceptions() | |
class UndefinedStopAgencyTestCase(LoadTestCase): | |
def runTest(self): | |
self.ExpectInvalidValue('undefined_stop', 'stop_id') | |
class SameShortLongNameTestCase(LoadTestCase): | |
def runTest(self): | |
self.ExpectInvalidValue('same_short_long_name', 'route_long_name') | |
class UnusedStopAgencyTestCase(LoadTestCase): | |
def runTest(self): | |
    self.Load('unused_stop')
e = self.accumulator.PopException("UnusedStop") | |
self.assertEqual("Bogus Stop (Demo)", e.stop_name) | |
self.assertEqual("BOGUS", e.stop_id) | |
self.accumulator.AssertNoMoreExceptions() | |
class OnlyCalendarDatesTestCase(LoadTestCase): | |
def runTest(self): | |
    self.Load('only_calendar_dates')
self.accumulator.AssertNoMoreExceptions() | |
class DuplicateServiceIdDateWarningTestCase(MemoryZipTestCase): | |
def runTest(self): | |
    # Two rows with the same (service_id, date) pair should trigger a
    # DuplicateID warning.
self.SetArchiveContents( | |
'calendar_dates.txt', | |
'service_id,date,exception_type\n' | |
'FULLW,20100604,1\n' | |
'FULLW,20100604,2\n') | |
schedule = self.MakeLoaderAndLoad() | |
e = self.accumulator.PopException('DuplicateID') | |
self.assertEquals('(service_id, date)', e.column_name) | |
self.assertEquals('(FULLW, 20100604)', e.value) | |
class AddStopTimeParametersTestCase(util.TestCase): | |
def runTest(self): | |
problem_reporter = GetTestFailureProblemReporter(self) | |
schedule = transitfeed.Schedule(problem_reporter=problem_reporter) | |
route = schedule.AddRoute(short_name="10", long_name="", route_type="Bus") | |
stop = schedule.AddStop(40, -128, "My stop") | |
    # The stop must be added to the schedule so that the chain
    # AddStopTime -> AddStopTimeObject -> GetStopTimes -> GetStop can resolve it
trip = transitfeed.Trip() | |
trip.route_id = route.route_id | |
trip.service_id = schedule.GetDefaultServicePeriod().service_id | |
trip.trip_id = "SAMPLE_TRIP" | |
schedule.AddTripObject(trip) | |
# First stop must have time | |
trip.AddStopTime(stop, arrival_secs=300, departure_secs=360) | |
trip.AddStopTime(stop) | |
trip.AddStopTime(stop, arrival_time="00:07:00", departure_time="00:07:30") | |
trip.Validate(problem_reporter) | |
class ExpirationDateTestCase(util.TestCase): | |
def runTest(self): | |
    accumulator = RecordingProblemAccumulator(self, ("NoServiceExceptions",))
problems = transitfeed.ProblemReporter(accumulator) | |
schedule = transitfeed.Schedule(problem_reporter=problems) | |
now = time.mktime(time.localtime()) | |
seconds_per_day = 60 * 60 * 24 | |
two_weeks_ago = time.localtime(now - 14 * seconds_per_day) | |
two_weeks_from_now = time.localtime(now + 14 * seconds_per_day) | |
two_months_from_now = time.localtime(now + 60 * seconds_per_day) | |
date_format = "%Y%m%d" | |
service_period = schedule.GetDefaultServicePeriod() | |
service_period.SetWeekdayService(True) | |
service_period.SetStartDate("20070101") | |
service_period.SetEndDate(time.strftime(date_format, two_months_from_now)) | |
schedule.Validate() # should have no problems | |
accumulator.AssertNoMoreExceptions() | |
service_period.SetEndDate(time.strftime(date_format, two_weeks_from_now)) | |
schedule.Validate() | |
e = accumulator.PopException('ExpirationDate') | |
    self.assertTrue(e.FormatProblem().find('will soon expire') != -1)
accumulator.AssertNoMoreExceptions() | |
service_period.SetEndDate(time.strftime(date_format, two_weeks_ago)) | |
schedule.Validate() | |
e = accumulator.PopException('ExpirationDate') | |
    self.assertTrue(e.FormatProblem().find('expired') != -1)
accumulator.AssertNoMoreExceptions() | |
class FutureServiceStartDateTestCase(util.TestCase): | |
def runTest(self): | |
accumulator = RecordingProblemAccumulator(self) | |
problems = transitfeed.ProblemReporter(accumulator) | |
schedule = transitfeed.Schedule(problem_reporter=problems) | |
today = datetime.date.today() | |
yesterday = today - datetime.timedelta(days=1) | |
tomorrow = today + datetime.timedelta(days=1) | |
two_months_from_today = today + datetime.timedelta(days=60) | |
service_period = schedule.GetDefaultServicePeriod() | |
service_period.SetWeekdayService(True) | |
service_period.SetWeekendService(True) | |
service_period.SetEndDate(two_months_from_today.strftime("%Y%m%d")) | |
service_period.SetStartDate(yesterday.strftime("%Y%m%d")) | |
schedule.Validate() | |
accumulator.AssertNoMoreExceptions() | |
service_period.SetStartDate(today.strftime("%Y%m%d")) | |
schedule.Validate() | |
accumulator.AssertNoMoreExceptions() | |
service_period.SetStartDate(tomorrow.strftime("%Y%m%d")) | |
schedule.Validate() | |
accumulator.PopException('FutureService') | |
accumulator.AssertNoMoreExceptions() | |
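The two test cases above exercise three distinct validation windows on the default service period. As a hedged sketch of that logic (`classify_service` is a hypothetical helper written for illustration, not part of transitfeed), the checks amount to:

```python
import datetime

# Hypothetical helper sketching the service-window checks tested above:
# a period that has ended is "expired", one ending within warn_days
# "will soon expire", and one starting after today is future service.
def classify_service(start, end, today=None, warn_days=30):
    today = today or datetime.date.today()
    if end < today:
        return "expired"
    if end < today + datetime.timedelta(days=warn_days):
        return "expiring soon"
    if start > today:
        return "future service"
    return "ok"
```

The exact warning horizon transitfeed uses may differ; the point is only that expiration and future-start are separate problems ('ExpirationDate' vs 'FutureService').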
class CalendarTxtIntegrationTestCase(MemoryZipTestCase): | |
def testBadEndDateFormat(self): | |
    # A badly formatted end_date used to generate duplicate InvalidValue
    # reports, one from Schedule.Validate and one from ServicePeriod.Validate.
    # Regression test for that bug.
self.SetArchiveContents( | |
"calendar.txt", | |
"service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday," | |
"start_date,end_date\n" | |
"FULLW,1,1,1,1,1,1,1,20070101,20101232\n" | |
"WE,0,0,0,0,0,1,1,20070101,20101231\n") | |
schedule = self.MakeLoaderAndLoad() | |
e = self.accumulator.PopInvalidValue('end_date') | |
self.accumulator.AssertNoMoreExceptions() | |
def testBadStartDateFormat(self): | |
self.SetArchiveContents( | |
"calendar.txt", | |
"service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday," | |
"start_date,end_date\n" | |
"FULLW,1,1,1,1,1,1,1,200701xx,20101231\n" | |
"WE,0,0,0,0,0,1,1,20070101,20101231\n") | |
schedule = self.MakeLoaderAndLoad() | |
e = self.accumulator.PopInvalidValue('start_date') | |
self.accumulator.AssertNoMoreExceptions() | |
def testNoStartDateAndEndDate(self): | |
"""Regression test for calendar.txt with empty start_date and end_date. | |
See http://code.google.com/p/googletransitdatafeed/issues/detail?id=41 | |
""" | |
self.SetArchiveContents( | |
"calendar.txt", | |
"service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday," | |
"start_date,end_date\n" | |
"FULLW,1,1,1,1,1,1,1, ,\t\n" | |
"WE,0,0,0,0,0,1,1,20070101,20101231\n") | |
schedule = self.MakeLoaderAndLoad() | |
e = self.accumulator.PopException("MissingValue") | |
self.assertEquals(2, e.row_num) | |
self.assertEquals("start_date", e.column_name) | |
e = self.accumulator.PopException("MissingValue") | |
self.assertEquals(2, e.row_num) | |
self.assertEquals("end_date", e.column_name) | |
self.accumulator.AssertNoMoreExceptions() | |
def testNoStartDateAndBadEndDate(self): | |
self.SetArchiveContents( | |
"calendar.txt", | |
"service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday," | |
"start_date,end_date\n" | |
"FULLW,1,1,1,1,1,1,1,,abc\n" | |
"WE,0,0,0,0,0,1,1,20070101,20101231\n") | |
schedule = self.MakeLoaderAndLoad() | |
e = self.accumulator.PopException("MissingValue") | |
self.assertEquals(2, e.row_num) | |
self.assertEquals("start_date", e.column_name) | |
e = self.accumulator.PopInvalidValue("end_date") | |
self.assertEquals(2, e.row_num) | |
self.accumulator.AssertNoMoreExceptions() | |
def testMissingEndDateColumn(self): | |
self.SetArchiveContents( | |
"calendar.txt", | |
"service_id,monday,tuesday,wednesday,thursday,friday,saturday,sunday," | |
"start_date\n" | |
"FULLW,1,1,1,1,1,1,1,20070101\n" | |
"WE,0,0,0,0,0,1,1,20070101\n") | |
schedule = self.MakeLoaderAndLoad() | |
e = self.accumulator.PopException("MissingColumn") | |
self.assertEquals("end_date", e.column_name) | |
self.accumulator.AssertNoMoreExceptions() | |
class DuplicateTripIDValidationTestCase(util.TestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule( | |
problem_reporter=ExceptionProblemReporterNoExpiration()) | |
schedule.AddAgency("Sample Agency", "http://example.com", | |
"America/Los_Angeles") | |
route = transitfeed.Route() | |
route.route_id = "SAMPLE_ID" | |
route.route_type = 3 | |
route.route_long_name = "Sample Route" | |
schedule.AddRouteObject(route) | |
service_period = transitfeed.ServicePeriod("WEEK") | |
service_period.SetStartDate("20070101") | |
service_period.SetEndDate("20071231") | |
service_period.SetWeekdayService(True) | |
schedule.AddServicePeriodObject(service_period) | |
trip1 = transitfeed.Trip() | |
trip1.route_id = "SAMPLE_ID" | |
trip1.service_id = "WEEK" | |
trip1.trip_id = "SAMPLE_TRIP" | |
schedule.AddTripObject(trip1) | |
trip2 = transitfeed.Trip() | |
trip2.route_id = "SAMPLE_ID" | |
trip2.service_id = "WEEK" | |
trip2.trip_id = "SAMPLE_TRIP" | |
try: | |
schedule.AddTripObject(trip2) | |
self.fail("Expected Duplicate ID validation failure") | |
except transitfeed.DuplicateID, e: | |
self.assertEqual("trip_id", e.column_name) | |
self.assertEqual("SAMPLE_TRIP", e.value) | |
class DuplicateStopValidationTestCase(ValidationTestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule(problem_reporter=self.problems) | |
schedule.AddAgency("Sample Agency", "http://example.com", | |
"America/Los_Angeles") | |
route = transitfeed.Route() | |
route.route_id = "SAMPLE_ID" | |
route.route_type = 3 | |
route.route_long_name = "Sample Route" | |
schedule.AddRouteObject(route) | |
service_period = transitfeed.ServicePeriod("WEEK") | |
service_period.SetStartDate("20070101") | |
service_period.SetEndDate("20071231") | |
service_period.SetWeekdayService(True) | |
schedule.AddServicePeriodObject(service_period) | |
trip = transitfeed.Trip() | |
trip.route_id = "SAMPLE_ID" | |
trip.service_id = "WEEK" | |
trip.trip_id = "SAMPLE_TRIP" | |
schedule.AddTripObject(trip) | |
stop1 = transitfeed.Stop() | |
stop1.stop_id = "STOP1" | |
stop1.stop_name = "Stop 1" | |
stop1.stop_lat = 78.243587 | |
stop1.stop_lon = 32.258937 | |
schedule.AddStopObject(stop1) | |
trip.AddStopTime(stop1, arrival_time="12:00:00", departure_time="12:00:00") | |
stop2 = transitfeed.Stop() | |
stop2.stop_id = "STOP2" | |
stop2.stop_name = "Stop 2" | |
stop2.stop_lat = 78.253587 | |
stop2.stop_lon = 32.258937 | |
schedule.AddStopObject(stop2) | |
trip.AddStopTime(stop2, arrival_time="12:05:00", departure_time="12:05:00") | |
schedule.Validate() | |
stop3 = transitfeed.Stop() | |
stop3.stop_id = "STOP3" | |
stop3.stop_name = "Stop 3" | |
stop3.stop_lat = 78.243587 | |
stop3.stop_lon = 32.268937 | |
schedule.AddStopObject(stop3) | |
trip.AddStopTime(stop3, arrival_time="12:10:00", departure_time="12:10:00") | |
schedule.Validate() | |
self.accumulator.AssertNoMoreExceptions() | |
stop4 = transitfeed.Stop() | |
stop4.stop_id = "STOP4" | |
stop4.stop_name = "Stop 4" | |
stop4.stop_lat = 78.243588 | |
stop4.stop_lon = 32.268936 | |
schedule.AddStopObject(stop4) | |
trip.AddStopTime(stop4, arrival_time="12:15:00", departure_time="12:15:00") | |
schedule.Validate() | |
e = self.accumulator.PopException('StopsTooClose') | |
self.accumulator.AssertNoMoreExceptions() | |
class TempFileTestCaseBase(util.TestCase): | |
""" | |
Subclass of TestCase which sets self.tempfilepath to a valid temporary zip | |
file name and removes the file if it exists when the test is done. | |
""" | |
def setUp(self): | |
(fd, self.tempfilepath) = tempfile.mkstemp(".zip") | |
# Open file handle causes an exception during remove in Windows | |
os.close(fd) | |
def tearDown(self): | |
if os.path.exists(self.tempfilepath): | |
os.remove(self.tempfilepath) | |
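The setUp comment above notes that a still-open handle makes the remove fail on Windows. A minimal standalone sketch of the same `mkstemp` pattern: the call returns an OS-level file descriptor plus a path, and the descriptor must be closed before the file can be safely deleted everywhere.

```python
import os
import tempfile

# mkstemp hands back (fd, path); keeping fd open would cause a sharing
# violation on Windows when os.remove is called later.
fd, path = tempfile.mkstemp(".zip")
os.close(fd)          # release the handle immediately
assert os.path.exists(path)
os.remove(path)       # safe now that no handle is open
assert not os.path.exists(path)
```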
class MinimalWriteTestCase(TempFileTestCaseBase): | |
""" | |
This test case simply constructs an incomplete feed with very few | |
fields set and ensures that there are no exceptions when writing it out. | |
This is very similar to TransitFeedSampleCodeTestCase below, but that one | |
will no doubt change as the sample code is altered. | |
""" | |
def runTest(self): | |
schedule = transitfeed.Schedule() | |
schedule.AddAgency("Sample Agency", "http://example.com", | |
"America/Los_Angeles") | |
route = transitfeed.Route() | |
route.route_id = "SAMPLE_ID" | |
route.route_type = 3 | |
route.route_short_name = "66" | |
route.route_long_name = "Sample Route acute letter e\202" | |
schedule.AddRouteObject(route) | |
service_period = transitfeed.ServicePeriod("WEEK") | |
service_period.SetStartDate("20070101") | |
service_period.SetEndDate("20071231") | |
service_period.SetWeekdayService(True) | |
schedule.AddServicePeriodObject(service_period) | |
trip = transitfeed.Trip() | |
trip.route_id = "SAMPLE_ID" | |
trip.service_period = service_period | |
trip.trip_id = "SAMPLE_TRIP" | |
schedule.AddTripObject(trip) | |
stop1 = transitfeed.Stop() | |
stop1.stop_id = "STOP1" | |
stop1.stop_name = u'Stop 1 acute letter e\202' | |
stop1.stop_lat = 78.243587 | |
stop1.stop_lon = 32.258937 | |
schedule.AddStopObject(stop1) | |
trip.AddStopTime(stop1, arrival_time="12:00:00", departure_time="12:00:00") | |
stop2 = transitfeed.Stop() | |
stop2.stop_id = "STOP2" | |
stop2.stop_name = "Stop 2" | |
stop2.stop_lat = 78.253587 | |
stop2.stop_lon = 32.258937 | |
schedule.AddStopObject(stop2) | |
trip.AddStopTime(stop2, arrival_time="12:05:00", departure_time="12:05:00") | |
schedule.Validate() | |
schedule.WriteGoogleTransitFeed(self.tempfilepath) | |
class TransitFeedSampleCodeTestCase(util.TestCase): | |
""" | |
This test should simply contain the sample code printed on the page: | |
http://code.google.com/p/googletransitdatafeed/wiki/TransitFeed | |
to ensure that it doesn't cause any exceptions. | |
""" | |
def runTest(self): | |
import transitfeed | |
schedule = transitfeed.Schedule() | |
schedule.AddAgency("Sample Agency", "http://example.com", | |
"America/Los_Angeles") | |
route = transitfeed.Route() | |
route.route_id = "SAMPLE_ID" | |
route.route_type = 3 | |
route.route_short_name = "66" | |
route.route_long_name = "Sample Route" | |
schedule.AddRouteObject(route) | |
service_period = transitfeed.ServicePeriod("WEEK") | |
service_period.SetStartDate("20070101") | |
service_period.SetEndDate("20071231") | |
service_period.SetWeekdayService(True) | |
schedule.AddServicePeriodObject(service_period) | |
trip = transitfeed.Trip() | |
trip.route_id = "SAMPLE_ID" | |
trip.service_period = service_period | |
trip.trip_id = "SAMPLE_TRIP" | |
trip.direction_id = "0" | |
trip.block_id = None | |
schedule.AddTripObject(trip) | |
stop1 = transitfeed.Stop() | |
stop1.stop_id = "STOP1" | |
stop1.stop_name = "Stop 1" | |
stop1.stop_lat = 78.243587 | |
stop1.stop_lon = 32.258937 | |
schedule.AddStopObject(stop1) | |
trip.AddStopTime(stop1, arrival_time="12:00:00", departure_time="12:00:00") | |
stop2 = transitfeed.Stop() | |
stop2.stop_id = "STOP2" | |
stop2.stop_name = "Stop 2" | |
stop2.stop_lat = 78.253587 | |
stop2.stop_lon = 32.258937 | |
schedule.AddStopObject(stop2) | |
trip.AddStopTime(stop2, arrival_time="12:05:00", departure_time="12:05:00") | |
schedule.Validate() # not necessary, but helpful for finding problems | |
schedule.WriteGoogleTransitFeed("new_feed.zip") | |
class AgencyIDValidationTestCase(util.TestCase): | |
def runTest(self): | |
schedule = transitfeed.Schedule( | |
problem_reporter=ExceptionProblemReporterNoExpiration()) | |
route = transitfeed.Route() | |
route.route_id = "SAMPLE_ID" | |
route.route_type = 3 | |
route.route_long_name = "Sample Route" | |
    # No agency defined yet; adding the route must fail.
try: | |
schedule.AddRouteObject(route) | |
self.fail("Expected validation error") | |
except transitfeed.InvalidValue, e: | |
self.assertEqual('agency_id', e.column_name) | |
self.assertEqual(None, e.value) | |
# one agency defined, assume that the route belongs to it | |
schedule.AddAgency("Test Agency", "http://example.com", | |
"America/Los_Angeles", "TEST_AGENCY") | |
schedule.AddRouteObject(route) | |
schedule.AddAgency("Test Agency 2", "http://example.com", | |
"America/Los_Angeles", "TEST_AGENCY_2") | |
route = transitfeed.Route() | |
route.route_id = "SAMPLE_ID_2" | |
route.route_type = 3 | |
route.route_long_name = "Sample Route 2" | |
# multiple agencies defined, don't know what omitted agency_id should be | |
try: | |
schedule.AddRouteObject(route) | |
self.fail("Expected validation error") | |
except transitfeed.InvalidValue, e: | |
self.assertEqual('agency_id', e.column_name) | |
self.assertEqual(None, e.value) | |
# agency with no agency_id defined, matches route with no agency id | |
schedule.AddAgency("Test Agency 3", "http://example.com", | |
"America/Los_Angeles") | |
schedule.AddRouteObject(route) | |
class AddFrequencyValidationTestCase(ValidationTestCase): | |
def ExpectInvalidValue(self, start_time, end_time, headway, | |
column_name, value): | |
try: | |
trip = transitfeed.Trip() | |
trip.AddFrequency(start_time, end_time, headway) | |
self.fail("Expected InvalidValue error on %s" % column_name) | |
except transitfeed.InvalidValue, e: | |
self.assertEqual(column_name, e.column_name) | |
self.assertEqual(value, e.value) | |
self.assertEqual(0, len(trip.GetFrequencyTuples())) | |
def ExpectMissingValue(self, start_time, end_time, headway, column_name): | |
try: | |
trip = transitfeed.Trip() | |
trip.AddFrequency(start_time, end_time, headway) | |
self.fail("Expected MissingValue error on %s" % column_name) | |
except transitfeed.MissingValue, e: | |
self.assertEqual(column_name, e.column_name) | |
self.assertEqual(0, len(trip.GetFrequencyTuples())) | |
def runTest(self): | |
# these should work fine | |
trip = transitfeed.Trip() | |
trip.trip_id = "SAMPLE_ID" | |
trip.AddFrequency(0, 50, 1200) | |
trip.AddFrequency("01:00:00", "02:00:00", "600") | |
trip.AddFrequency(u"02:00:00", u"03:00:00", u"1800") | |
headways = trip.GetFrequencyTuples() | |
self.assertEqual(3, len(headways)) | |
self.assertEqual((0, 50, 1200), headways[0]) | |
self.assertEqual((3600, 7200, 600), headways[1]) | |
self.assertEqual((7200, 10800, 1800), headways[2]) | |
self.assertEqual([("SAMPLE_ID", "00:00:00", "00:00:50", "1200"), | |
("SAMPLE_ID", "01:00:00", "02:00:00", "600"), | |
("SAMPLE_ID", "02:00:00", "03:00:00", "1800")], | |
trip.GetFrequencyOutputTuples()) | |
# now test invalid input | |
self.ExpectMissingValue(None, 50, 1200, "start_time") | |
self.ExpectMissingValue("", 50, 1200, "start_time") | |
self.ExpectInvalidValue("midnight", 50, 1200, "start_time", "midnight") | |
self.ExpectInvalidValue(-50, 50, 1200, "start_time", -50) | |
self.ExpectMissingValue(0, None, 1200, "end_time") | |
self.ExpectMissingValue(0, "", 1200, "end_time") | |
self.ExpectInvalidValue(0, "noon", 1200, "end_time", "noon") | |
self.ExpectInvalidValue(0, -50, 1200, "end_time", -50) | |
self.ExpectMissingValue(0, 600, 0, "headway_secs") | |
self.ExpectMissingValue(0, 600, None, "headway_secs") | |
self.ExpectMissingValue(0, 600, "", "headway_secs") | |
self.ExpectInvalidValue(0, 600, "test", "headway_secs", "test") | |
self.ExpectInvalidValue(0, 600, -60, "headway_secs", -60) | |
self.ExpectInvalidValue(0, 0, 1200, "end_time", 0) | |
self.ExpectInvalidValue("12:00:00", "06:00:00", 1200, "end_time", 21600) | |
class ScheduleBuilderTestCase(TempFileTestCaseBase): | |
"""Tests for using a Schedule object to build a GTFS file.""" | |
def testBuildFeedWithUtf8Names(self): | |
problems = GetTestFailureProblemReporter(self) | |
schedule = transitfeed.Schedule(problem_reporter=problems) | |
schedule.AddAgency("\xc8\x8b Fly Agency", "http://iflyagency.com", | |
"America/Los_Angeles") | |
service_period = schedule.GetDefaultServicePeriod() | |
service_period.SetDateHasService('20070101') | |
    # U+020B "latin small letter i with inverted breve" encoded in utf-8
stop1 = schedule.AddStop(lng=140, lat=48.2, name="\xc8\x8b hub") | |
    # U+020B "latin small letter i with inverted breve" as a unicode string
stop2 = schedule.AddStop(lng=140.001, lat=48.201, name=u"remote \u020b station") | |
route = schedule.AddRoute(u"\u03b2", "Beta", "Bus") | |
trip = route.AddTrip(schedule, u"to remote \u020b station") | |
repr(stop1) | |
repr(stop2) | |
repr(route) | |
repr(trip) | |
trip.AddStopTime(stop1, schedule=schedule, stop_time='10:00:00') | |
trip.AddStopTime(stop2, stop_time='10:10:00') | |
schedule.Validate(problems) | |
schedule.WriteGoogleTransitFeed(self.tempfilepath) | |
read_schedule = \ | |
transitfeed.Loader(self.tempfilepath, problems=problems, | |
extra_validation=True).Load() | |
self.assertEquals(u'\u020b Fly Agency', | |
read_schedule.GetDefaultAgency().agency_name) | |
self.assertEquals(u'\u03b2', | |
read_schedule.GetRoute(route.route_id).route_short_name) | |
self.assertEquals(u'to remote \u020b station', | |
read_schedule.GetTrip(trip.trip_id).trip_headsign) | |
  def testBuildSimpleFeed(self):
    """Make a very simple feed using the Schedule class."""
    problems = GetTestFailureProblemReporter(self, ("ExpirationDate",
                                                    "NoServiceExceptions"))
    schedule = transitfeed.Schedule(problem_reporter=problems)
    schedule.AddAgency("Test Agency", "http://example.com",
                       "America/Los_Angeles")
    service_period = schedule.GetDefaultServicePeriod()
    self.assertTrue(service_period.service_id)
    service_period.SetWeekdayService(has_service=True)
    service_period.SetStartDate("20070320")
    service_period.SetEndDate("20071231")

    stop1 = schedule.AddStop(lng=-140.12, lat=48.921,
                             name="one forty at forty eight")
    stop2 = schedule.AddStop(lng=-140.22, lat=48.421, name="west and south")
    stop3 = schedule.AddStop(lng=-140.32, lat=48.121, name="more away")
    stop4 = schedule.AddStop(lng=-140.42, lat=48.021, name="more more away")

    route = schedule.AddRoute(short_name="R", long_name="My Route",
                              route_type="Bus")
    self.assertTrue(route.route_id)
    self.assertEqual(route.route_short_name, "R")
    self.assertEqual(route.route_type, 3)

    trip = route.AddTrip(schedule, headsign="To The End",
                         service_period=service_period)
    trip_id = trip.trip_id
    self.assertTrue(trip_id)
    trip = schedule.GetTrip(trip_id)
    self.assertEqual("To The End", trip.trip_headsign)
    self.assertEqual(service_period, trip.service_period)

    trip.AddStopTime(stop=stop1, arrival_secs=3600*8, departure_secs=3600*8)
    trip.AddStopTime(stop=stop2)
    trip.AddStopTime(stop=stop3, arrival_secs=3600*8 + 60*60,
                     departure_secs=3600*8 + 60*60)
    trip.AddStopTime(stop=stop4, arrival_time="9:13:00",
                     departure_secs=3600*8 + 60*103, stop_headsign="Last stop",
                     pickup_type=1, drop_off_type=3)

    schedule.Validate()
    schedule.WriteGoogleTransitFeed(self.tempfilepath)

    read_schedule = \
        transitfeed.Loader(self.tempfilepath, problems=problems,
                           extra_validation=True).Load()
    self.assertEqual(4, len(read_schedule.GetTrip(trip_id).GetTimeStops()))
    self.assertEqual(1, len(read_schedule.GetRouteList()))
    self.assertEqual(4, len(read_schedule.GetStopList()))
  def testStopIdConflict(self):
    problems = GetTestFailureProblemReporter(self)
    schedule = transitfeed.Schedule(problem_reporter=problems)
    schedule.AddStop(lat=3, lng=4.1, name="stop1", stop_id="1")
    schedule.AddStop(lat=3, lng=4.0, name="stop0", stop_id="0")
    schedule.AddStop(lat=3, lng=4.2, name="stop2")
    schedule.AddStop(lat=3, lng=4.2, name="stop4", stop_id="4")
    # AddStop will try to use stop_id=4 first but it is already taken
    schedule.AddStop(lat=3, lng=4.2, name="stop5")
    stop_list = sorted(schedule.GetStopList(), key=lambda s: s.stop_name)
    self.assertEqual("stop0 stop1 stop2 stop4 stop5",
                     " ".join([s.stop_name for s in stop_list]))
    self.assertMatchesRegex(r"0 1 2 4 \d{7,9}",
                            " ".join(s.stop_id for s in stop_list))
  def testRouteIdConflict(self):
    problems = GetTestFailureProblemReporter(self)
    schedule = transitfeed.Schedule(problem_reporter=problems)
    route0 = schedule.AddRoute("0", "Long Name", "Bus")
    route1 = schedule.AddRoute("1", "", "Bus", route_id="1")
    route3 = schedule.AddRoute("3", "", "Bus", route_id="3")
    route_rand = schedule.AddRoute("R", "LNR", "Bus")
    route4 = schedule.AddRoute("4", "GooCar", "Bus")
    route_list = schedule.GetRouteList()
    route_list.sort(key=lambda r: r.route_short_name)
    self.assertEqual("0 1 3 4 R",
                     " ".join(r.route_short_name for r in route_list))
    self.assertMatchesRegex(r"0 1 3 4 \d{7,9}",
                            " ".join(r.route_id for r in route_list))
    self.assertEqual("Long Name,,,GooCar,LNR",
                     ",".join(r.route_long_name for r in route_list))
  def testTripIdConflict(self):
    problems = GetTestFailureProblemReporter(self)
    schedule = transitfeed.Schedule(problem_reporter=problems)
    service_period = schedule.GetDefaultServicePeriod()
    service_period.SetDateHasService("20070101")
    route = schedule.AddRoute("0", "Long Name", "Bus")
    route.AddTrip()
    route.AddTrip(schedule=schedule, headsign="hs1",
                  service_period=service_period, trip_id="1")
    route.AddTrip(schedule, "hs2", service_period, "2")
    route.AddTrip(trip_id="4")
    route.AddTrip()  # This trip will be given a random trip_id
    trip_list = sorted(schedule.GetTripList(), key=lambda t: int(t.trip_id))
    self.assertMatchesRegex(r"0 1 2 4 \d{7,9}",
                            " ".join(t.trip_id for t in trip_list))
    self.assertEqual(",hs1,hs2,,",
                     ",".join(t["trip_headsign"] for t in trip_list))
    for t in trip_list:
      self.assertEqual(service_period.service_id, t.service_id)
      self.assertEqual(route.route_id, t.route_id)
class WriteSampleFeedTestCase(TempFileTestCaseBase):
  def assertEqualTimeString(self, a, b):
    """Assert that a and b are equal, even if they don't have the same zero
    padding on the hour, e.g. 08:45:00 vs 8:45:00."""
    if a[1] == ':':
      a = '0' + a
    if b[1] == ':':
      b = '0' + b
    self.assertEqual(a, b)

  def assertEqualWithDefault(self, a, b, default):
    """Assert that a and b are equal. Treat None and default as equal."""
    if a == b:
      return
    if a in (None, default) and b in (None, default):
      return
    self.assertTrue(False, "a=%s b=%s" % (a, b))
  def runTest(self):
    accumulator = RecordingProblemAccumulator(self,
                                              ignore_types=("ExpirationDate",))
    problems = transitfeed.ProblemReporter(accumulator)
    schedule = transitfeed.Schedule(problem_reporter=problems)
    agency = transitfeed.Agency()
    agency.agency_id = "DTA"
    agency.agency_name = "Demo Transit Authority"
    agency.agency_url = "http://google.com"
    agency.agency_timezone = "America/Los_Angeles"
    agency.agency_lang = 'en'
    # Test that unknown columns, such as agency_mission, are preserved
    agency.agency_mission = "Get You There"
    schedule.AddAgencyObject(agency)

    routes = []
    route_data = [
        ("AB", "DTA", "10", "Airport - Bullfrog", 3),
        ("BFC", "DTA", "20", "Bullfrog - Furnace Creek Resort", 3),
        ("STBA", "DTA", "30", "Stagecoach - Airport Shuttle", 3),
        ("CITY", "DTA", "40", "City", 3),
        ("AAMV", "DTA", "50", "Airport - Amargosa Valley", 3),
    ]
    for route_entry in route_data:
      route = transitfeed.Route()
      (route.route_id, route.agency_id, route.route_short_name,
       route.route_long_name, route.route_type) = route_entry
      routes.append(route)
      schedule.AddRouteObject(route)

    shape_data = [
        (36.915760, -116.751709),
        (36.905018, -116.763206),
        (36.902134, -116.777969),
        (36.904091, -116.788185),
        (36.883602, -116.814537),
        (36.874523, -116.795593),
        (36.873302, -116.786491),
        (36.869202, -116.784241),
        (36.868515, -116.784729),
    ]
    shape = transitfeed.Shape("BFC1S")
    for (lat, lon) in shape_data:
      shape.AddPoint(lat, lon)
    schedule.AddShapeObject(shape)
    week_period = transitfeed.ServicePeriod()
    week_period.service_id = "FULLW"
    week_period.start_date = "20070101"
    week_period.end_date = "20071231"
    week_period.SetWeekdayService()
    week_period.SetWeekendService()
    week_period.SetDateHasService("20070604", False)
    schedule.AddServicePeriodObject(week_period)

    weekend_period = transitfeed.ServicePeriod()
    weekend_period.service_id = "WE"
    weekend_period.start_date = "20070101"
    weekend_period.end_date = "20071231"
    weekend_period.SetWeekendService()
    schedule.AddServicePeriodObject(weekend_period)

    stops = []
    stop_data = [
        ("FUR_CREEK_RES", "Furnace Creek Resort (Demo)",
         36.425288, -117.133162, "zone-a", "1234"),
        ("BEATTY_AIRPORT", "Nye County Airport (Demo)",
         36.868446, -116.784682, "zone-a", "1235"),
        ("BULLFROG", "Bullfrog (Demo)", 36.88108, -116.81797, "zone-b", "1236"),
        ("STAGECOACH", "Stagecoach Hotel & Casino (Demo)",
         36.915682, -116.751677, "zone-c", "1237"),
        ("NADAV", "North Ave / D Ave N (Demo)", 36.914893, -116.76821, "", ""),
        ("NANAA", "North Ave / N A Ave (Demo)", 36.914944, -116.761472, "", ""),
        ("DADAN", "Doing Ave / D Ave N (Demo)", 36.909489, -116.768242, "", ""),
        ("EMSI", "E Main St / S Irving St (Demo)",
         36.905697, -116.76218, "", ""),
        ("AMV", "Amargosa Valley (Demo)", 36.641496, -116.40094, "", ""),
    ]
    for stop_entry in stop_data:
      stop = transitfeed.Stop()
      (stop.stop_id, stop.stop_name, stop.stop_lat, stop.stop_lon,
       stop.zone_id, stop.stop_code) = stop_entry
      schedule.AddStopObject(stop)
      stops.append(stop)
    # Add a value to an unknown column and make sure it is preserved
    schedule.GetStop("BULLFROG").stop_sound = "croak!"
    trip_data = [
        ("AB", "FULLW", "AB1", "to Bullfrog", "0", "1", None),
        ("AB", "FULLW", "AB2", "to Airport", "1", "2", None),
        ("STBA", "FULLW", "STBA", "Shuttle", None, None, None),
        ("CITY", "FULLW", "CITY1", None, "0", None, None),
        ("CITY", "FULLW", "CITY2", None, "1", None, None),
        ("BFC", "FULLW", "BFC1", "to Furnace Creek Resort", "0", "1", "BFC1S"),
        ("BFC", "FULLW", "BFC2", "to Bullfrog", "1", "2", None),
        ("AAMV", "WE", "AAMV1", "to Amargosa Valley", "0", None, None),
        ("AAMV", "WE", "AAMV2", "to Airport", "1", None, None),
        ("AAMV", "WE", "AAMV3", "to Amargosa Valley", "0", None, None),
        ("AAMV", "WE", "AAMV4", "to Airport", "1", None, None),
    ]
    trips = []
    for trip_entry in trip_data:
      trip = transitfeed.Trip()
      (trip.route_id, trip.service_id, trip.trip_id, trip.trip_headsign,
       trip.direction_id, trip.block_id, trip.shape_id) = trip_entry
      trips.append(trip)
      schedule.AddTripObject(trip)

    stop_time_data = {
        "STBA": [
            ("6:00:00", "6:00:00", "STAGECOACH", None, None, None, None),
            ("6:20:00", "6:20:00", "BEATTY_AIRPORT", None, None, None, None)],
        "CITY1": [
            ("6:00:00", "6:00:00", "STAGECOACH", 1.34, 0, 0, "stop 1"),
            ("6:05:00", "6:07:00", "NANAA", 2.40, 1, 2, "stop 2"),
            ("6:12:00", "6:14:00", "NADAV", 3.0, 2, 2, "stop 3"),
            ("6:19:00", "6:21:00", "DADAN", 4, 2, 2, "stop 4"),
            ("6:26:00", "6:28:00", "EMSI", 5.78, 2, 3, "stop 5")],
        "CITY2": [
            ("6:28:00", "6:28:00", "EMSI", None, None, None, None),
            ("6:35:00", "6:37:00", "DADAN", None, None, None, None),
            ("6:42:00", "6:44:00", "NADAV", None, None, None, None),
            ("6:49:00", "6:51:00", "NANAA", None, None, None, None),
            ("6:56:00", "6:58:00", "STAGECOACH", None, None, None, None)],
        "AB1": [
            ("8:00:00", "8:00:00", "BEATTY_AIRPORT", None, None, None, None),
            ("8:10:00", "8:15:00", "BULLFROG", None, None, None, None)],
        "AB2": [
            ("12:05:00", "12:05:00", "BULLFROG", None, None, None, None),
            ("12:15:00", "12:15:00", "BEATTY_AIRPORT", None, None, None, None)],
        "BFC1": [
            ("8:20:00", "8:20:00", "BULLFROG", None, None, None, None),
            ("9:20:00", "9:20:00", "FUR_CREEK_RES", None, None, None, None)],
        "BFC2": [
            ("11:00:00", "11:00:00", "FUR_CREEK_RES", None, None, None, None),
            ("12:00:00", "12:00:00", "BULLFROG", None, None, None, None)],
        "AAMV1": [
            ("8:00:00", "8:00:00", "BEATTY_AIRPORT", None, None, None, None),
            ("9:00:00", "9:00:00", "AMV", None, None, None, None)],
        "AAMV2": [
            ("10:00:00", "10:00:00", "AMV", None, None, None, None),
            ("11:00:00", "11:00:00", "BEATTY_AIRPORT", None, None, None, None)],
        "AAMV3": [
            ("13:00:00", "13:00:00", "BEATTY_AIRPORT", None, None, None, None),
            ("14:00:00", "14:00:00", "AMV", None, None, None, None)],
        "AAMV4": [
            ("15:00:00", "15:00:00", "AMV", None, None, None, None),
            ("16:00:00", "16:00:00", "BEATTY_AIRPORT", None, None, None, None)],
    }
    for trip_id, stop_time_list in stop_time_data.items():
      for stop_time_entry in stop_time_list:
        (arrival_time, departure_time, stop_id, shape_dist_traveled,
         pickup_type, drop_off_type, stop_headsign) = stop_time_entry
        trip = schedule.GetTrip(trip_id)
        stop = schedule.GetStop(stop_id)
        trip.AddStopTime(stop, arrival_time=arrival_time,
                         departure_time=departure_time,
                         shape_dist_traveled=shape_dist_traveled,
                         pickup_type=pickup_type, drop_off_type=drop_off_type,
                         stop_headsign=stop_headsign)

    self.assertEqual(0, schedule.GetTrip("CITY1").GetStopTimes()[0].pickup_type)
    self.assertEqual(1, schedule.GetTrip("CITY1").GetStopTimes()[1].pickup_type)

    headway_data = [
        ("STBA", "6:00:00", "22:00:00", 1800),
        ("CITY1", "6:00:00", "7:59:59", 1800),
        ("CITY2", "6:00:00", "7:59:59", 1800),
        ("CITY1", "8:00:00", "9:59:59", 600),
        ("CITY2", "8:00:00", "9:59:59", 600),
        ("CITY1", "10:00:00", "15:59:59", 1800),
        ("CITY2", "10:00:00", "15:59:59", 1800),
        ("CITY1", "16:00:00", "18:59:59", 600),
        ("CITY2", "16:00:00", "18:59:59", 600),
        ("CITY1", "19:00:00", "22:00:00", 1800),
        ("CITY2", "19:00:00", "22:00:00", 1800),
    ]
    headway_trips = {}
    for headway_entry in headway_data:
      (trip_id, start_time, end_time, headway) = headway_entry
      headway_trips[trip_id] = []  # filled in with frequency tuples below
      trip = schedule.GetTrip(trip_id)
      trip.AddFrequency(start_time, end_time, headway, problems)
    for trip_id in headway_trips:
      headway_trips[trip_id] = \
          schedule.GetTrip(trip_id).GetFrequencyTuples()

    fare_data = [
        ("p", 1.25, "USD", 0, 0),
        ("a", 5.25, "USD", 0, 0),
    ]
    fares = []
    for fare_entry in fare_data:
      fare = transitfeed.FareAttribute(fare_entry[0], fare_entry[1],
                                       fare_entry[2], fare_entry[3],
                                       fare_entry[4])
      fares.append(fare)
      schedule.AddFareAttributeObject(fare)

    fare_rule_data = [
        ("p", "AB", "zone-a", "zone-b", None),
        ("p", "STBA", "zone-a", None, "zone-c"),
        ("p", "BFC", None, "zone-b", "zone-a"),
        ("a", "AAMV", None, None, None),
    ]
    for fare_id, route_id, orig_id, dest_id, contains_id in fare_rule_data:
      rule = transitfeed.FareRule(
          fare_id=fare_id, route_id=route_id, origin_id=orig_id,
          destination_id=dest_id, contains_id=contains_id)
      schedule.AddFareRuleObject(rule, problems)

    schedule.Validate(problems)
    accumulator.AssertNoMoreExceptions()
    schedule.WriteGoogleTransitFeed(self.tempfilepath)

    read_schedule = \
        transitfeed.Loader(self.tempfilepath, problems=problems,
                           extra_validation=True).Load()
    e = accumulator.PopException("UnrecognizedColumn")
    self.assertEqual(e.file_name, "agency.txt")
    self.assertEqual(e.column_name, "agency_mission")
    e = accumulator.PopException("UnrecognizedColumn")
    self.assertEqual(e.file_name, "stops.txt")
    self.assertEqual(e.column_name, "stop_sound")
    accumulator.AssertNoMoreExceptions()

    self.assertEqual(1, len(read_schedule.GetAgencyList()))
    self.assertEqual(agency, read_schedule.GetAgency(agency.agency_id))

    self.assertEqual(len(routes), len(read_schedule.GetRouteList()))
    for route in routes:
      self.assertEqual(route, read_schedule.GetRoute(route.route_id))

    self.assertEqual(2, len(read_schedule.GetServicePeriodList()))
    self.assertEqual(week_period,
                     read_schedule.GetServicePeriod(week_period.service_id))
    self.assertEqual(weekend_period,
                     read_schedule.GetServicePeriod(weekend_period.service_id))

    self.assertEqual(len(stops), len(read_schedule.GetStopList()))
    for stop in stops:
      self.assertEqual(stop, read_schedule.GetStop(stop.stop_id))
    self.assertEqual("croak!", read_schedule.GetStop("BULLFROG").stop_sound)

    self.assertEqual(len(trips), len(read_schedule.GetTripList()))
    for trip in trips:
      self.assertEqual(trip, read_schedule.GetTrip(trip.trip_id))

    for trip_id in headway_trips:
      self.assertEqual(headway_trips[trip_id],
                       read_schedule.GetTrip(trip_id).GetFrequencyTuples())

    for trip_id, stop_time_list in stop_time_data.items():
      trip = read_schedule.GetTrip(trip_id)
      read_stoptimes = trip.GetStopTimes()
      self.assertEqual(len(read_stoptimes), len(stop_time_list))
      for stop_time_entry, read_stoptime in zip(stop_time_list,
                                                read_stoptimes):
        (arrival_time, departure_time, stop_id, shape_dist_traveled,
         pickup_type, drop_off_type, stop_headsign) = stop_time_entry
        self.assertEqual(stop_id, read_stoptime.stop_id)
        self.assertEqual(read_schedule.GetStop(stop_id), read_stoptime.stop)
        self.assertEqualTimeString(arrival_time, read_stoptime.arrival_time)
        self.assertEqualTimeString(departure_time,
                                   read_stoptime.departure_time)
        self.assertEqual(shape_dist_traveled,
                         read_stoptime.shape_dist_traveled)
        self.assertEqualWithDefault(pickup_type, read_stoptime.pickup_type, 0)
        self.assertEqualWithDefault(drop_off_type,
                                    read_stoptime.drop_off_type, 0)
        self.assertEqualWithDefault(stop_headsign,
                                    read_stoptime.stop_headsign, '')

    self.assertEqual(len(fares), len(read_schedule.GetFareAttributeList()))
    for fare in fares:
      self.assertEqual(fare, read_schedule.GetFareAttribute(fare.fare_id))

    read_fare_rules_data = []
    for fare in read_schedule.GetFareAttributeList():
      for rule in fare.GetFareRuleList():
        self.assertEqual(fare.fare_id, rule.fare_id)
        read_fare_rules_data.append((fare.fare_id, rule.route_id,
                                     rule.origin_id, rule.destination_id,
                                     rule.contains_id))
    fare_rule_data.sort()
    read_fare_rules_data.sort()
    self.assertEqual(len(read_fare_rules_data), len(fare_rule_data))
    for rf, f in zip(read_fare_rules_data, fare_rule_data):
      self.assertEqual(rf, f)

    self.assertEqual(1, len(read_schedule.GetShapeList()))
    self.assertEqual(shape, read_schedule.GetShape(shape.shape_id))
    # TODO: test GetPattern
class DefaultAgencyTestCase(util.TestCase):
  def freeAgency(self, ex=''):
    agency = transitfeed.Agency()
    agency.agency_id = 'agencytestid' + ex
    agency.agency_name = 'Foo Bus Line' + ex
    agency.agency_url = 'http://gofoo.com/' + ex
    agency.agency_timezone = 'America/Los_Angeles'
    return agency

  def test_SetDefault(self):
    schedule = transitfeed.Schedule()
    agency = self.freeAgency()
    schedule.SetDefaultAgency(agency)
    self.assertEqual(agency, schedule.GetDefaultAgency())

  def test_NewDefaultAgency(self):
    schedule = transitfeed.Schedule()
    agency1 = schedule.NewDefaultAgency()
    self.assertTrue(agency1.agency_id)
    self.assertEqual(agency1.agency_id, schedule.GetDefaultAgency().agency_id)
    self.assertEqual(1, len(schedule.GetAgencyList()))
    agency2 = schedule.NewDefaultAgency()
    self.assertTrue(agency2.agency_id)
    self.assertEqual(agency2.agency_id, schedule.GetDefaultAgency().agency_id)
    self.assertEqual(2, len(schedule.GetAgencyList()))
    self.assertNotEqual(agency1, agency2)
    self.assertNotEqual(agency1.agency_id, agency2.agency_id)

    agency3 = schedule.NewDefaultAgency(agency_id='agency3',
                                        agency_name='Agency 3',
                                        agency_url='http://goagency')
    self.assertEqual(agency3.agency_id, 'agency3')
    self.assertEqual(agency3.agency_name, 'Agency 3')
    self.assertEqual(agency3.agency_url, 'http://goagency')
    self.assertEqual(agency3, schedule.GetDefaultAgency())
    self.assertEqual('agency3', schedule.GetDefaultAgency().agency_id)
    self.assertEqual(3, len(schedule.GetAgencyList()))

  def test_NoAgencyMakeNewDefault(self):
    schedule = transitfeed.Schedule()
    agency = schedule.GetDefaultAgency()
    self.assertTrue(isinstance(agency, transitfeed.Agency))
    self.assertTrue(agency.agency_id)
    self.assertEqual(1, len(schedule.GetAgencyList()))
    self.assertEqual(agency, schedule.GetAgencyList()[0])
    self.assertEqual(agency.agency_id, schedule.GetAgencyList()[0].agency_id)

  def test_AssumeSingleAgencyIsDefault(self):
    schedule = transitfeed.Schedule()
    agency1 = self.freeAgency()
    schedule.AddAgencyObject(agency1)
    agency2 = self.freeAgency('2')  # Not added to the schedule
    # agency1 is the default because it is the only Agency in the schedule
    self.assertEqual(agency1, schedule.GetDefaultAgency())

  def test_MultipleAgencyCausesNoDefault(self):
    schedule = transitfeed.Schedule()
    agency1 = self.freeAgency()
    schedule.AddAgencyObject(agency1)
    agency2 = self.freeAgency('2')
    schedule.AddAgencyObject(agency2)
    self.assertEqual(None, schedule.GetDefaultAgency())

  def test_OverwriteExistingAgency(self):
    schedule = transitfeed.Schedule()
    agency1 = self.freeAgency()
    agency1.agency_id = '1'
    schedule.AddAgencyObject(agency1)
    agency2 = schedule.NewDefaultAgency()
    # Make sure agency1 was not overwritten by the new default
    self.assertEqual(agency1, schedule.GetAgency(agency1.agency_id))
    self.assertNotEqual('1', agency2.agency_id)
class FindUniqueIdTestCase(util.TestCase):
  def test_simple(self):
    d = {}
    for i in range(0, 5):
      d[transitfeed.FindUniqueId(d)] = 1
    k = d.keys()
    k.sort()
    self.assertEqual(('0', '1', '2', '3', '4'), tuple(k))

  def test_AvoidCollision(self):
    d = {'1': 1}
    d[transitfeed.FindUniqueId(d)] = 1
    self.assertEqual(2, len(d))
    self.assertFalse('3' in d,
                     "Oops, next statement should add something to d")
    d['3'] = None
    d[transitfeed.FindUniqueId(d)] = 1
    self.assertEqual(4, len(d))
class DefaultServicePeriodTestCase(util.TestCase):
  def test_SetDefault(self):
    schedule = transitfeed.Schedule()
    service1 = transitfeed.ServicePeriod()
    service1.SetDateHasService('20070101', True)
    service1.service_id = 'SERVICE1'
    schedule.SetDefaultServicePeriod(service1)
    self.assertEqual(service1, schedule.GetDefaultServicePeriod())
    self.assertEqual(service1, schedule.GetServicePeriod(service1.service_id))

  def test_NewDefault(self):
    schedule = transitfeed.Schedule()
    service1 = schedule.NewDefaultServicePeriod()
    self.assertTrue(service1.service_id)
    schedule.GetServicePeriod(service1.service_id)
    service1.SetDateHasService('20070101', True)  # Make service1 different
    service2 = schedule.NewDefaultServicePeriod()
    schedule.GetServicePeriod(service2.service_id)
    self.assertTrue(service1.service_id)
    self.assertTrue(service2.service_id)
    self.assertNotEqual(service1, service2)
    self.assertNotEqual(service1.service_id, service2.service_id)

  def test_NoServicesMakesNewDefault(self):
    schedule = transitfeed.Schedule()
    service1 = schedule.GetDefaultServicePeriod()
    self.assertEqual(service1, schedule.GetServicePeriod(service1.service_id))

  def test_AssumeSingleServiceIsDefault(self):
    schedule = transitfeed.Schedule()
    service1 = transitfeed.ServicePeriod()
    service1.SetDateHasService('20070101', True)
    service1.service_id = 'SERVICE1'
    schedule.AddServicePeriodObject(service1)
    self.assertEqual(service1, schedule.GetDefaultServicePeriod())
    self.assertEqual(service1.service_id,
                     schedule.GetDefaultServicePeriod().service_id)

  def test_MultipleServicesCausesNoDefault(self):
    schedule = transitfeed.Schedule()
    service1 = transitfeed.ServicePeriod()
    service1.service_id = 'SERVICE1'
    service1.SetDateHasService('20070101', True)
    schedule.AddServicePeriodObject(service1)
    service2 = transitfeed.ServicePeriod()
    service2.service_id = 'SERVICE2'
    service2.SetDateHasService('20070201', True)
    schedule.AddServicePeriodObject(service2)
    service_d = schedule.GetDefaultServicePeriod()
    self.assertEqual(service_d, None)
class GetTripTimeTestCase(util.TestCase):
  """Test for GetStopTimeTrips and GetTimeInterpolatedStops"""
  def setUp(self):
    problems = GetTestFailureProblemReporter(self)
    schedule = transitfeed.Schedule(problem_reporter=problems)
    self.schedule = schedule
    schedule.AddAgency("Agency", "http://iflyagency.com",
                       "America/Los_Angeles")
    service_period = schedule.GetDefaultServicePeriod()
    service_period.SetDateHasService('20070101')
    self.stop1 = schedule.AddStop(lng=140.01, lat=0, name="140.01,0")
    self.stop2 = schedule.AddStop(lng=140.02, lat=0, name="140.02,0")
    self.stop3 = schedule.AddStop(lng=140.03, lat=0, name="140.03,0")
    self.stop4 = schedule.AddStop(lng=140.04, lat=0, name="140.04,0")
    self.stop5 = schedule.AddStop(lng=140.05, lat=0, name="140.05,0")
    self.route1 = schedule.AddRoute("1", "One", "Bus")

    self.trip1 = self.route1.AddTrip(schedule, "trip 1", trip_id='trip1')
    self.trip1.AddStopTime(self.stop1, schedule=schedule,
                           departure_secs=100, arrival_secs=100)
    self.trip1.AddStopTime(self.stop2, schedule=schedule)
    self.trip1.AddStopTime(self.stop3, schedule=schedule)
    # Loop back to stop2 to test that interpolated stops work ok even when
    # a stop between timepoints is further from the timepoint than the
    # preceding stop.
    self.trip1.AddStopTime(self.stop2, schedule=schedule)
    self.trip1.AddStopTime(self.stop4, schedule=schedule,
                           departure_secs=400, arrival_secs=400)

    self.trip2 = self.route1.AddTrip(schedule, "trip 2", trip_id='trip2')
    self.trip2.AddStopTime(self.stop2, schedule=schedule,
                           departure_secs=500, arrival_secs=500)
    self.trip2.AddStopTime(self.stop3, schedule=schedule,
                           departure_secs=600, arrival_secs=600)
    self.trip2.AddStopTime(self.stop4, schedule=schedule,
                           departure_secs=700, arrival_secs=700)
    self.trip2.AddStopTime(self.stop3, schedule=schedule,
                           departure_secs=800, arrival_secs=800)

    self.trip3 = self.route1.AddTrip(schedule, "trip 3", trip_id='trip3')
  def testGetTimeInterpolatedStops(self):
    rv = self.trip1.GetTimeInterpolatedStops()
    self.assertEqual(5, len(rv))
    (secs, stoptimes, istimepoints) = tuple(zip(*rv))
    self.assertEqual((100, 160, 220, 280, 400), secs)
    self.assertEqual(("140.01,0", "140.02,0", "140.03,0", "140.02,0",
                      "140.04,0"),
                     tuple([st.stop.stop_name for st in stoptimes]))
    self.assertEqual((True, False, False, False, True), istimepoints)

    self.assertEqual([], self.trip3.GetTimeInterpolatedStops())

  def testGetTimeInterpolatedStopsUntimedEnd(self):
    self.trip2.AddStopTime(self.stop3, schedule=self.schedule)
    self.assertRaises(ValueError, self.trip2.GetTimeInterpolatedStops)

  def testGetTimeInterpolatedStopsUntimedStart(self):
    # Temporarily replace the problem reporter so that adding the first
    # StopTime without a time doesn't throw an exception.
    old_problems = self.schedule.problem_reporter
    self.schedule.problem_reporter = GetTestFailureProblemReporter(
        self, ("OtherProblem",))
    self.trip3.AddStopTime(self.stop3, schedule=self.schedule)
    self.schedule.problem_reporter = old_problems
    self.trip3.AddStopTime(self.stop2, schedule=self.schedule,
                           departure_secs=500, arrival_secs=500)
    self.assertRaises(ValueError, self.trip3.GetTimeInterpolatedStops)

  def testGetTimeInterpolatedStopsSingleStopTime(self):
    self.trip3.AddStopTime(self.stop3, schedule=self.schedule,
                           departure_secs=500, arrival_secs=500)
    rv = self.trip3.GetTimeInterpolatedStops()
    self.assertEqual(1, len(rv))
    self.assertEqual(500, rv[0][0])
    self.assertEqual(True, rv[0][2])

  def testGetStopTimeTrips(self):
    stopa = self.schedule.GetNearestStops(lon=140.03, lat=0)[0]
    self.assertEqual("140.03,0", stopa.stop_name)  # Got stop3?
    rv = stopa.GetStopTimeTrips(self.schedule)
    self.assertEqual(3, len(rv))
    (secs, trip_index, istimepoints) = tuple(zip(*rv))
    self.assertEqual((220, 600, 800), secs)
    self.assertEqual(("trip1", "trip2", "trip2"),
                     tuple([ti[0].trip_id for ti in trip_index]))
    self.assertEqual((2, 1, 3), tuple([ti[1] for ti in trip_index]))
    self.assertEqual((False, True, True), istimepoints)

  def testStopTripIndex(self):
    trip_index = self.stop3.trip_index
    trip_ids = [t.trip_id for t, i in trip_index]
    self.assertEqual(["trip1", "trip2", "trip2"], trip_ids)
    self.assertEqual([2, 1, 3], [i for t, i in trip_index])

  def testGetTrips(self):
    self.assertEqual(
        set([t.trip_id for t in self.stop1.GetTrips(self.schedule)]),
        set([self.trip1.trip_id]))
    self.assertEqual(
        set([t.trip_id for t in self.stop2.GetTrips(self.schedule)]),
        set([self.trip1.trip_id, self.trip2.trip_id]))
    self.assertEqual(
        set([t.trip_id for t in self.stop3.GetTrips(self.schedule)]),
        set([self.trip1.trip_id, self.trip2.trip_id]))
    self.assertEqual(
        set([t.trip_id for t in self.stop4.GetTrips(self.schedule)]),
        set([self.trip1.trip_id, self.trip2.trip_id]))
    self.assertEqual(
        set([t.trip_id for t in self.stop5.GetTrips(self.schedule)]),
        set())
class ApproximateDistanceBetweenStopsTestCase(util.TestCase):
  def testEquator(self):
    stop1 = transitfeed.Stop(lat=0, lng=100,
                             name='Stop one', stop_id='1')
    stop2 = transitfeed.Stop(lat=0.01, lng=100.01,
                             name='Stop two', stop_id='2')
    self.assertAlmostEqual(
        transitfeed.ApproximateDistanceBetweenStops(stop1, stop2),
        1570, -1)  # Compare to the nearest 10 meters

  def testWhati(self):
    stop1 = transitfeed.Stop(lat=63.1, lng=-117.2,
                             name='Stop whati one', stop_id='1')
    stop2 = transitfeed.Stop(lat=63.102, lng=-117.201,
                             name='Stop whati two', stop_id='2')
    self.assertAlmostEqual(
        transitfeed.ApproximateDistanceBetweenStops(stop1, stop2),
        228, 0)
class TimeConversionHelpersTestCase(util.TestCase):
  def testTimeToSecondsSinceMidnight(self):
    self.assertEqual(transitfeed.TimeToSecondsSinceMidnight("01:02:03"), 3723)
    self.assertEqual(transitfeed.TimeToSecondsSinceMidnight("00:00:00"), 0)
    self.assertEqual(transitfeed.TimeToSecondsSinceMidnight("25:24:23"), 91463)
    try:
      transitfeed.TimeToSecondsSinceMidnight("10:15:00am")
    except transitfeed.Error:
      pass  # expected
    else:
      self.fail("Should have thrown Error")

  def testFormatSecondsSinceMidnight(self):
    self.assertEqual(transitfeed.FormatSecondsSinceMidnight(3723), "01:02:03")
    self.assertEqual(transitfeed.FormatSecondsSinceMidnight(0), "00:00:00")
    self.assertEqual(transitfeed.FormatSecondsSinceMidnight(91463), "25:24:23")

  def testDateStringToDateObject(self):
    self.assertEqual(transitfeed.DateStringToDateObject("20080901"),
                     datetime.date(2008, 9, 1))
    try:
      transitfeed.DateStringToDateObject("20080841")
    except ValueError:
      pass  # expected
    else:
      self.fail("Should have thrown ValueError")
class FloatStringToFloatTestCase(util.TestCase):
  def runTest(self):
    accumulator = RecordingProblemAccumulator(self)
    problems = transitfeed.ProblemReporter(accumulator)

    self.assertAlmostEqual(0, transitfeed.FloatStringToFloat("0", problems))
    self.assertAlmostEqual(0, transitfeed.FloatStringToFloat(u"0", problems))
    self.assertAlmostEqual(1, transitfeed.FloatStringToFloat("1", problems))
    self.assertAlmostEqual(1,
                           transitfeed.FloatStringToFloat("1.00000", problems))
    self.assertAlmostEqual(1.5,
                           transitfeed.FloatStringToFloat("1.500", problems))
    self.assertAlmostEqual(-2,
                           transitfeed.FloatStringToFloat("-2.0", problems))
    self.assertAlmostEqual(-2.5,
                           transitfeed.FloatStringToFloat("-2.5", problems))
    self.assertRaises(ValueError,
                      transitfeed.FloatStringToFloat, ".", problems)
    self.assertRaises(ValueError,
                      transitfeed.FloatStringToFloat, "0x20", problems)
    self.assertRaises(ValueError,
                      transitfeed.FloatStringToFloat, "-0x20", problems)
    self.assertRaises(ValueError,
                      transitfeed.FloatStringToFloat, "0b10", problems)

    # These should issue a warning, but otherwise parse successfully
    self.assertAlmostEqual(0.001,
                           transitfeed.FloatStringToFloat("1E-3", problems))
    e = accumulator.PopException("InvalidFloatValue")
    self.assertAlmostEqual(0.001,
                           transitfeed.FloatStringToFloat(".001", problems))
    e = accumulator.PopException("InvalidFloatValue")
    self.assertAlmostEqual(-0.001,
                           transitfeed.FloatStringToFloat("-.001", problems))
    e = accumulator.PopException("InvalidFloatValue")
    self.assertAlmostEqual(0,
                           transitfeed.FloatStringToFloat("0.", problems))
    e = accumulator.PopException("InvalidFloatValue")
    accumulator.AssertNoMoreExceptions()
class NonNegIntStringToIntTestCase(util.TestCase): | |
def runTest(self): | |
accumulator = RecordingProblemAccumulator(self) | |
problems = transitfeed.ProblemReporter(accumulator) | |
self.assertEqual(0, transitfeed.NonNegIntStringToInt("0", problems)) | |
self.assertEqual(0, transitfeed.NonNegIntStringToInt(u"0", problems)) | |
self.assertEqual(1, transitfeed.NonNegIntStringToInt("1", problems)) | |
self.assertEqual(2, transitfeed.NonNegIntStringToInt("2", problems)) | |
self.assertEqual(10, transitfeed.NonNegIntStringToInt("10", problems)) | |
self.assertEqual(1234567890123456789, | |
transitfeed.NonNegIntStringToInt("1234567890123456789", | |
problems)) | |
self.assertRaises(ValueError, | |
transitfeed.NonNegIntStringToInt, "", problems) | |
self.assertRaises(ValueError, | |
transitfeed.NonNegIntStringToInt, "-1", problems) | |
self.assertRaises(ValueError, | |
transitfeed.NonNegIntStringToInt, "0x1", problems) | |
self.assertRaises(ValueError, | |
transitfeed.NonNegIntStringToInt, "1.0", problems) | |
self.assertRaises(ValueError, | |
transitfeed.NonNegIntStringToInt, "1e1", problems) | |
self.assertRaises(ValueError, | |
transitfeed.NonNegIntStringToInt, "0x20", problems) | |
self.assertRaises(ValueError, | |
transitfeed.NonNegIntStringToInt, "0b10", problems) | |
self.assertRaises(TypeError, | |
transitfeed.NonNegIntStringToInt, 1, problems) | |
self.assertRaises(TypeError, | |
transitfeed.NonNegIntStringToInt, None, problems) | |
# These should issue a warning, but otherwise parse successfully | |
self.assertEqual(1, transitfeed.NonNegIntStringToInt("+1", problems)) | |
e = accumulator.PopException("InvalidNonNegativeIntegerValue") | |
self.assertEqual(1, transitfeed.NonNegIntStringToInt("01", problems)) | |
e = accumulator.PopException("InvalidNonNegativeIntegerValue") | |
self.assertEqual(0, transitfeed.NonNegIntStringToInt("00", problems)) | |
e = accumulator.PopException("InvalidNonNegativeIntegerValue") | |
accumulator.AssertNoMoreExceptions() | |
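# The parsing rules exercised above can be sketched as follows. This is a
# hypothetical reimplementation for illustration, not the actual
# transitfeed.NonNegIntStringToInt: only canonical decimal strings parse
# silently, "+1" and leading zeros parse but are recorded as warnings, and
# everything else raises.

```python
def non_neg_int_string_to_int(value, warnings):
    """Parse a non-negative decimal string.

    Non-canonical but recoverable forms such as "+1" or "01" are appended to
    warnings; non-decimal strings raise ValueError, non-strings TypeError.
    """
    if not isinstance(value, str):
        raise TypeError("expected a string, got %r" % (value,))
    if not (value.isdigit() or
            (value.startswith("+") and value[1:].isdigit())):
        # Rejects "", "-1", "0x1", "1.0", "1e1", "0b10", ...
        raise ValueError("not a non-negative integer: %r" % (value,))
    result = int(value, 10)
    # The canonical form has no "+" sign and no leading zeros.
    if value != str(result):
        warnings.append(value)
    return result
```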
class GetFrequencyTimesTestCase(util.TestCase): | |
"""Test for GetFrequencyStartTimes and GetFrequencyStopTimes""" | |
def setUp(self): | |
problems = GetTestFailureProblemReporter(self) | |
schedule = transitfeed.Schedule(problem_reporter=problems) | |
self.schedule = schedule | |
schedule.AddAgency("Agency", "http://iflyagency.com", | |
"America/Los_Angeles") | |
service_period = schedule.GetDefaultServicePeriod() | |
service_period.SetStartDate("20080101") | |
service_period.SetEndDate("20090101") | |
service_period.SetWeekdayService(True) | |
self.stop1 = schedule.AddStop(lng=140.01, lat=0, name="140.01,0") | |
self.stop2 = schedule.AddStop(lng=140.02, lat=0, name="140.02,0") | |
self.stop3 = schedule.AddStop(lng=140.03, lat=0, name="140.03,0") | |
self.stop4 = schedule.AddStop(lng=140.04, lat=0, name="140.04,0") | |
self.stop5 = schedule.AddStop(lng=140.05, lat=0, name="140.05,0") | |
self.route1 = schedule.AddRoute("1", "One", "Bus") | |
self.trip1 = self.route1.AddTrip(schedule, "trip 1", trip_id="trip1") | |
# add different types of stop times | |
self.trip1.AddStopTime(self.stop1, arrival_time="17:00:00", departure_time="17:01:00") # both arrival and departure time | |
self.trip1.AddStopTime(self.stop2, schedule=schedule) # non timed | |
self.trip1.AddStopTime(self.stop3, stop_time="17:45:00") # only stop_time | |
# add headways starting before the trip | |
self.trip1.AddFrequency("16:00:00", "18:00:00", 1800)  # every 30 min
self.trip1.AddFrequency("18:00:00", "20:00:00", 2700)  # every 45 min
def testGetFrequencyStartTimes(self): | |
start_times = self.trip1.GetFrequencyStartTimes() | |
self.assertEqual( | |
["16:00:00", "16:30:00", "17:00:00", "17:30:00", | |
"18:00:00", "18:45:00", "19:30:00"], | |
[transitfeed.FormatSecondsSinceMidnight(secs) for secs in start_times]) | |
# GetHeadwayStartTimes is deprecated, but should still return the same | |
# result as GetFrequencyStartTimes | |
self.assertEqual(start_times,
                 self.trip1.GetHeadwayStartTimes())
def testGetFrequencyStopTimes(self): | |
stoptimes_list = self.trip1.GetFrequencyStopTimes() | |
arrival_secs = [] | |
departure_secs = [] | |
for stoptimes in stoptimes_list: | |
arrival_secs.append([st.arrival_secs for st in stoptimes]) | |
departure_secs.append([st.departure_secs for st in stoptimes]) | |
# GetHeadwayStopTimes is deprecated, but should still return the same | |
# result as GetFrequencyStopTimes | |
# StopTimes are instantiated as they're read from the DB so they can't be | |
# compared directly, but checking {arrival,departure}_secs should be enough | |
# to catch most errors. | |
headway_stoptimes_list = self.trip1.GetHeadwayStopTimes()
headway_arrival_secs = []
headway_departure_secs = []
for stoptimes in headway_stoptimes_list:
  headway_arrival_secs.append([st.arrival_secs for st in stoptimes])
  headway_departure_secs.append([st.departure_secs for st in stoptimes])
self.assertEqual(arrival_secs, headway_arrival_secs) | |
self.assertEqual(departure_secs, headway_departure_secs) | |
self.assertEqual(([57600,None,60300],[59400,None,62100],[61200,None,63900], | |
[63000,None,65700],[64800,None,67500],[67500,None,70200], | |
[70200,None,72900]), | |
tuple(arrival_secs)) | |
self.assertEqual(([57660,None,60300],[59460,None,62100],[61260,None,63900], | |
[63060,None,65700],[64860,None,67500],[67560,None,70200], | |
[70260,None,72900]), | |
tuple(departure_secs)) | |
# check that the cloned StopTimes are created with the same attributes as the
# ones from the original trip
stoptimes = self.trip1.GetStopTimes() | |
for stoptimes_clone in stoptimes_list: | |
self.assertEqual(len(stoptimes_clone), len(stoptimes)) | |
for st_clone, st in zip(stoptimes_clone, stoptimes): | |
for name in st.__slots__: | |
if name not in ('arrival_secs', 'departure_secs'): | |
self.assertEqual(getattr(st, name), getattr(st_clone, name)) | |
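# The start times asserted in testGetFrequencyStartTimes above can be derived
# with a short self-contained sketch (illustrative helpers, not the transitfeed
# API): for each frequency entry, trips start every headway_secs seconds from
# the entry's start time up to, but excluding, its end time.

```python
def parse_hms(hms):
    """Convert "HH:MM:SS" to seconds since midnight."""
    h, m, s = hms.split(":")
    return int(h) * 3600 + int(m) * 60 + int(s)

def format_hms(secs):
    """Convert seconds since midnight back to "HH:MM:SS"."""
    return "%02d:%02d:%02d" % (secs // 3600, (secs // 60) % 60, secs % 60)

def frequency_start_times(frequencies):
    """frequencies: list of (start_hms, end_hms, headway_secs) tuples."""
    starts = []
    for start, end, headway_secs in frequencies:
        t = parse_hms(start)
        end_secs = parse_hms(end)
        while t < end_secs:
            starts.append(t)
            t += headway_secs
    return starts

# The two frequency entries from setUp above:
starts = frequency_start_times([("16:00:00", "18:00:00", 1800),
                                ("18:00:00", "20:00:00", 2700)])
```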
class ServiceGapsTestCase(MemoryZipTestCase): | |
def setUp(self): | |
super(ServiceGapsTestCase, self).setUp() | |
self.SetArchiveContents("calendar.txt", | |
"service_id,monday,tuesday,wednesday,thursday,friday," | |
"saturday,sunday,start_date,end_date\n" | |
"FULLW,1,1,1,1,1,1,1,20090601,20090610\n" | |
"WE,0,0,0,0,0,1,1,20090718,20101231\n") | |
self.SetArchiveContents("calendar_dates.txt", | |
"service_id,date,exception_type\n" | |
"WE,20090815,2\n" | |
"WE,20090816,2\n" | |
"WE,20090822,2\n" | |
# The following two lines are a 12-day service gap. | |
# Shouldn't issue a warning | |
"WE,20090829,2\n" | |
"WE,20090830,2\n" | |
"WE,20100102,2\n" | |
"WE,20100103,2\n" | |
"WE,20100109,2\n" | |
"WE,20100110,2\n" | |
"WE,20100612,2\n" | |
"WE,20100613,2\n" | |
"WE,20100619,2\n" | |
"WE,20100620,2\n") | |
self.SetArchiveContents("trips.txt", | |
"route_id,service_id,trip_id\n" | |
"AB,WE,AB1\n" | |
"AB,FULLW,AB2\n") | |
self.SetArchiveContents( | |
"stop_times.txt", | |
"trip_id,arrival_time,departure_time,stop_id,stop_sequence\n" | |
"AB1,10:00:00,10:00:00,BEATTY_AIRPORT,1\n" | |
"AB1,10:20:00,10:20:00,BULLFROG,2\n" | |
"AB2,10:25:00,10:25:00,STAGECOACH,1\n" | |
"AB2,10:55:00,10:55:00,BULLFROG,2\n") | |
self.schedule = self.MakeLoaderAndLoad(extra_validation=False) | |
# If there is a service gap starting before today, and today has no service, | |
# it should be found - even if tomorrow there is service | |
def testServiceGapBeforeTodayIsDiscovered(self): | |
self.schedule.Validate(today=date(2009, 7, 17), | |
service_gap_interval=13) | |
exception = self.accumulator.PopException("TooManyDaysWithoutService") | |
self.assertEquals(date(2009, 7, 5), | |
exception.first_day_without_service) | |
self.assertEquals(date(2009, 7, 17), | |
exception.last_day_without_service) | |
self.AssertCommonExceptions(date(2010, 6, 25)) | |
# If today has service, past service gaps should not appear
def testNoServiceGapBeforeTodayIfTodayHasService(self): | |
self.schedule.Validate(today=date(2009, 7, 18), | |
service_gap_interval=13) | |
self.AssertCommonExceptions(date(2010, 6, 25)) | |
# If the feed starts today NO previous service gap should be found | |
# even if today does not have service | |
def testNoServiceGapBeforeTodayIfTheFeedStartsToday(self): | |
self.schedule.Validate(today=date(2009, 6, 1),
                       service_gap_interval=13)
# This service gap is the one between FULLW and WE | |
exception = self.accumulator.PopException("TooManyDaysWithoutService") | |
self.assertEquals(date(2009, 6, 11), | |
exception.first_day_without_service) | |
self.assertEquals(date(2009, 7, 17), | |
exception.last_day_without_service) | |
# The one-year period ends before the June 2010 gap, so that last | |
# service gap should _not_ be found | |
self.AssertCommonExceptions(None) | |
# If there is a gap at the end of the one-year period we should find it | |
def testGapAtTheEndOfTheOneYearPeriodIsDiscovered(self): | |
self.schedule.Validate(today=date(2009, 6, 22),
                       service_gap_interval=13)
# This service gap is the one between FULLW and WE | |
exception = self.accumulator.PopException("TooManyDaysWithoutService") | |
self.assertEquals(date(2009, 6, 11), | |
exception.first_day_without_service) | |
self.assertEquals(date(2009, 7, 17), | |
exception.last_day_without_service) | |
self.AssertCommonExceptions(date(2010, 6, 21)) | |
# If we are right in the middle of a big service gap it should be
# reported as starting on "today - 12 days" and lasting until
# service resumes
def testCurrentServiceGapIsDiscovered(self): | |
self.schedule.Validate(today=date(2009, 6, 30), | |
service_gap_interval=13) | |
exception = self.accumulator.PopException("TooManyDaysWithoutService") | |
self.assertEquals(date(2009, 6, 18), | |
exception.first_day_without_service) | |
self.assertEquals(date(2009, 7, 17), | |
exception.last_day_without_service) | |
self.AssertCommonExceptions(date(2010, 6, 25)) | |
# Asserts the service gaps that appear towards the end of the calendar | |
# and which are common to all the tests | |
def AssertCommonExceptions(self, last_exception_date): | |
exception = self.accumulator.PopException("TooManyDaysWithoutService") | |
self.assertEquals(date(2009, 8, 10), | |
exception.first_day_without_service) | |
self.assertEquals(date(2009, 8, 22), | |
exception.last_day_without_service) | |
exception = self.accumulator.PopException("TooManyDaysWithoutService") | |
self.assertEquals(date(2009, 12, 28), | |
exception.first_day_without_service) | |
self.assertEquals(date(2010, 1, 15), | |
exception.last_day_without_service) | |
if last_exception_date is not None: | |
exception = self.accumulator.PopException("TooManyDaysWithoutService") | |
self.assertEquals(date(2010, 6, 7), | |
exception.first_day_without_service) | |
self.assertEquals(last_exception_date, | |
exception.last_day_without_service) | |
self.accumulator.AssertNoMoreExceptions() | |
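# The gap detection these tests rely on can be sketched with a hypothetical
# helper (not the transitfeed API): scan a date range and report each run of
# service_gap_interval or more consecutive days without service.

```python
from datetime import date, timedelta

def find_service_gaps(service_dates, start, end, service_gap_interval=13):
    """Return (first_day, last_day) pairs for each run of at least
    service_gap_interval consecutive days in [start, end] not in service_dates.
    """
    gaps = []
    gap_start = None
    day = start
    while day <= end:
        if day in service_dates:
            if (gap_start is not None and
                (day - gap_start).days >= service_gap_interval):
                gaps.append((gap_start, day - timedelta(days=1)))
            gap_start = None
        elif gap_start is None:
            gap_start = day
        day += timedelta(days=1)
    # A gap still open at the end of the range also counts.
    if (gap_start is not None and
        (end - gap_start).days + 1 >= service_gap_interval):
        gaps.append((gap_start, end))
    return gaps
```

With the calendars above, FULLW ends on 2009-06-10 and WE starts on 2009-07-18, so a scan from June 1st finds the gap from 2009-06-11 through 2009-07-17 that testNoServiceGapBeforeTodayIfTheFeedStartsToday expects.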
class TestGtfsFactory(util.TestCase): | |
def setUp(self): | |
self._factory = transitfeed.GetGtfsFactory() | |
def testCanUpdateMapping(self): | |
self._factory.UpdateMapping("agency.txt", | |
{"required": False, | |
"classes": ["Foo"]}) | |
self._factory.RemoveClass("Agency") | |
self._factory.AddClass("Foo", transitfeed.Stop) | |
self._factory.UpdateMapping("calendar.txt", | |
{"loading_order": -4, "classes": ["Bar"]}) | |
self._factory.AddClass("Bar", transitfeed.ServicePeriod) | |
self.assertFalse(self._factory.IsFileRequired("agency.txt")) | |
self.assertFalse(self._factory.IsFileRequired("calendar.txt")) | |
self.assertTrue(self._factory.GetLoadingOrder()[0] == "calendar.txt") | |
self.assertEqual(self._factory.Foo, transitfeed.Stop) | |
self.assertEqual(self._factory.Bar, transitfeed.ServicePeriod) | |
self.assertEqual(self._factory.GetGtfsClassByFileName("agency.txt"), | |
transitfeed.Stop) | |
self.assertFalse(self._factory.IsFileRequired("agency.txt")) | |
known_filenames = self._factory.GetKnownFilenames() | |
self.assertTrue("agency.txt" in known_filenames) | |
self.assertTrue("calendar.txt" in known_filenames) | |
def testCanAddMapping(self): | |
self._factory.AddMapping("newrequiredfile.txt", | |
{ "required":True, "classes": ["NewRequiredClass"], | |
"loading_order": -20}) | |
self._factory.AddClass("NewRequiredClass", transitfeed.Stop) | |
self._factory.AddMapping("newfile.txt", | |
{ "required": False, "classes": ["NewClass"], | |
"loading_order": -10}) | |
self._factory.AddClass("NewClass", transitfeed.FareAttribute) | |
self.assertEqual(self._factory.NewClass, transitfeed.FareAttribute) | |
self.assertEqual(self._factory.NewRequiredClass, transitfeed.Stop) | |
self.assertTrue(self._factory.IsFileRequired("newrequiredfile.txt")) | |
self.assertFalse(self._factory.IsFileRequired("newfile.txt")) | |
known_filenames = self._factory.GetKnownFilenames() | |
self.assertTrue("newfile.txt" in known_filenames) | |
self.assertTrue("newrequiredfile.txt" in known_filenames) | |
loading_order = self._factory.GetLoadingOrder() | |
self.assertTrue(loading_order[0] == "newrequiredfile.txt") | |
self.assertTrue(loading_order[1] == "newfile.txt") | |
def testThrowsExceptionWhenAddingDuplicateMapping(self): | |
self.assertRaises(transitfeed.DuplicateMapping, | |
self._factory.AddMapping, | |
"agency.txt", | |
{"required": True, "classes": ["Stop"], | |
"loading_order": -20}) | |
def testThrowsExceptionWhenAddingInvalidMapping(self): | |
self.assertRaises(transitfeed.InvalidMapping, | |
self._factory.AddMapping, | |
"foo.txt", | |
{"required": True, | |
"loading_order": -20}) | |
def testThrowsExceptionWhenUpdatingNonexistentMapping(self): | |
self.assertRaises(transitfeed.NonexistentMapping, | |
self._factory.UpdateMapping, | |
'doesnotexist.txt', | |
{'required': False}) | |
def testCanRemoveFileFromLoadingOrder(self): | |
self._factory.UpdateMapping("agency.txt", | |
{"loading_order": None}) | |
self.assertTrue("agency.txt" not in self._factory.GetLoadingOrder()) | |
def testCanRemoveMapping(self): | |
self._factory.RemoveMapping("agency.txt") | |
self.assertFalse("agency.txt" in self._factory.GetKnownFilenames()) | |
self.assertFalse("agency.txt" in self._factory.GetLoadingOrder()) | |
self.assertEqual(self._factory.GetGtfsClassByFileName("agency.txt"), | |
None) | |
self.assertFalse(self._factory.IsFileRequired("agency.txt")) | |
def testIsFileRequired(self): | |
self.assertTrue(self._factory.IsFileRequired("agency.txt")) | |
self.assertTrue(self._factory.IsFileRequired("stops.txt")) | |
self.assertTrue(self._factory.IsFileRequired("routes.txt")) | |
self.assertTrue(self._factory.IsFileRequired("trips.txt")) | |
self.assertTrue(self._factory.IsFileRequired("stop_times.txt")) | |
# We don't yet have a way to specify that one or the other (or both
# simultaneously) must be provided, so neither is considered required
# for now
self.assertFalse(self._factory.IsFileRequired("calendar.txt")) | |
self.assertFalse(self._factory.IsFileRequired("calendar_dates.txt")) | |
self.assertFalse(self._factory.IsFileRequired("fare_attributes.txt")) | |
self.assertFalse(self._factory.IsFileRequired("fare_rules.txt")) | |
self.assertFalse(self._factory.IsFileRequired("shapes.txt")) | |
self.assertFalse(self._factory.IsFileRequired("frequencies.txt")) | |
self.assertFalse(self._factory.IsFileRequired("transfers.txt")) | |
def testFactoryReturnsClassesAndNotInstances(self): | |
for filename in ("agency.txt", "fare_attributes.txt", | |
"fare_rules.txt", "frequencies.txt", "stops.txt", "stop_times.txt", | |
"transfers.txt", "routes.txt", "trips.txt"): | |
class_object = self._factory.GetGtfsClassByFileName(filename) | |
self.assertTrue(isinstance(class_object, | |
(types.TypeType, types.ClassType)), | |
"The mapping from filenames to classes must return " | |
"classes and not instances. This is not the case for " + | |
filename) | |
def testCanFindClassByClassName(self): | |
self.assertEqual(transitfeed.Agency, self._factory.Agency) | |
self.assertEqual(transitfeed.FareAttribute, self._factory.FareAttribute) | |
self.assertEqual(transitfeed.FareRule, self._factory.FareRule) | |
self.assertEqual(transitfeed.Frequency, self._factory.Frequency) | |
self.assertEqual(transitfeed.Route, self._factory.Route) | |
self.assertEqual(transitfeed.ServicePeriod, self._factory.ServicePeriod) | |
self.assertEqual(transitfeed.Shape, self._factory.Shape) | |
self.assertEqual(transitfeed.ShapePoint, self._factory.ShapePoint) | |
self.assertEqual(transitfeed.Stop, self._factory.Stop) | |
self.assertEqual(transitfeed.StopTime, self._factory.StopTime) | |
self.assertEqual(transitfeed.Transfer, self._factory.Transfer) | |
self.assertEqual(transitfeed.Trip, self._factory.Trip) | |
def testCanFindClassByFileName(self): | |
self.assertEqual(transitfeed.Agency, | |
self._factory.GetGtfsClassByFileName('agency.txt')) | |
self.assertEqual(transitfeed.FareAttribute, | |
self._factory.GetGtfsClassByFileName( | |
'fare_attributes.txt')) | |
self.assertEqual(transitfeed.FareRule, | |
self._factory.GetGtfsClassByFileName('fare_rules.txt')) | |
self.assertEqual(transitfeed.Frequency, | |
self._factory.GetGtfsClassByFileName('frequencies.txt')) | |
self.assertEqual(transitfeed.Route, | |
self._factory.GetGtfsClassByFileName('routes.txt')) | |
self.assertEqual(transitfeed.ServicePeriod, | |
self._factory.GetGtfsClassByFileName('calendar.txt')) | |
self.assertEqual(transitfeed.ServicePeriod, | |
self._factory.GetGtfsClassByFileName('calendar_dates.txt')) | |
self.assertEqual(transitfeed.Stop, | |
self._factory.GetGtfsClassByFileName('stops.txt')) | |
self.assertEqual(transitfeed.StopTime, | |
self._factory.GetGtfsClassByFileName('stop_times.txt')) | |
self.assertEqual(transitfeed.Transfer, | |
self._factory.GetGtfsClassByFileName('transfers.txt')) | |
self.assertEqual(transitfeed.Trip, | |
self._factory.GetGtfsClassByFileName('trips.txt')) | |
def testClassFunctionsRaiseExceptions(self): | |
self.assertRaises(transitfeed.NonexistentMapping, | |
self._factory.RemoveClass, | |
"Agenci") | |
self.assertRaises(transitfeed.DuplicateMapping, | |
self._factory.AddClass, | |
"Agency", transitfeed.Agency) | |
self.assertRaises(transitfeed.NonStandardMapping, | |
self._factory.GetGtfsClassByFileName, | |
'shapes.txt') | |
self.assertRaises(transitfeed.NonexistentMapping, | |
self._factory.UpdateClass, | |
"Agenci", transitfeed.Agency) | |
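# The mapping behaviour exercised by TestGtfsFactory can be sketched with a toy
# factory (names here are illustrative, not the transitfeed implementation):
# adding a duplicate mapping or updating a nonexistent one raises, and the
# loading order sorts filenames by their loading_order value.

```python
class DuplicateMapping(Exception):
    pass

class NonexistentMapping(Exception):
    pass

class TinyFactory(object):
    """Minimal filename-to-properties registry mimicking the tested API."""

    def __init__(self):
        self._mapping = {}

    def AddMapping(self, filename, properties):
        if filename in self._mapping:
            raise DuplicateMapping(filename)
        self._mapping[filename] = dict(properties)

    def UpdateMapping(self, filename, properties):
        if filename not in self._mapping:
            raise NonexistentMapping(filename)
        self._mapping[filename].update(properties)

    def IsFileRequired(self, filename):
        return self._mapping.get(filename, {}).get("required", False)

    def GetLoadingOrder(self):
        ordered = [(p["loading_order"], f) for f, p in self._mapping.items()
                   if p.get("loading_order") is not None]
        return [f for _, f in sorted(ordered)]
```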
class TestGtfsFactoryUser(util.TestCase): | |
def AssertDefaultFactoryIsReturnedIfNoneIsSet(self, instance): | |
self.assertTrue(isinstance(instance.GetGtfsFactory(), | |
transitfeed.GtfsFactory)) | |
def AssertFactoryIsSavedAndReturned(self, instance, factory): | |
instance.SetGtfsFactory(factory) | |
self.assertEquals(factory, instance.GetGtfsFactory()) | |
def testClasses(self): | |
class FakeGtfsFactory(object): | |
pass | |
factory = transitfeed.GetGtfsFactory() | |
gtfs_class_instances = [ | |
factory.Shape("id"), | |
factory.ShapePoint(), | |
] | |
gtfs_class_instances += [factory.GetGtfsClassByFileName(filename)() for | |
filename in factory.GetLoadingOrder()] | |
for instance in gtfs_class_instances: | |
self.AssertDefaultFactoryIsReturnedIfNoneIsSet(instance) | |
self.AssertFactoryIsSavedAndReturned(instance, FakeGtfsFactory()) | |
if __name__ == '__main__': | |
unittest.main() | |
#!/usr/bin/python2.4 | |
# | |
# Copyright (C) 2009 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""Tests for unusual_trip_filter.py""" | |
__author__ = 'Jiri Semecky <jiri.semecky@gmail.com>' | |
import unusual_trip_filter | |
import transitfeed | |
import unittest | |
import util | |
class UnusualTripFilterTestCase(util.TempDirTestCaseBase): | |
"""Test of unusual trip filter functionality.""" | |
def testFilter(self): | |
"""Test if filtering works properly.""" | |
expected_values = { | |
'CITY1':0, 'CITY2':0, 'CITY3':0, 'CITY4' :0, 'CITY5' :0, 'CITY6' :0, | |
'CITY7':0, 'CITY8':0, 'CITY9':0, 'CITY10':0, 'CITY11':1, 'CITY12':1, | |
} | |
filter = unusual_trip_filter.UnusualTripFilter(0.1, quiet=True) | |
input = self.GetPath('test', 'data', 'filter_unusual_trips') | |
loader = transitfeed.Loader(input, extra_validation=True) | |
schedule = loader.Load() | |
filter.filter(schedule) | |
for trip_id, expected_trip_type in expected_values.items(): | |
actual_trip_type = schedule.trips[trip_id]['trip_type'] | |
try: | |
self.assertEquals(int(actual_trip_type), expected_trip_type) | |
except ValueError: | |
self.assertEquals(actual_trip_type, '') | |
def testFilterNoForceFilter(self): | |
"""Test that force==False doesn't set default values""" | |
filter = unusual_trip_filter.UnusualTripFilter(0.1, force=False, quiet=True) | |
input = self.GetPath('test', 'data', 'filter_unusual_trips') | |
loader = transitfeed.Loader(input, extra_validation=True) | |
schedule = loader.Load() | |
schedule.trips['CITY2'].trip_type = 'odd-trip' | |
filter.filter(schedule) | |
trip1 = schedule.trips['CITY1'] | |
self.assertEquals(trip1['trip_type'], '') | |
trip2 = schedule.trips['CITY2'] | |
self.assertEquals(trip2['trip_type'], 'odd-trip') | |
def testFilterForceFilter(self): | |
"""Test that force==True does set default values""" | |
filter = unusual_trip_filter.UnusualTripFilter(0.1, force=True, quiet=False) | |
input = self.GetPath('test', 'data', 'filter_unusual_trips') | |
loader = transitfeed.Loader(input, extra_validation=True) | |
schedule = loader.Load() | |
schedule.trips['CITY2'].trip_type = 'odd-trip' | |
filter.filter(schedule) | |
trip1 = schedule.trips['CITY1'] | |
self.assertEquals(trip1['trip_type'], '0') | |
trip2 = schedule.trips['CITY2'] | |
self.assertEquals(trip2['trip_type'], '0') | |
def testFilterAppliedForSpecifiedRouteType(self): | |
"""Setting integer route_type filters trips of this route type.""" | |
filter = unusual_trip_filter.UnusualTripFilter(0.1, quiet=True, | |
route_type=3) | |
input = self.GetPath('test', 'data', 'filter_unusual_trips') | |
loader = transitfeed.Loader(input, extra_validation=True) | |
schedule = loader.Load() | |
filter.filter(schedule) | |
actual_trip_type = schedule.trips['CITY11']['trip_type'] | |
self.assertEquals(actual_trip_type, '1') | |
def testFilterNotAppliedForUnspecifiedRouteType(self): | |
"""Trips of route types other than the specified route_type are not filtered."""
filter = unusual_trip_filter.UnusualTripFilter(0.1, quiet=True, | |
route_type=2) | |
input = self.GetPath('test', 'data', 'filter_unusual_trips') | |
loader = transitfeed.Loader(input, extra_validation=True) | |
schedule = loader.Load() | |
filter.filter(schedule) | |
actual_trip_type = schedule.trips['CITY11']['trip_type'] | |
self.assertEquals(actual_trip_type, '') | |
def testFilterAppliedForRouteTypeSpecifiedByName(self): | |
"""Setting route_type by name filters trips of this route type."""
filter = unusual_trip_filter.UnusualTripFilter(0.1, quiet=True, | |
route_type='Bus') | |
input = self.GetPath('test', 'data', 'filter_unusual_trips') | |
loader = transitfeed.Loader(input, extra_validation=True) | |
schedule = loader.Load() | |
filter.filter(schedule) | |
actual_trip_type = schedule.trips['CITY11']['trip_type'] | |
self.assertEquals(actual_trip_type, '1') | |
def testFilterNotAppliedForDifferentRouteTypeSpecifiedByName(self): | |
"""Trips whose route type differs from the named route_type are not filtered."""
filter = unusual_trip_filter.UnusualTripFilter(0.1, quiet=True, | |
route_type='Ferry') | |
input = self.GetPath('test', 'data', 'filter_unusual_trips') | |
loader = transitfeed.Loader(input, extra_validation=True) | |
schedule = loader.Load() | |
filter.filter(schedule) | |
actual_trip_type = schedule.trips['CITY11']['trip_type'] | |
self.assertEquals(actual_trip_type, '') | |
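# The filtering idea these tests exercise can be sketched as follows. This is
# an illustrative helper, not the unusual_trip_filter implementation: group
# trips by their stop pattern and mark as unusual any pattern that carries less
# than the given fraction of all trips.

```python
def mark_unusual_trips(trip_patterns, threshold=0.1):
    """trip_patterns: {trip_id: stop_pattern}.

    Returns {trip_id: trip_type} where 1 marks an unusual trip and 0 a common
    one, judged by the share of trips sharing the same stop pattern."""
    counts = {}
    for pattern in trip_patterns.values():
        counts[pattern] = counts.get(pattern, 0) + 1
    total = len(trip_patterns)
    return dict(
        (trip_id, 1 if counts[pattern] / float(total) < threshold else 0)
        for trip_id, pattern in trip_patterns.items())
```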
if __name__ == '__main__': | |
unittest.main() | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
# Code shared between tests. | |
import os | |
import os.path | |
import re | |
import cStringIO as StringIO | |
import shutil | |
import subprocess | |
import sys | |
import tempfile | |
import traceback | |
import transitfeed | |
import unittest | |
import zipfile | |
def check_call(cmd, expected_retcode=0, stdin_str="", **kwargs): | |
"""Convenience function that is in the docs for subprocess but not | |
installed on my system. Raises an Exception if the return code is not | |
expected_retcode. Returns a tuple of strings, (stdout, stderr).""" | |
try: | |
if 'stdout' in kwargs or 'stderr' in kwargs or 'stdin' in kwargs: | |
raise Exception("Don't pass stdout or stderr") | |
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, | |
stderr=subprocess.PIPE, stdin=subprocess.PIPE, | |
**kwargs) | |
(out, err) = p.communicate(stdin_str) | |
retcode = p.returncode | |
except Exception, e: | |
raise Exception("When running %s: %s" % (cmd, e)) | |
if retcode < 0: | |
raise Exception( | |
"Child '%s' was terminated by signal %d. Output:\n%s\n%s\n" % | |
(cmd, -retcode, out, err)) | |
elif retcode != expected_retcode: | |
raise Exception( | |
"Child '%s' returned %d. Output:\n%s\n%s\n" % | |
(cmd, retcode, out, err)) | |
return (out, err) | |
class TestCase(unittest.TestCase): | |
"""Base of every TestCase class in this project. | |
This adds some methods that perhaps should be in unittest.TestCase. | |
""" | |
# Note from Tom, Dec 9 2009: Be careful about adding setUp or tearDown | |
# because they will be run a few hundred times. | |
def assertMatchesRegex(self, regex, string): | |
"""Assert that regex is found in string.""" | |
if not re.search(regex, string): | |
self.fail("string %r did not match regex %r" % (string, regex)) | |
class GetPathTestCase(TestCase): | |
"""TestCase with method to get paths to files in the distribution.""" | |
def setUp(self): | |
super(GetPathTestCase, self).setUp() | |
self._origcwd = os.getcwd() | |
def GetExamplePath(self, name): | |
"""Return the full path of a file in the examples directory""" | |
return self.GetPath('examples', name) | |
def GetTestDataPath(self, *path): | |
"""Return the full path of a file in the test/data directory""" | |
return self.GetPath('test', 'data', *path) | |
def GetPath(self, *path): | |
"""Return the absolute path of path, which is relative to the main source
directory."""
here = os.path.dirname(__file__) # Relative to _origcwd | |
return os.path.join(self._origcwd, here, '..', *path) | |
class TempDirTestCaseBase(GetPathTestCase): | |
"""Make a temporary directory the current directory before running the test | |
and remove it after the test. | |
""" | |
def setUp(self): | |
GetPathTestCase.setUp(self) | |
self.tempdirpath = tempfile.mkdtemp() | |
os.chdir(self.tempdirpath) | |
def tearDown(self): | |
os.chdir(self._origcwd) | |
shutil.rmtree(self.tempdirpath) | |
GetPathTestCase.tearDown(self) | |
def CheckCallWithPath(self, cmd, expected_retcode=0, stdin_str=""): | |
"""Run python script cmd[0] with args cmd[1:], making sure 'import | |
transitfeed' will use the module in this source tree. Raises an Exception | |
if the return code is not expected_retcode. Returns a tuple of strings, | |
(stdout, stderr).""" | |
tf_path = transitfeed.__file__ | |
# Path of the directory containing transitfeed. When this is added to | |
# sys.path importing transitfeed should work independent of if | |
# transitfeed.__file__ is <parent>/transitfeed.py or | |
# <parent>/transitfeed/__init__.py | |
transitfeed_parent = tf_path[:tf_path.rfind("transitfeed")] | |
transitfeed_parent = transitfeed_parent.replace("\\", "/").rstrip("/") | |
script_path = cmd[0].replace("\\", "/") | |
script_args = cmd[1:] | |
# Propagate sys.path of this process to the subprocess. This is done
# because I assume that if this process has a customized sys.path it is | |
# meant to be used for all processes involved in the tests. The downside | |
# of this is that the subprocess is no longer a clean version of what you | |
# get when running "python" after installing transitfeed. Hopefully if this | |
# process uses a customized sys.path you know what you are doing. | |
env = {"PYTHONPATH": ":".join(sys.path)} | |
# Instead of directly running the script make sure that the transitfeed | |
# module in this source directory is at the front of sys.path. Then | |
# adjust sys.argv so it looks like the script was run directly. This lets | |
# OptionParser use the correct value for %proj. | |
cmd = [sys.executable, "-c", | |
"import sys; " | |
"sys.path.insert(0,'%s'); " | |
"sys.argv = ['%s'] + sys.argv[1:]; " | |
"exec(open('%s'))" % | |
(transitfeed_parent, script_path, script_path)] + script_args | |
return check_call(cmd, expected_retcode=expected_retcode, shell=False, | |
env=env, stdin_str=stdin_str) | |
def ConvertZipToDict(self, zip): | |
"""Converts a zip file into a dictionary. | |
Arguments: | |
zip: The zipfile whose contents are to be converted to a dictionary. | |
Returns: | |
A dictionary mapping filenames to file contents.""" | |
zip_dict = {} | |
for archive_name in zip.namelist(): | |
zip_dict[archive_name] = zip.read(archive_name) | |
zip.close() | |
return zip_dict | |
def ConvertDictToZip(self, dict): | |
"""Converts a dictionary to an in-memory zipfile. | |
Arguments: | |
dict: A dictionary mapping file names to file contents | |
Returns: | |
The new file's in-memory contents as a file-like object.""" | |
zipfile_mem = StringIO.StringIO() | |
zip = zipfile.ZipFile(zipfile_mem, 'a') | |
for arcname, contents in dict.items(): | |
zip.writestr(arcname, contents) | |
zip.close() | |
return zipfile_mem | |
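# The zip round-trip above can be sketched self-containedly with io.BytesIO
# instead of Python 2's cStringIO (illustrative stand-ins for ConvertDictToZip
# and ConvertZipToDict, written to run on modern Python as well).

```python
import io
import zipfile

def dict_to_zip(contents):
    """Write a {filename: bytes} dict into an in-memory zip archive."""
    buf = io.BytesIO()
    archive = zipfile.ZipFile(buf, "w")
    for arcname, data in contents.items():
        archive.writestr(arcname, data)
    archive.close()
    return buf

def zip_to_dict(buf):
    """Read every member of an in-memory zip archive back into a dict."""
    archive = zipfile.ZipFile(buf)
    result = dict((name, archive.read(name)) for name in archive.namelist())
    archive.close()
    return result
```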
#TODO(anog): Revisit this after we implement proper per-exception level change | |
class RecordingProblemAccumulator(transitfeed.ProblemAccumulatorInterface): | |
"""Save all problems for later inspection. | |
Args: | |
test_case: a unittest.TestCase object on which to report problems | |
ignore_types: sequence of string type names that will be ignored by the | |
ProblemAccumulator""" | |
def __init__(self, test_case, ignore_types=None): | |
self.exceptions = [] | |
self._test_case = test_case | |
self._ignore_types = ignore_types or set() | |
def _Report(self, e): | |
# Ensure that these don't crash | |
e.FormatProblem() | |
e.FormatContext() | |
if e.__class__.__name__ in self._ignore_types: | |
return | |
# Keep the 7 nearest stack frames. This should be enough to identify | |
# the code path that created the exception while trimming off most of the | |
# large test framework's stack. | |
traceback_list = traceback.format_list(traceback.extract_stack()[-7:-1]) | |
self.exceptions.append((e, ''.join(traceback_list))) | |
def PopException(self, type_name): | |
"""Pop and return the first exception, which must be of type type_name."""
e = self.exceptions.pop(0) | |
e_name = e[0].__class__.__name__ | |
self._test_case.assertEqual(e_name, type_name, | |
"%s != %s\n%s" % | |
(e_name, type_name, self.FormatException(*e))) | |
return e[0] | |
def FormatException(self, exce, tb): | |
return ("%s\nwith gtfs file context %s\nand traceback\n%s" % | |
(exce.FormatProblem(), exce.FormatContext(), tb)) | |
def TearDownAssertNoMoreExceptions(self): | |
"""Assert that there are no unexpected problems left after a test has run. | |
This function should be called on a test's tearDown. For more information | |
please see AssertNoMoreExceptions""" | |
assert len(self.exceptions) == 0, \ | |
"see util.RecordingProblemAccumulator.AssertNoMoreExceptions" | |
def AssertNoMoreExceptions(self): | |
"""Check that no unexpected problems were reported. | |
Every test that uses a RecordingProblemReporter should end with a call to | |
this method. If setUp creates a RecordingProblemReporter it is good for | |
tearDown to double check that the exceptions list was emptied. | |
""" | |
exceptions_as_text = [] | |
for e, tb in self.exceptions: | |
exceptions_as_text.append(self.FormatException(e, tb)) | |
# If the assertFalse below fails the test will abort and tearDown is | |
# called. Some tearDown methods assert that self.exceptions is empty as | |
# protection against a test that doesn't end with AssertNoMoreExceptions | |
# and has exceptions remaining in the RecordingProblemReporter. It would | |
# be nice to trigger a normal test failure in tearDown but the idea was | |
# rejected (http://bugs.python.org/issue5531). | |
self.exceptions = [] | |
self._test_case.assertFalse(exceptions_as_text, | |
"\n".join(exceptions_as_text)) | |
def PopInvalidValue(self, column_name, file_name=None): | |
e = self.PopException("InvalidValue") | |
self._test_case.assertEquals(column_name, e.column_name) | |
if file_name: | |
self._test_case.assertEquals(file_name, e.file_name) | |
return e | |
def PopMissingValue(self, column_name, file_name=None): | |
e = self.PopException("MissingValue") | |
self._test_case.assertEquals(column_name, e.column_name) | |
if file_name: | |
self._test_case.assertEquals(file_name, e.file_name) | |
return e | |
def PopDuplicateColumn(self, file_name, header, count): | |
e = self.PopException("DuplicateColumn") | |
self._test_case.assertEquals(file_name, e.file_name) | |
self._test_case.assertEquals(header, e.header) | |
self._test_case.assertEquals(count, e.count) | |
return e | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""This module is a library to help you create, read and write Google | |
Transit Feed files. Refer to the feed specification, available at | |
http://code.google.com/transit/spec/transit_feed_specification.htm, for a | |
complete description of how the transit feed represents a transit schedule. This
library supports all required parts of the specification but does not yet | |
support all optional parts. Patches welcome! | |
Before transitfeed version 1.2.4 all our library code was distributed as a
single-file module, transitfeed.py, and could be used as
import transitfeed | |
schedule = transitfeed.Schedule() | |
At that time the module (one file, transitfeed.py) was converted into a | |
package (a directory named transitfeed containing __init__.py and multiple .py | |
files). Classes and attributes exposed by the old module may still be imported | |
in the same way. Indeed, code that depends on the library *should*
continue to use import commands such as the above and ignore _transitfeed. | |
To import the transitfeed module you should do something like: | |
import transitfeed | |
schedule = transitfeed.Schedule() | |
... | |
The specification describes several tables such as stops, routes and trips. | |
In a feed file these are stored as comma-separated values (CSV) files. This library
represents each row of these tables with a single Python object. This object has | |
attributes for each value on the row. For example, schedule.AddStop returns a | |
Stop object which has attributes such as stop_lat and stop_name. | |
Schedule: Central object of the parser | |
GenericGTFSObject: A base class for each of the objects below | |
Route: Represents a single route | |
Trip: Represents a single trip | |
Stop: Represents a single stop | |
ServicePeriod: Represents a single service, a set of dates | |
Agency: Represents the agency in this feed | |
Transfer: Represents a single transfer rule | |
TimeToSecondsSinceMidnight(): Convert HH:MM:SS into seconds since midnight. | |
FormatSecondsSinceMidnight(s): Formats number of seconds past midnight into a string | |
""" | |
# util needs to be imported before problems because otherwise the loading order | |
# of this module is Agency -> Problems -> Util -> Trip and trip tries to | |
# use problems.default_problem_reporter as a default argument (which fails | |
# because problems.py isn't fully loaded yet). Loading util first solves this as | |
# problems.py gets fully loaded right away. | |
# TODO: Solve this problem cleanly | |
from util import * | |
from agency import * | |
from fareattribute import * | |
from farerule import * | |
from frequency import * | |
from gtfsfactory import * | |
from gtfsfactoryuser import * | |
from gtfsobjectbase import * | |
from loader import * | |
from problems import * | |
from route import * | |
from schedule import * | |
from serviceperiod import * | |
from shape import * | |
from shapelib import * | |
from shapeloader import * | |
from shapepoint import * | |
from stop import * | |
from stoptime import * | |
from transfer import * | |
from trip import * | |
__version__ = '1.2.6' | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
from gtfsobjectbase import GtfsObjectBase | |
from problems import default_problem_reporter | |
import util | |
class Agency(GtfsObjectBase): | |
"""Represents an agency in a schedule. | |
Callers may assign arbitrary values to instance attributes. __init__ makes no | |
attempt at validating the attributes. Call Validate() to check that | |
attributes are valid and the agency object is consistent with itself. | |
Attributes: | |
All attributes are strings. | |
""" | |
_REQUIRED_FIELD_NAMES = ['agency_name', 'agency_url', 'agency_timezone'] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + ['agency_id', 'agency_lang', | |
'agency_phone'] | |
_TABLE_NAME = 'agency' | |
def __init__(self, name=None, url=None, timezone=None, id=None, | |
field_dict=None, lang=None, **kwargs): | |
"""Initialize a new Agency object. | |
Args: | |
field_dict: A dictionary mapping attribute name to unicode string | |
name: a string, ignored when field_dict is present | |
url: a string, ignored when field_dict is present | |
timezone: a string, ignored when field_dict is present | |
id: a string, ignored when field_dict is present | |
kwargs: arbitrary keyword arguments may be used to add attributes to the | |
new object, ignored when field_dict is present | |
""" | |
self._schedule = None | |
if not field_dict: | |
if name: | |
kwargs['agency_name'] = name | |
if url: | |
kwargs['agency_url'] = url | |
if timezone: | |
kwargs['agency_timezone'] = timezone | |
if id: | |
kwargs['agency_id'] = id | |
if lang: | |
kwargs['agency_lang'] = lang | |
field_dict = kwargs | |
self.__dict__.update(field_dict) | |
def ValidateRequiredFieldNames(self, problems): | |
for required in self._REQUIRED_FIELD_NAMES: | |
if util.IsEmpty(getattr(self, required, None)): | |
problems.MissingValue(required) | |
return True | |
return False | |
def ValidateAgencyUrl(self, problems): | |
if self.agency_url and not util.IsValidURL(self.agency_url): | |
problems.InvalidValue('agency_url', self.agency_url) | |
return True | |
return False | |
def ValidateAgencyLang(self, problems): | |
if (not util.IsEmpty(self.agency_lang) and | |
self.agency_lang.lower() not in ISO639.codes_2letter): | |
problems.InvalidValue('agency_lang', self.agency_lang) | |
return True | |
return False | |
def ValidateAgencyTimezone(self, problems): | |
try: | |
import pytz | |
if self.agency_timezone not in pytz.common_timezones: | |
problems.InvalidValue( | |
'agency_timezone', | |
self.agency_timezone, | |
'"%s" is not a common timezone name according to pytz version %s' % | |
(self.agency_timezone, pytz.VERSION)) | |
return True | |
except ImportError: # no pytz | |
print ("Timezone not checked " | |
"(install pytz package for timezone validation)") | |
return False | |
def Validate(self, problems=default_problem_reporter): | |
"""Validate attribute values and this object's internal consistency. | |
Returns: | |
True iff all validation checks passed. | |
""" | |
found_problem = False | |
found_problem = self.ValidateRequiredFieldNames(problems) or found_problem | |
found_problem = self.ValidateAgencyUrl(problems) or found_problem | |
found_problem = self.ValidateAgencyLang(problems) or found_problem | |
found_problem = self.ValidateAgencyTimezone(problems) or found_problem | |
return not found_problem | |
def ValidateBeforeAdd(self, problems): | |
return True | |
def ValidateAfterAdd(self, problems): | |
self.Validate(problems) | |
def AddToSchedule(self, schedule, problems): | |
schedule.AddAgencyObject(self, problems) | |
class ISO639(object): | |
# Set of all the 2-letter ISO 639-1 language codes. | |
codes_2letter = set([ | |
'aa', 'ab', 'ae', 'af', 'ak', 'am', 'an', 'ar', 'as', 'av', 'ay', 'az', | |
'ba', 'be', 'bg', 'bh', 'bi', 'bm', 'bn', 'bo', 'br', 'bs', 'ca', 'ce', | |
'ch', 'co', 'cr', 'cs', 'cu', 'cv', 'cy', 'da', 'de', 'dv', 'dz', 'ee', | |
'el', 'en', 'eo', 'es', 'et', 'eu', 'fa', 'ff', 'fi', 'fj', 'fo', 'fr', | |
'fy', 'ga', 'gd', 'gl', 'gn', 'gu', 'gv', 'ha', 'he', 'hi', 'ho', 'hr', | |
'ht', 'hu', 'hy', 'hz', 'ia', 'id', 'ie', 'ig', 'ii', 'ik', 'io', 'is', | |
'it', 'iu', 'ja', 'jv', 'ka', 'kg', 'ki', 'kj', 'kk', 'kl', 'km', 'kn', | |
'ko', 'kr', 'ks', 'ku', 'kv', 'kw', 'ky', 'la', 'lb', 'lg', 'li', 'ln', | |
'lo', 'lt', 'lu', 'lv', 'mg', 'mh', 'mi', 'mk', 'ml', 'mn', 'mo', 'mr', | |
'ms', 'mt', 'my', 'na', 'nb', 'nd', 'ne', 'ng', 'nl', 'nn', 'no', 'nr', | |
'nv', 'ny', 'oc', 'oj', 'om', 'or', 'os', 'pa', 'pi', 'pl', 'ps', 'pt', | |
'qu', 'rm', 'rn', 'ro', 'ru', 'rw', 'sa', 'sc', 'sd', 'se', 'sg', 'si', | |
'sk', 'sl', 'sm', 'sn', 'so', 'sq', 'sr', 'ss', 'st', 'su', 'sv', 'sw', | |
'ta', 'te', 'tg', 'th', 'ti', 'tk', 'tl', 'tn', 'to', 'tr', 'ts', 'tt', | |
'tw', 'ty', 'ug', 'uk', 'ur', 'uz', 've', 'vi', 'vo', 'wa', 'wo', 'xh', | |
'yi', 'yo', 'za', 'zh', 'zu', | |
]) | |
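The agency_lang check above lower-cases the value and tests membership in ISO639.codes_2letter. A minimal self-contained sketch of the same check (the trimmed code set here is illustrative; the library uses the full table above):

```python
# Sketch of the agency_lang validation: lower-case the value and test
# membership in the set of two-letter ISO 639-1 codes. CODES_2LETTER is a
# trimmed illustrative subset, not the full ISO639.codes_2letter table.
CODES_2LETTER = set(['de', 'en', 'es', 'fr', 'pt'])

def is_valid_agency_lang(value):
    # Empty values are reported separately (MissingValue, not InvalidValue),
    # so treat them as acceptable here.
    if not value:
        return True
    return value.lower() in CODES_2LETTER

assert is_valid_agency_lang('EN')       # matching is case-insensitive
assert not is_valid_agency_lang('eng')  # three-letter codes are rejected
```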
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
from gtfsobjectbase import GtfsObjectBase | |
from problems import default_problem_reporter | |
import util | |
class FareAttribute(GtfsObjectBase): | |
"""Represents a fare type.""" | |
_REQUIRED_FIELD_NAMES = ['fare_id', 'price', 'currency_type', | |
'payment_method', 'transfers'] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + ['transfer_duration'] | |
_TABLE_NAME = "fare_attributes" | |
def __init__(self, | |
fare_id=None, price=None, currency_type=None, | |
payment_method=None, transfers=None, transfer_duration=None, | |
field_dict=None): | |
self._schedule = None | |
(self.fare_id, self.price, self.currency_type, self.payment_method, | |
self.transfers, self.transfer_duration) = \ | |
(fare_id, price, currency_type, payment_method, | |
transfers, transfer_duration) | |
if field_dict: | |
if isinstance(field_dict, FareAttribute): | |
# Special case so that we don't need to re-parse the attributes to
# native types; iteritems returns all attributes that don't start with _.
for k, v in field_dict.iteritems(): | |
self.__dict__[k] = v | |
else: | |
self.__dict__.update(field_dict) | |
self.rules = [] | |
try: | |
self.price = float(self.price) | |
except (TypeError, ValueError): | |
pass | |
try: | |
self.payment_method = int(self.payment_method) | |
except (TypeError, ValueError): | |
pass | |
if self.transfers == None or self.transfers == "": | |
self.transfers = None | |
else: | |
try: | |
self.transfers = int(self.transfers) | |
except (TypeError, ValueError): | |
pass | |
if self.transfer_duration == None or self.transfer_duration == "": | |
self.transfer_duration = None | |
else: | |
try: | |
self.transfer_duration = int(self.transfer_duration) | |
except (TypeError, ValueError): | |
pass | |
def GetFareRuleList(self): | |
return self.rules | |
def ClearFareRules(self): | |
self.rules = [] | |
def GetFieldValuesTuple(self): | |
return [getattr(self, fn) for fn in self._FIELD_NAMES] | |
def __getitem__(self, name): | |
return getattr(self, name) | |
def __eq__(self, other): | |
if not other: | |
return False | |
if id(self) == id(other): | |
return True | |
if self.GetFieldValuesTuple() != other.GetFieldValuesTuple(): | |
return False | |
self_rules = [r.GetFieldValuesTuple() for r in self.GetFareRuleList()] | |
self_rules.sort() | |
other_rules = [r.GetFieldValuesTuple() for r in other.GetFareRuleList()] | |
other_rules.sort() | |
return self_rules == other_rules | |
def __ne__(self, other): | |
return not self.__eq__(other) | |
def ValidateFareId(self, problems): | |
if util.IsEmpty(self.fare_id): | |
problems.MissingValue("fare_id") | |
def ValidatePrice(self, problems): | |
if self.price == None: | |
problems.MissingValue("price") | |
elif not isinstance(self.price, float) and not isinstance(self.price, int): | |
problems.InvalidValue("price", self.price) | |
elif self.price < 0: | |
problems.InvalidValue("price", self.price) | |
def ValidateCurrencyType(self, problems): | |
if util.IsEmpty(self.currency_type): | |
problems.MissingValue("currency_type") | |
elif self.currency_type not in ISO4217.codes: | |
problems.InvalidValue("currency_type", self.currency_type) | |
def ValidatePaymentMethod(self, problems): | |
if self.payment_method == "" or self.payment_method == None: | |
problems.MissingValue("payment_method") | |
elif (not isinstance(self.payment_method, int) or | |
self.payment_method not in range(0, 2)): | |
problems.InvalidValue("payment_method", self.payment_method) | |
def ValidateTransfers(self, problems): | |
if not ((self.transfers == None) or | |
(isinstance(self.transfers, int) and | |
self.transfers in range(0, 3))): | |
problems.InvalidValue("transfers", self.transfers) | |
def ValidateTransferDuration(self, problems): | |
if ((self.transfer_duration != None) and | |
not isinstance(self.transfer_duration, int)): | |
problems.InvalidValue("transfer_duration", self.transfer_duration) | |
if self.transfer_duration and (self.transfer_duration < 0): | |
problems.InvalidValue("transfer_duration", self.transfer_duration) | |
def Validate(self, problems=default_problem_reporter): | |
self.ValidateFareId(problems) | |
self.ValidatePrice(problems) | |
self.ValidateCurrencyType(problems) | |
self.ValidatePaymentMethod(problems) | |
self.ValidateTransfers(problems) | |
self.ValidateTransferDuration(problems) | |
def ValidateBeforeAdd(self, problems): | |
return True | |
def ValidateAfterAdd(self, problems): | |
return | |
def AddToSchedule(self, schedule=None, problems=None): | |
if schedule: | |
schedule.AddFareAttributeObject(self, problems) | |
# TODO: move these into a separate file | |
class ISO4217(object): | |
"""Represents the set of currencies recognized by the ISO-4217 spec.""" | |
codes = { # map of alpha code to numerical code | |
'AED': 784, 'AFN': 971, 'ALL': 8, 'AMD': 51, 'ANG': 532, 'AOA': 973, | |
'ARS': 32, 'AUD': 36, 'AWG': 533, 'AZN': 944, 'BAM': 977, 'BBD': 52, | |
'BDT': 50, 'BGN': 975, 'BHD': 48, 'BIF': 108, 'BMD': 60, 'BND': 96, | |
'BOB': 68, 'BOV': 984, 'BRL': 986, 'BSD': 44, 'BTN': 64, 'BWP': 72, | |
'BYR': 974, 'BZD': 84, 'CAD': 124, 'CDF': 976, 'CHE': 947, 'CHF': 756, | |
'CHW': 948, 'CLF': 990, 'CLP': 152, 'CNY': 156, 'COP': 170, 'COU': 970, | |
'CRC': 188, 'CUP': 192, 'CVE': 132, 'CYP': 196, 'CZK': 203, 'DJF': 262, | |
'DKK': 208, 'DOP': 214, 'DZD': 12, 'EEK': 233, 'EGP': 818, 'ERN': 232, | |
'ETB': 230, 'EUR': 978, 'FJD': 242, 'FKP': 238, 'GBP': 826, 'GEL': 981, | |
'GHC': 288, 'GIP': 292, 'GMD': 270, 'GNF': 324, 'GTQ': 320, 'GYD': 328, | |
'HKD': 344, 'HNL': 340, 'HRK': 191, 'HTG': 332, 'HUF': 348, 'IDR': 360, | |
'ILS': 376, 'INR': 356, 'IQD': 368, 'IRR': 364, 'ISK': 352, 'JMD': 388, | |
'JOD': 400, 'JPY': 392, 'KES': 404, 'KGS': 417, 'KHR': 116, 'KMF': 174, | |
'KPW': 408, 'KRW': 410, 'KWD': 414, 'KYD': 136, 'KZT': 398, 'LAK': 418, | |
'LBP': 422, 'LKR': 144, 'LRD': 430, 'LSL': 426, 'LTL': 440, 'LVL': 428, | |
'LYD': 434, 'MAD': 504, 'MDL': 498, 'MGA': 969, 'MKD': 807, 'MMK': 104, | |
'MNT': 496, 'MOP': 446, 'MRO': 478, 'MTL': 470, 'MUR': 480, 'MVR': 462, | |
'MWK': 454, 'MXN': 484, 'MXV': 979, 'MYR': 458, 'MZN': 943, 'NAD': 516, | |
'NGN': 566, 'NIO': 558, 'NOK': 578, 'NPR': 524, 'NZD': 554, 'OMR': 512, | |
'PAB': 590, 'PEN': 604, 'PGK': 598, 'PHP': 608, 'PKR': 586, 'PLN': 985, | |
'PYG': 600, 'QAR': 634, 'ROL': 642, 'RON': 946, 'RSD': 941, 'RUB': 643, | |
'RWF': 646, 'SAR': 682, 'SBD': 90, 'SCR': 690, 'SDD': 736, 'SDG': 938, | |
'SEK': 752, 'SGD': 702, 'SHP': 654, 'SKK': 703, 'SLL': 694, 'SOS': 706, | |
'SRD': 968, 'STD': 678, 'SYP': 760, 'SZL': 748, 'THB': 764, 'TJS': 972, | |
'TMM': 795, 'TND': 788, 'TOP': 776, 'TRY': 949, 'TTD': 780, 'TWD': 901, | |
'TZS': 834, 'UAH': 980, 'UGX': 800, 'USD': 840, 'USN': 997, 'USS': 998, | |
'UYU': 858, 'UZS': 860, 'VEB': 862, 'VND': 704, 'VUV': 548, 'WST': 882, | |
'XAF': 950, 'XAG': 961, 'XAU': 959, 'XBA': 955, 'XBB': 956, 'XBC': 957, | |
'XBD': 958, 'XCD': 951, 'XDR': 960, 'XFO': None, 'XFU': None, 'XOF': 952, | |
'XPD': 964, 'XPF': 953, 'XPT': 962, 'XTS': 963, 'XXX': 999, 'YER': 886, | |
'ZAR': 710, 'ZMK': 894, 'ZWD': 716, | |
} | |
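FareAttribute.__init__ above coerces CSV strings to native types leniently: on conversion failure it keeps the raw value so the loader does not crash and Validate() can later report an InvalidValue. A self-contained sketch of that pattern (the helper name is illustrative, not part of the library):

```python
# Sketch of the lenient CSV-to-native coercion used in FareAttribute.__init__:
# try to convert, and on failure return the raw value unchanged so that
# validation, not loading, is where the problem gets reported.
def coerce_or_keep(value, converter):
    try:
        return converter(value)
    except (TypeError, ValueError):
        return value

assert coerce_or_keep('1.25', float) == 1.25
assert coerce_or_keep('abc', int) == 'abc'  # raw value kept for Validate()
```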
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
from problems import default_problem_reporter | |
from gtfsobjectbase import GtfsObjectBase | |
class FareRule(GtfsObjectBase): | |
"""This class represents a rule that determines which itineraries a | |
fare applies to."""
_REQUIRED_FIELD_NAMES = ['fare_id'] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + ['route_id', | |
'origin_id', | |
'destination_id', | |
'contains_id'] | |
_TABLE_NAME = "fare_rules" | |
def __init__(self, fare_id=None, route_id=None, | |
origin_id=None, destination_id=None, contains_id=None, | |
field_dict=None): | |
self._schedule = None | |
(self.fare_id, self.route_id, self.origin_id, self.destination_id, | |
self.contains_id) = \ | |
(fare_id, route_id, origin_id, destination_id, contains_id) | |
if field_dict: | |
if isinstance(field_dict, self.GetGtfsFactory().FareRule): | |
# Special case so that we don't need to re-parse the attributes to
# native types; iteritems returns all attributes that don't start with _.
for k, v in field_dict.iteritems(): | |
self.__dict__[k] = v | |
else: | |
self.__dict__.update(field_dict) | |
# canonicalize non-content values as None | |
if not self.route_id: | |
self.route_id = None | |
if not self.origin_id: | |
self.origin_id = None | |
if not self.destination_id: | |
self.destination_id = None | |
if not self.contains_id: | |
self.contains_id = None | |
def GetFieldValuesTuple(self): | |
return [getattr(self, fn) for fn in self._FIELD_NAMES] | |
def __getitem__(self, name): | |
return getattr(self, name) | |
def __eq__(self, other): | |
if not other: | |
return False | |
if id(self) == id(other): | |
return True | |
return self.GetFieldValuesTuple() == other.GetFieldValuesTuple() | |
def __ne__(self, other): | |
return not self.__eq__(other) | |
def AddToSchedule(self, schedule, problems): | |
self._schedule = schedule | |
schedule.AddFareRuleObject(self, problems) | |
def ValidateBeforeAdd(self, problems): | |
return True | |
def ValidateAfterAdd(self, problems): | |
return | |
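FareRule.__init__ canonicalizes the optional selector columns: empty strings from a CSV row become None, so equality and rule matching treat "no value" uniformly. A tiny sketch of that normalization (the helper is illustrative, not part of the library):

```python
# Sketch of FareRule's canonicalization: falsy selector values ('' or None
# as read from a CSV row) are normalized to None so that comparisons and
# rule matching see a single "no value" representation.
def canonicalize(row):
    return dict((k, v or None) for k, v in row.items())

row = {'fare_id': 'p', 'route_id': '', 'origin_id': None, 'contains_id': 'z1'}
assert canonicalize(row) == {'fare_id': 'p', 'route_id': None,
                             'origin_id': None, 'contains_id': 'z1'}
```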
#!/usr/bin/python2.5 | |
# Copyright (C) 2010 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
from gtfsobjectbase import GtfsObjectBase | |
class Frequency(GtfsObjectBase): | |
"""This class represents a period of a trip during which the vehicle travels | |
at regular intervals (rather than specifying exact times for each stop).""" | |
_REQUIRED_FIELD_NAMES = ['trip_id', 'start_time', 'end_time', | |
'headway_secs'] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES | |
_TABLE_NAME = "frequencies" | |
def __init__(self, field_dict=None): | |
self._schedule = None | |
if not field_dict: | |
return | |
self._trip_id = field_dict['trip_id'] | |
self._start_time = field_dict['start_time'] | |
self._end_time = field_dict['end_time'] | |
self._headway_secs = field_dict['headway_secs'] | |
def StartTime(self): | |
return self._start_time | |
def EndTime(self): | |
return self._end_time | |
def TripId(self): | |
return self._trip_id | |
def HeadwaySecs(self): | |
return self._headway_secs | |
def ValidateBeforeAdd(self, problems): | |
return True | |
def ValidateAfterAdd(self, problems): | |
return | |
def Validate(self, problems=None): | |
return | |
def AddToSchedule(self, schedule=None, problems=None): | |
if schedule is None: | |
return | |
self._schedule = schedule | |
try: | |
trip = schedule.GetTrip(self._trip_id) | |
except KeyError: | |
problems.InvalidValue('trip_id', self._trip_id) | |
return | |
trip.AddFrequencyObject(self, problems) | |
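A Frequency row describes headway-based service: the trip repeats every headway_secs seconds between start_time and end_time instead of listing explicit stop times. The implied departures can be enumerated as below (this helper is a sketch for illustration, not part of the library; times are seconds since midnight):

```python
def departures(start_secs, end_secs, headway_secs):
    # Enumerate the implied departure times: one every headway_secs from
    # start_secs up to, but not including, end_secs.
    return list(range(start_secs, end_secs, headway_secs))

# 06:00:00 to 06:30:00 with a 600 s (10 min) headway gives three departures.
assert departures(6 * 3600, 6 * 3600 + 1800, 600) == [21600, 22200, 22800]
```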
#!/usr/bin/python2.5 | |
# Copyright (C) 2010 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
from agency import Agency | |
from fareattribute import FareAttribute | |
from farerule import FareRule | |
from frequency import Frequency | |
import loader | |
import problems | |
from route import Route | |
import schedule | |
from serviceperiod import ServicePeriod | |
from shape import Shape | |
from shapepoint import ShapePoint | |
from stop import Stop | |
from stoptime import StopTime | |
from transfer import Transfer | |
from trip import Trip | |
class GtfsFactory(object): | |
"""A factory for the default GTFS objects""" | |
_REQUIRED_MAPPING_FIELDS = ['classes', 'required', 'loading_order'] | |
def __init__(self): | |
self._class_mapping = { | |
'Agency': Agency, | |
'ServicePeriod': ServicePeriod, | |
'FareAttribute': FareAttribute, | |
'FareRule': FareRule, | |
'Frequency': Frequency, | |
'Shape': Shape, | |
'ShapePoint': ShapePoint, | |
'Stop': Stop, | |
'StopTime': StopTime, | |
'Route': Route, | |
'Transfer': Transfer, | |
'Trip': Trip, | |
} | |
self._file_mapping = { | |
'agency.txt': { 'required': True, 'loading_order': 0, | |
'classes': ['Agency'] }, | |
'calendar.txt': { 'required': False, 'loading_order': None, | |
'classes': ['ServicePeriod']}, | |
'calendar_dates.txt': { 'required': False, 'loading_order': None, | |
'classes': ['ServicePeriod']}, | |
'fare_attributes.txt': { 'required': False, 'loading_order': 50, | |
'classes': ['FareAttribute']}, | |
'fare_rules.txt': { 'required': False, 'loading_order': 60, | |
'classes': ['FareRule']}, | |
'frequencies.txt': { 'required': False, 'loading_order': 70, | |
'classes': ['Frequency']}, | |
'shapes.txt': { 'required': False, 'loading_order': None, | |
'classes': ['Shape', 'ShapePoint']}, | |
'stops.txt': { 'required': True, 'loading_order': 10, | |
'classes': ['Stop']}, | |
'stop_times.txt': { 'required': True, 'loading_order': None, | |
'classes': ['StopTime']}, | |
'routes.txt': { 'required': True, 'loading_order': 20, | |
'classes': ['Route']}, | |
'transfers.txt': { 'required': False, 'loading_order': 30, | |
'classes': ['Transfer']}, | |
'trips.txt': { 'required': True, 'loading_order': 40, | |
'classes': ['Trip']}, | |
} | |
def __getattr__(self, name): | |
if name == 'Schedule': | |
return schedule.Schedule | |
if name == 'Loader': | |
return loader.Loader | |
if name in self._class_mapping: | |
return self._class_mapping[name] | |
raise AttributeError(name) | |
def GetGtfsClassByFileName(self, filename): | |
"""Returns the transitfeed class corresponding to a GTFS file. | |
Args: | |
filename: The filename whose class is to be returned | |
Raises: | |
NonStandardMapping if the specified filename has more than one | |
corresponding class | |
""" | |
if filename not in self._file_mapping: | |
return None | |
mapping = self._file_mapping[filename] | |
class_list = mapping['classes'] | |
if len(class_list) > 1: | |
raise problems.NonStandardMapping(filename) | |
else: | |
return self._class_mapping[class_list[0]] | |
def GetLoadingOrder(self): | |
"""Returns a list of filenames sorted by loading order. | |
Only includes files that Loader's standardized loading knows how to load""" | |
result = {} | |
for filename, mapping in self._file_mapping.iteritems(): | |
loading_order = mapping['loading_order'] | |
if loading_order is not None: | |
result[loading_order] = filename | |
return list(result[key] for key in sorted(result)) | |
def IsFileRequired(self, filename): | |
"""Returns true if a file is required by GTFS, false otherwise. | |
Unknown files are, by definition, not required""" | |
if filename not in self._file_mapping: | |
return False | |
mapping = self._file_mapping[filename] | |
return mapping['required'] | |
def GetKnownFilenames(self): | |
"""Returns a list of all known filenames""" | |
return self._file_mapping.keys() | |
def RemoveMapping(self, filename): | |
"""Removes an entry from the list of known filenames. | |
An entry is identified by its filename. | |
filename: The filename whose mapping is to be updated. | |
""" | |
if filename in self._file_mapping: | |
del self._file_mapping[filename] | |
def AddMapping(self, filename, new_mapping): | |
"""Adds an entry to the list of known filenames. | |
Args: | |
filename: The filename whose mapping is being added. | |
new_mapping: A dictionary with the mapping to add. Must contain all | |
fields in _REQUIRED_MAPPING_FIELDS. | |
Raises: | |
DuplicateMapping if the filename already exists in the mapping | |
InvalidMapping if not all required fields are present | |
""" | |
for field in self._REQUIRED_MAPPING_FIELDS: | |
if field not in new_mapping: | |
raise problems.InvalidMapping(field) | |
if filename in self.GetKnownFilenames(): | |
raise problems.DuplicateMapping(filename) | |
self._file_mapping[filename] = new_mapping | |
def UpdateMapping(self, filename, mapping_update): | |
"""Updates an entry in the list of known filenames. | |
An entry is identified by its filename. | |
Args: | |
filename: The filename whose mapping is to be updated | |
mapping_update: A dictionary containing the fields to update and their | |
new values. | |
Raises: | |
NonexistentMapping if the filename does not exist in the mapping
""" | |
if filename not in self._file_mapping: | |
raise problems.NonexistentMapping(filename) | |
mapping = self._file_mapping[filename] | |
mapping.update(mapping_update) | |
def AddClass(self, class_name, gtfs_class): | |
"""Adds an entry to the list of known classes. | |
Args: | |
class_name: A string with name through which gtfs_class is to be made | |
accessible. | |
gtfs_class: The class to be added. | |
Raises: | |
DuplicateMapping if class_name is already present in the class mapping. | |
""" | |
if class_name in self._class_mapping: | |
raise problems.DuplicateMapping(class_name) | |
self._class_mapping[class_name] = gtfs_class | |
def UpdateClass(self, class_name, gtfs_class): | |
"""Updates an entry in the list of known classes. | |
Args: | |
class_name: A string with the class name that is to be updated. | |
gtfs_class: The new class | |
Raises: | |
NonexistentMapping if there is no class with the specified class_name. | |
""" | |
if class_name not in self._class_mapping: | |
raise problems.NonexistentMapping(class_name) | |
self._class_mapping[class_name] = gtfs_class | |
def RemoveClass(self, class_name): | |
"""Removes an entry from the list of known classes. | |
Args: | |
class_name: A string with the class name that is to be removed. | |
Raises: | |
NonexistentMapping if there is no class with the specified class_name. | |
""" | |
if class_name not in self._class_mapping: | |
raise problems.NonexistentMapping(class_name) | |
del self._class_mapping[class_name] | |
def GetProblemReporter(self): | |
return problems.ProblemReporter() | |
def GetGtfsFactory(): | |
"""Called by FeedValidator to retrieve this extension's GtfsFactory. | |
Extensions will most likely only need to create an instance of | |
transitfeed.GtfsFactory, call {Remove,Add,Update}Mapping as needed, and | |
return that instance""" | |
return GtfsFactory() | |
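GetLoadingOrder above inverts the filename-to-mapping table into loading_order-to-filename and returns the filenames sorted by order, skipping entries whose loading_order is None (those files are loaded by special-case code). A self-contained sketch of that inversion with a trimmed mapping (the table below is illustrative, not the factory's full one):

```python
# Sketch of GetLoadingOrder with a trimmed file mapping: files whose
# loading_order is None are skipped (the Loader handles them specially);
# the rest are returned sorted by their loading_order value.
FILE_MAPPING = {
    'agency.txt': 0,
    'stops.txt': 10,
    'routes.txt': 20,
    'trips.txt': 40,
    'calendar.txt': None,  # no standardized loading order
}

def loading_order(mapping):
    by_order = dict((order, name) for name, order in mapping.items()
                    if order is not None)
    return [by_order[key] for key in sorted(by_order)]

assert loading_order(FILE_MAPPING) == ['agency.txt', 'stops.txt',
                                       'routes.txt', 'trips.txt']
```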
#!/usr/bin/python2.5 | |
# Copyright (C) 2010 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
class GtfsFactoryUser(object): | |
"""Base class for objects that must store a GtfsFactory in order to | |
be able to instantiate Gtfs classes. | |
If a non-default GtfsFactory is to be used, it must be set explicitly.""" | |
_gtfs_factory = None | |
def GetGtfsFactory(self): | |
"""Return the object's GTFS Factory. | |
Returns: | |
The GTFS Factory that was set for this object. If none was explicitly | |
set, it first sets the object's factory to transitfeed's GtfsFactory | |
and returns it""" | |
if self._gtfs_factory is None: | |
#TODO(anog): We really need to create a dependency graph and clean things | |
# up, as the comment in __init__.py says. | |
# Not having GenericGTFSObject as a leaf (with no other | |
# imports) creates all sorts of circular import problems. | |
# This is why the import is here and not at the top level. | |
# When this runs, gtfsfactory should have already been loaded | |
# by other modules, avoiding the circular imports. | |
import gtfsfactory | |
self._gtfs_factory = gtfsfactory.GetGtfsFactory() | |
return self._gtfs_factory | |
def SetGtfsFactory(self, factory): | |
self._gtfs_factory = factory | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
from gtfsfactoryuser import GtfsFactoryUser | |
class GtfsObjectBase(GtfsFactoryUser): | |
"""Object with arbitrary attributes which may be added to a schedule. | |
This class should be used as the base class for GTFS objects which may | |
be stored in a Schedule. It defines some methods for reading and writing | |
attributes. If self._schedule is None then the object is not in a Schedule.
Subclasses must: | |
* define an __init__ method which sets the _schedule member to None or a | |
weakref to a Schedule | |
* Set the _TABLE_NAME class variable to a name such as 'stops', 'agency', ... | |
* define methods to validate objects of that type: | |
* ValidateBeforeAdd, which is called before an object is added to a | |
Schedule. With the default loader the object is added to the Schedule if | |
this function returns True, and is not added if it returns False. | |
* ValidateAfterAdd, which is called after an object is added to a Schedule. | |
With the default Loader the return value, if any, is not used. | |
""" | |
def __getitem__(self, name): | |
"""Return a unicode or str representation of name or "" if not set.""" | |
if name in self.__dict__ and self.__dict__[name] is not None: | |
return "%s" % self.__dict__[name] | |
else: | |
return "" | |
def __getattr__(self, name): | |
"""Return None or the default value if name is a known attribute. | |
This method is only called when name is not found in __dict__. | |
""" | |
if name in self.__class__._FIELD_NAMES: | |
return None | |
else: | |
raise AttributeError(name) | |
def iteritems(self): | |
"""Return a iterable for (name, value) pairs of public attributes.""" | |
for name, value in self.__dict__.iteritems(): | |
if (not name) or name[0] == "_": | |
continue | |
yield name, value | |
def __setattr__(self, name, value): | |
"""Set an attribute, adding name to the list of columns as needed.""" | |
object.__setattr__(self, name, value) | |
if name[0] != '_' and self._schedule: | |
self._schedule.AddTableColumn(self.__class__._TABLE_NAME, name) | |
def __eq__(self, other): | |
"""Return true iff self and other are equivalent""" | |
if not other: | |
return False | |
if id(self) == id(other): | |
return True | |
for k in self.keys().union(other.keys()): | |
# use __getitem__ which returns "" for missing columns values | |
if self[k] != other[k]: | |
return False | |
return True | |
def __ne__(self, other): | |
return not self.__eq__(other) | |
# TODO(Tom): According to | |
# http://docs.python.org/reference/datamodel.html#object.__hash__ | |
# this class should set '__hash__ = None' because it defines __eq__. This | |
# can't be fixed until the merger is changed to not use a/b_merge_map. | |
def __repr__(self): | |
return "<%s %s>" % (self.__class__.__name__, sorted(self.iteritems())) | |
def keys(self): | |
"""Return iterable of columns used by this object.""" | |
columns = set() | |
for name in vars(self): | |
if (not name) or name[0] == "_": | |
continue | |
columns.add(name) | |
return columns | |
def _ColumnNames(self): | |
return self.keys() | |
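The attribute semantics above (known columns default to None, `__getitem__` gives a string form with "" for unset values) can be shown with a self-contained miniature; `MiniGtfsObject` and its field names are illustrative, not part of the library:

```python
class MiniGtfsObject:
    """Sketch of GtfsObjectBase's attribute access, assuming the
    hypothetical table 'stops' with two columns."""
    _TABLE_NAME = 'stops'
    _FIELD_NAMES = ['stop_id', 'stop_name']

    def __init__(self):
        self._schedule = None

    def __getattr__(self, name):
        # Called only when `name` is not in __dict__: known columns
        # default to None, anything else is a genuine error.
        if name in self._FIELD_NAMES:
            return None
        raise AttributeError(name)

    def __getitem__(self, name):
        # String form of an attribute, or "" when unset.
        value = self.__dict__.get(name)
        return "" if value is None else "%s" % value
```

This split lets CSV-writing code use `obj[col]` uniformly while validation code can still distinguish "unset" (None) from an empty string.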
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
import codecs | |
import cStringIO as StringIO | |
import csv | |
import os | |
import re | |
import zipfile | |
import gtfsfactory as gtfsfactory_module | |
import problems | |
import util | |
class Loader: | |
def __init__(self, | |
feed_path=None, | |
schedule=None, | |
problems=problems.default_problem_reporter, | |
extra_validation=False, | |
load_stop_times=True, | |
memory_db=True, | |
zip=None, | |
check_duplicate_trips=False, | |
gtfs_factory=None): | |
"""Initialize a new Loader object. | |
Args: | |
feed_path: string path to a zip file or directory | |
schedule: a Schedule object or None to have one created | |
problems: a ProblemReporter object, the default reporter raises an | |
exception for each problem | |
extra_validation: True if you would like extra validation | |
load_stop_times: load the stop_times table, used to speed load time when | |
times are not needed. The default is True. | |
memory_db: if creating a new Schedule object use an in-memory sqlite | |
database instead of creating one in a temporary file | |
zip: a zipfile.ZipFile object, optionally used instead of path | |
""" | |
if gtfs_factory is None: | |
gtfs_factory = gtfsfactory_module.GetGtfsFactory() | |
if not schedule: | |
schedule = gtfs_factory.Schedule(problem_reporter=problems, | |
memory_db=memory_db, check_duplicate_trips=check_duplicate_trips) | |
self._extra_validation = extra_validation | |
self._schedule = schedule | |
self._problems = problems | |
self._path = feed_path | |
self._zip = zip | |
self._load_stop_times = load_stop_times | |
self._gtfs_factory = gtfs_factory | |
def _DetermineFormat(self): | |
"""Determines whether the feed is in a form that we understand, and | |
if so, returns True.""" | |
if self._zip: | |
# If zip was passed to __init__ then path isn't used | |
assert not self._path | |
return True | |
if not isinstance(self._path, basestring) and hasattr(self._path, 'read'): | |
# A file-like object, used for testing with a StringIO file | |
self._zip = zipfile.ZipFile(self._path, mode='r') | |
return True | |
if not os.path.exists(self._path): | |
self._problems.FeedNotFound(self._path) | |
return False | |
if self._path.endswith('.zip'): | |
try: | |
self._zip = zipfile.ZipFile(self._path, mode='r') | |
except IOError: # self._path is a directory | |
pass | |
except zipfile.BadZipfile: | |
self._problems.UnknownFormat(self._path) | |
return False | |
if not self._zip and not os.path.isdir(self._path): | |
self._problems.UnknownFormat(self._path) | |
return False | |
return True | |
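The decision tree in _DetermineFormat can be condensed into a small predicate (a sketch that only answers yes/no, without the problem-reporting side effects of the real method):

```python
import os
import zipfile

def looks_like_feed(path):
    """Sketch of _DetermineFormat's logic: a feed path is either a
    readable zip archive or a directory of .txt files."""
    if not os.path.exists(path):
        return False
    if path.endswith('.zip'):
        try:
            # Opening the archive verifies it is actually a zip file.
            zipfile.ZipFile(path, mode='r').close()
            return True
        except (IOError, zipfile.BadZipfile):
            return False
    return os.path.isdir(path)
```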
def _GetFileNames(self): | |
"""Returns a list of file names in the feed.""" | |
if self._zip: | |
return self._zip.namelist() | |
else: | |
return os.listdir(self._path) | |
def _CheckFileNames(self): | |
filenames = self._GetFileNames() | |
known_filenames = self._gtfs_factory.GetKnownFilenames() | |
for feed_file in filenames: | |
if feed_file not in known_filenames: | |
if not feed_file.startswith('.'): | |
# Don't worry about .svn files and other hidden files | |
# as this will break the tests. | |
self._problems.UnknownFile(feed_file) | |
def _GetUtf8Contents(self, file_name): | |
"""Check for errors in file_name and return a string for csv reader.""" | |
contents = self._FileContents(file_name) | |
if not contents: # Missing file | |
return | |
# Check for errors that will prevent csv.reader from working | |
if len(contents) >= 2 and contents[0:2] in (codecs.BOM_UTF16_BE, | |
codecs.BOM_UTF16_LE): | |
self._problems.FileFormat("appears to be encoded in utf-16", (file_name, )) | |
# Convert and continue, so we can find more errors | |
contents = codecs.getdecoder('utf-16')(contents)[0].encode('utf-8') | |
null_index = contents.find('\0') | |
if null_index != -1: | |
# It is easier to get some surrounding text than calculate the exact | |
# row_num | |
m = re.search(r'.{,20}\0.{,20}', contents, re.DOTALL) | |
self._problems.FileFormat( | |
"contains a null in text \"%s\" at byte %d" % | |
(codecs.getencoder('string_escape')(m.group())[0], null_index + 1),
(file_name, )) | |
return | |
# Strip out any UTF-8 Byte Order Marker (otherwise it'll be treated as
# part of the first column name, causing a mis-parse). Use startswith
# because lstrip would strip any leading run of the BOM's bytes.
if contents.startswith(codecs.BOM_UTF8):
contents = contents[len(codecs.BOM_UTF8):]
return contents | |
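The encoding fixups above can be illustrated in isolation (Python 3 syntax; the `codecs.BOM_*` constants are from the standard library):

```python
import codecs

def to_utf8(contents: bytes) -> bytes:
    """Sketch of _GetUtf8Contents' fixups: transcode UTF-16 input to
    UTF-8, then strip any leading UTF-8 byte-order mark."""
    if contents[:2] in (codecs.BOM_UTF16_BE, codecs.BOM_UTF16_LE):
        # A UTF-16 BOM means the whole file is UTF-16; re-encode it
        # so the csv reader sees UTF-8 bytes.
        contents = contents.decode('utf-16').encode('utf-8')
    if contents.startswith(codecs.BOM_UTF8):
        # A leftover UTF-8 BOM would otherwise be parsed into the
        # first header name.
        contents = contents[len(codecs.BOM_UTF8):]
    return contents
```

`startswith` plus slicing is used here rather than `lstrip`, since `lstrip` treats its argument as a set of bytes to remove rather than a prefix.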
def _ReadCsvDict(self, file_name, all_cols, required): | |
"""Reads lines from file_name, yielding a dict of unicode values.""" | |
assert file_name.endswith(".txt") | |
table_name = file_name[0:-4] | |
contents = self._GetUtf8Contents(file_name) | |
if not contents: | |
return | |
eol_checker = util.EndOfLineChecker(StringIO.StringIO(contents), | |
file_name, self._problems) | |
# The csv module doesn't provide a way to skip trailing space, but when I | |
# checked 15/675 feeds had trailing space in a header row and 120 had spaces | |
after fields. Space after header fields can cause a serious parsing
problem, so warn. Space after body fields can cause a problem for time,
integer and id fields; they will be validated at higher levels.
reader = csv.reader(eol_checker, skipinitialspace=True) | |
raw_header = reader.next() | |
header_occurrences = util.defaultdict(lambda: 0) | |
header = [] | |
valid_columns = [] # Index into raw_header and raw_row | |
for i, h in enumerate(raw_header): | |
h_stripped = h.strip() | |
if not h_stripped: | |
self._problems.CsvSyntax( | |
description="The header row should not contain any blank values. " | |
"The corresponding column will be skipped for the " | |
"entire file.", | |
context=(file_name, 1, [''] * len(raw_header), raw_header), | |
type=problems.TYPE_ERROR) | |
continue | |
elif h != h_stripped: | |
self._problems.CsvSyntax( | |
description="The header row should not contain any " | |
"space characters.", | |
context=(file_name, 1, [''] * len(raw_header), raw_header), | |
type=problems.TYPE_WARNING) | |
header.append(h_stripped) | |
valid_columns.append(i) | |
header_occurrences[h_stripped] += 1 | |
for name, count in header_occurrences.items(): | |
if count > 1: | |
self._problems.DuplicateColumn( | |
header=name, | |
file_name=file_name, | |
count=count) | |
self._schedule._table_columns[table_name] = header | |
# check for unrecognized columns, which are often misspellings | |
unknown_cols = set(header) - set(all_cols) | |
if len(unknown_cols) == len(header): | |
self._problems.CsvSyntax( | |
description="The header row did not contain any known column " | |
"names. The file is most likely missing the header row " | |
"or not in the expected CSV format.", | |
context=(file_name, 1, [''] * len(raw_header), raw_header), | |
type=problems.TYPE_ERROR) | |
else: | |
for col in unknown_cols: | |
# this is provided in order to create a nice colored list of | |
# columns in the validator output | |
context = (file_name, 1, [''] * len(header), header) | |
self._problems.UnrecognizedColumn(file_name, col, context) | |
missing_cols = set(required) - set(header) | |
for col in missing_cols: | |
# this is provided in order to create a nice colored list of | |
# columns in the validator output | |
context = (file_name, 1, [''] * len(header), header) | |
self._problems.MissingColumn(file_name, col, context) | |
line_num = 1 # First line read by reader.next() above | |
for raw_row in reader: | |
line_num += 1 | |
if len(raw_row) == 0: # skip extra empty lines in file | |
continue | |
if len(raw_row) > len(raw_header): | |
self._problems.OtherProblem('Found too many cells (commas) in line ' | |
'%d of file "%s". Every row in the file ' | |
'should have the same number of cells as ' | |
'the header (first line) does.' % | |
(line_num, file_name), | |
(file_name, line_num), | |
type=problems.TYPE_WARNING) | |
if len(raw_row) < len(raw_header): | |
self._problems.OtherProblem('Found missing cells (commas) in line ' | |
'%d of file "%s". Every row in the file ' | |
'should have the same number of cells as ' | |
'the header (first line) does.' % | |
(line_num, file_name), | |
(file_name, line_num), | |
type=problems.TYPE_WARNING) | |
# raw_row is a list of raw bytes which should be valid utf-8. Convert each | |
# valid_columns of raw_row into Unicode. | |
valid_values = [] | |
unicode_error_columns = [] # index of valid_values elements with an error | |
for i in valid_columns: | |
try: | |
valid_values.append(raw_row[i].decode('utf-8')) | |
except UnicodeDecodeError: | |
# Replace all invalid characters with REPLACEMENT CHARACTER (U+FFFD) | |
valid_values.append(codecs.getdecoder("utf8") | |
(raw_row[i], errors="replace")[0]) | |
unicode_error_columns.append(len(valid_values) - 1) | |
except IndexError: | |
break | |
# The error report may contain a dump of all values in valid_values so | |
# problems can not be reported until after converting all of raw_row to | |
# Unicode. | |
for i in unicode_error_columns: | |
self._problems.InvalidValue(header[i], valid_values[i], | |
'Unicode error', | |
(file_name, line_num, | |
valid_values, header)) | |
d = dict(zip(header, valid_values)) | |
yield (d, line_num, header, valid_values) | |
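The duplicate-header check above, counting occurrences of each stripped column name with a defaultdict, can be sketched on its own (`find_duplicate_columns` is an illustrative name, not a library function):

```python
from collections import defaultdict

def find_duplicate_columns(header):
    """Sketch of _ReadCsvDict's duplicate-column detection: count each
    stripped column name and return those that occur more than once."""
    occurrences = defaultdict(int)
    for name in header:
        occurrences[name.strip()] += 1
    return {name: count
            for name, count in occurrences.items() if count > 1}
```

Stripping before counting means `'stop_id'` and `'stop_id '` are treated as the same column, matching the whitespace handling above.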
# TODO: Add testing for this specific function | |
def _ReadCSV(self, file_name, cols, required): | |
"""Reads lines from file_name, yielding a list of unicode values | |
corresponding to the column names in cols.""" | |
contents = self._GetUtf8Contents(file_name) | |
if not contents: | |
return | |
eol_checker = util.EndOfLineChecker(StringIO.StringIO(contents), | |
file_name, self._problems) | |
reader = csv.reader(eol_checker) # Use excel dialect | |
header = reader.next() | |
header = map(lambda x: x.strip(), header) # trim any whitespace | |
header_occurrences = util.defaultdict(lambda: 0) | |
for column_header in header: | |
header_occurrences[column_header] += 1 | |
for name, count in header_occurrences.items(): | |
if count > 1: | |
self._problems.DuplicateColumn( | |
header=name, | |
file_name=file_name, | |
count=count) | |
# check for unrecognized columns, which are often misspellings | |
unknown_cols = set(header).difference(set(cols)) | |
for col in unknown_cols: | |
# this is provided in order to create a nice colored list of | |
# columns in the validator output | |
context = (file_name, 1, [''] * len(header), header) | |
self._problems.UnrecognizedColumn(file_name, col, context) | |
col_index = [-1] * len(cols) | |
for i in range(len(cols)): | |
if cols[i] in header: | |
col_index[i] = header.index(cols[i]) | |
elif cols[i] in required: | |
self._problems.MissingColumn(file_name, cols[i]) | |
row_num = 1 | |
for row in reader: | |
row_num += 1 | |
if len(row) == 0: # skip extra empty lines in file | |
continue | |
if len(row) > len(header): | |
self._problems.OtherProblem('Found too many cells (commas) in line ' | |
'%d of file "%s". Every row in the file ' | |
'should have the same number of cells as ' | |
'the header (first line) does.' % | |
(row_num, file_name), (file_name, row_num), | |
type=problems.TYPE_WARNING) | |
if len(row) < len(header): | |
self._problems.OtherProblem('Found missing cells (commas) in line ' | |
'%d of file "%s". Every row in the file ' | |
'should have the same number of cells as ' | |
'the header (first line) does.' % | |
(row_num, file_name), (file_name, row_num), | |
type=problems.TYPE_WARNING) | |
result = [None] * len(cols) | |
unicode_error_columns = [] # A list of column numbers with an error | |
for i in range(len(cols)): | |
ci = col_index[i] | |
if ci >= 0: | |
if len(row) <= ci: # handle short CSV rows | |
result[i] = u'' | |
else: | |
try: | |
result[i] = row[ci].decode('utf-8').strip() | |
except UnicodeDecodeError: | |
# Replace all invalid characters with | |
# REPLACEMENT CHARACTER (U+FFFD) | |
result[i] = codecs.getdecoder("utf8")(row[ci], | |
errors="replace")[0].strip() | |
unicode_error_columns.append(i) | |
for i in unicode_error_columns: | |
self._problems.InvalidValue(cols[i], result[i], | |
'Unicode error', | |
(file_name, row_num, result, cols)) | |
yield (result, row_num, cols) | |
def _HasFile(self, file_name): | |
"""Returns True if there's a file in the current feed with the | |
given file_name in the current feed.""" | |
if self._zip: | |
return file_name in self._zip.namelist() | |
else: | |
file_path = os.path.join(self._path, file_name) | |
return os.path.exists(file_path) and os.path.isfile(file_path) | |
def _FileContents(self, file_name): | |
results = None | |
if self._zip: | |
try: | |
results = self._zip.read(file_name) | |
except KeyError: # file not found in archive
self._problems.MissingFile(file_name) | |
return None | |
else: | |
try: | |
data_file = open(os.path.join(self._path, file_name), 'rb') | |
results = data_file.read() | |
except IOError: # file not found | |
self._problems.MissingFile(file_name) | |
return None | |
if not results: | |
self._problems.EmptyFile(file_name) | |
return results | |
def _LoadFeed(self): | |
loading_order = self._gtfs_factory.GetLoadingOrder() | |
for filename in loading_order: | |
if not self._gtfs_factory.IsFileRequired(filename) and \ | |
not self._HasFile(filename): | |
pass # File is not required, and feed does not have it. | |
else: | |
object_class = self._gtfs_factory.GetGtfsClassByFileName(filename) | |
for (d, row_num, header, row) in self._ReadCsvDict( | |
filename, | |
object_class._FIELD_NAMES, | |
object_class._REQUIRED_FIELD_NAMES): | |
self._problems.SetFileContext(filename, row_num, row, header) | |
instance = object_class(field_dict=d) | |
instance.SetGtfsFactory(self._gtfs_factory) | |
if not instance.ValidateBeforeAdd(self._problems): | |
continue | |
instance.AddToSchedule(self._schedule, self._problems) | |
instance.ValidateAfterAdd(self._problems) | |
self._problems.ClearContext() | |
def _LoadCalendar(self): | |
file_name = 'calendar.txt' | |
file_name_dates = 'calendar_dates.txt' | |
if not self._HasFile(file_name) and not self._HasFile(file_name_dates): | |
self._problems.MissingFile(file_name) | |
return | |
# map period IDs to (period object, (file_name, row_num, row, cols)) | |
periods = {} | |
# process calendar.txt | |
if self._HasFile(file_name): | |
has_useful_contents = False | |
for (row, row_num, cols) in \ | |
self._ReadCSV(file_name, | |
self._gtfs_factory.ServicePeriod._FIELD_NAMES, | |
self._gtfs_factory.ServicePeriod._FIELD_NAMES_REQUIRED): | |
context = (file_name, row_num, row, cols) | |
self._problems.SetFileContext(*context) | |
period = self._gtfs_factory.ServicePeriod(field_list=row) | |
if period.service_id in periods: | |
self._problems.DuplicateID('service_id', period.service_id) | |
else: | |
periods[period.service_id] = (period, context) | |
self._problems.ClearContext() | |
# process calendar_dates.txt | |
if self._HasFile(file_name_dates): | |
# ['service_id', 'date', 'exception_type'] | |
fields = self._gtfs_factory.ServicePeriod._FIELD_NAMES_CALENDAR_DATES | |
for (row, row_num, cols) in self._ReadCSV(file_name_dates, | |
fields, fields): | |
context = (file_name_dates, row_num, row, cols) | |
self._problems.SetFileContext(*context) | |
service_id = row[0] | |
period = None | |
if service_id in periods: | |
period = periods[service_id][0] | |
else: | |
period = self._gtfs_factory.ServicePeriod(service_id) | |
periods[period.service_id] = (period, context) | |
exception_type = row[2] | |
if exception_type == u'1': | |
period.SetDateHasService(row[1], True, self._problems) | |
elif exception_type == u'2': | |
period.SetDateHasService(row[1], False, self._problems) | |
else: | |
self._problems.InvalidValue('exception_type', exception_type) | |
self._problems.ClearContext() | |
# Now insert the periods into the schedule object, so that they're | |
# validated with both calendar and calendar_dates info present | |
for period, context in periods.values(): | |
self._problems.SetFileContext(*context) | |
self._schedule.AddServicePeriodObject(period, self._problems) | |
self._problems.ClearContext() | |
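The calendar_dates.txt pass above applies per-date exceptions on top of the periods from calendar.txt: exception_type '1' adds service on a date, '2' removes it, and an unknown service_id creates a fresh period. A simplified model, with each period reduced to a set of service dates (an illustrative sketch, not the ServicePeriod API):

```python
def apply_calendar_dates(periods, rows):
    """Sketch of _LoadCalendar's exception handling.  `periods` maps
    service_id -> set of service dates; each row is
    (service_id, date, exception_type)."""
    for service_id, date, exception_type in rows:
        # Unknown service ids get a fresh, empty period.
        dates = periods.setdefault(service_id, set())
        if exception_type == '1':
            dates.add(date)       # service added on this date
        elif exception_type == '2':
            dates.discard(date)   # service removed on this date
        else:
            raise ValueError('bad exception_type: %r' % exception_type)
    return periods
```

As in the loader, periods are only merged after both files are read, so validation sees the combined calendar and calendar_dates information.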
def _LoadShapes(self): | |
file_name = 'shapes.txt' | |
if not self._HasFile(file_name): | |
return | |
shapes = {} # shape_id to shape object | |
for (d, row_num, header, row) in self._ReadCsvDict( | |
file_name, | |
self._gtfs_factory.Shape._FIELD_NAMES, | |
self._gtfs_factory.Shape._REQUIRED_FIELD_NAMES): | |
file_context = (file_name, row_num, row, header) | |
self._problems.SetFileContext(*file_context) | |
shapepoint = self._gtfs_factory.ShapePoint(field_dict=d) | |
if not shapepoint.ParseAttributes(self._problems): | |
continue | |
if shapepoint.shape_id in shapes: | |
shape = shapes[shapepoint.shape_id] | |
else: | |
shape = self._gtfs_factory.Shape(shapepoint.shape_id) | |
shape.SetGtfsFactory(self._gtfs_factory) | |
shapes[shapepoint.shape_id] = shape | |
shape.AddShapePointObjectUnsorted(shapepoint, self._problems) | |
self._problems.ClearContext() | |
for shape_id, shape in shapes.items(): | |
self._schedule.AddShapeObject(shape, self._problems) | |
del shapes[shape_id] | |
def _LoadStopTimes(self): | |
for (row, row_num, cols) in self._ReadCSV('stop_times.txt', | |
self._gtfs_factory.StopTime._FIELD_NAMES, | |
self._gtfs_factory.StopTime._REQUIRED_FIELD_NAMES): | |
file_context = ('stop_times.txt', row_num, row, cols) | |
self._problems.SetFileContext(*file_context) | |
(trip_id, arrival_time, departure_time, stop_id, stop_sequence, | |
stop_headsign, pickup_type, drop_off_type, shape_dist_traveled) = row | |
try: | |
sequence = int(stop_sequence) | |
except (TypeError, ValueError): | |
self._problems.InvalidValue('stop_sequence', stop_sequence, | |
'This should be a number.') | |
continue | |
if sequence < 0: | |
self._problems.InvalidValue('stop_sequence', sequence, | |
'Sequence numbers should be 0 or higher.') | |
if stop_id not in self._schedule.stops: | |
self._problems.InvalidValue('stop_id', stop_id, | |
'This value wasn\'t defined in stops.txt') | |
continue | |
stop = self._schedule.stops[stop_id] | |
if trip_id not in self._schedule.trips: | |
self._problems.InvalidValue('trip_id', trip_id, | |
'This value wasn\'t defined in trips.txt') | |
continue | |
trip = self._schedule.trips[trip_id] | |
# If self._problems.Report returns then StopTime.__init__ will return | |
# even if the StopTime object has an error. Thus this code may add a | |
# StopTime that didn't validate to the database. | |
# Trip.GetStopTimes then tries to make a StopTime from the invalid data | |
# and calls the problem reporter for errors. An ugly solution is to | |
# wrap problems and a better solution is to move all validation out of | |
# __init__. For now make sure Trip.GetStopTimes gets a problem reporter | |
# when called from Trip.Validate. | |
stop_time = self._gtfs_factory.StopTime(self._problems, stop, | |
arrival_time, departure_time, stop_headsign, pickup_type, | |
drop_off_type, shape_dist_traveled, stop_sequence=sequence) | |
trip._AddStopTimeObjectUnordered(stop_time, self._schedule) | |
self._problems.ClearContext() | |
# stop_times are validated in Trip.ValidateChildren, called by | |
# Schedule.Validate | |
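The stop_sequence checks above (must parse as an int, should be non-negative, but only the parse failure skips the row) can be isolated into a small helper; `parse_stop_sequence` and the `report` callback are illustrative, not library API:

```python
def parse_stop_sequence(value, report):
    """Sketch of the stop_sequence validation in _LoadStopTimes.
    `report(column, value, reason)` records a problem."""
    try:
        sequence = int(value)
    except (TypeError, ValueError):
        report('stop_sequence', value, 'This should be a number.')
        return None  # unusable value: caller skips the row
    if sequence < 0:
        # Reported but still returned, mirroring the loader: a negative
        # sequence is a problem yet the row is still processed.
        report('stop_sequence', sequence,
               'Sequence numbers should be 0 or higher.')
    return sequence
```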
def Load(self): | |
self._problems.ClearContext() | |
if not self._DetermineFormat(): | |
return self._schedule | |
self._CheckFileNames() | |
self._LoadCalendar() | |
self._LoadShapes() | |
self._LoadFeed() | |
if self._load_stop_times: | |
self._LoadStopTimes() | |
if self._zip: | |
self._zip.close() | |
self._zip = None | |
if self._extra_validation: | |
self._schedule.Validate(self._problems, validate_children=False) | |
return self._schedule | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
import logging | |
import time | |
import util | |
# These are used to distinguish between errors (not allowed by the spec) | |
# and warnings (not recommended) when reporting issues. | |
TYPE_ERROR = 0 | |
TYPE_WARNING = 1 | |
MAX_DISTANCE_FROM_STOP_TO_SHAPE = 1000 | |
MAX_DISTANCE_BETWEEN_STOP_AND_PARENT_STATION_WARNING = 100.0 | |
MAX_DISTANCE_BETWEEN_STOP_AND_PARENT_STATION_ERROR = 1000.0 | |
class ProblemReporter(object): | |
"""Base class for problem reporters. Tracks the current context and creates | |
an exception object for each problem. Exception objects are sent to a | |
Problem Accumulator, which is responsible for handling them.""" | |
def __init__(self, accumulator=None): | |
self.ClearContext() | |
if accumulator is None: | |
self.accumulator = SimpleProblemAccumulator() | |
else: | |
self.accumulator = accumulator | |
def SetAccumulator(self, accumulator): | |
self.accumulator = accumulator | |
def GetAccumulator(self): | |
return self.accumulator | |
def ClearContext(self): | |
"""Clear any previous context.""" | |
self._context = None | |
def SetFileContext(self, file_name, row_num, row, headers): | |
"""Save the current context to be output with any errors. | |
Args: | |
file_name: string | |
row_num: int | |
row: list of strings | |
headers: list of column headers, its order corresponding to row's | |
""" | |
self._context = (file_name, row_num, row, headers) | |
def AddToAccumulator(self, e):
"""Report an exception to the problem accumulator."""
self.accumulator._Report(e) | |
def FeedNotFound(self, feed_name, context=None): | |
e = FeedNotFound(feed_name=feed_name, context=context, | |
context2=self._context) | |
self.AddToAccumulator(e) | |
def UnknownFormat(self, feed_name, context=None): | |
e = UnknownFormat(feed_name=feed_name, context=context, | |
context2=self._context) | |
self.AddToAccumulator(e) | |
def FileFormat(self, problem, context=None): | |
e = FileFormat(problem=problem, context=context, | |
context2=self._context) | |
self.AddToAccumulator(e) | |
def MissingFile(self, file_name, context=None): | |
e = MissingFile(file_name=file_name, context=context, | |
context2=self._context) | |
self.AddToAccumulator(e) | |
def UnknownFile(self, file_name, context=None): | |
e = UnknownFile(file_name=file_name, context=context, | |
context2=self._context, type=TYPE_WARNING) | |
self.AddToAccumulator(e) | |
def EmptyFile(self, file_name, context=None): | |
e = EmptyFile(file_name=file_name, context=context, | |
context2=self._context) | |
self.AddToAccumulator(e) | |
def MissingColumn(self, file_name, column_name, context=None): | |
e = MissingColumn(file_name=file_name, column_name=column_name, | |
context=context, context2=self._context) | |
self.AddToAccumulator(e) | |
def UnrecognizedColumn(self, file_name, column_name, context=None): | |
e = UnrecognizedColumn(file_name=file_name, column_name=column_name, | |
context=context, context2=self._context, | |
type=TYPE_WARNING) | |
self.AddToAccumulator(e) | |
def CsvSyntax(self, description=None, context=None, type=TYPE_ERROR): | |
e = CsvSyntax(description=description, context=context, | |
context2=self._context, type=type) | |
self.AddToAccumulator(e) | |
def DuplicateColumn(self, file_name, header, count, type=TYPE_ERROR, | |
context=None): | |
e = DuplicateColumn(file_name=file_name, | |
header=header, | |
count=count, | |
type=type, | |
context=context, | |
context2=self._context) | |
self.AddToAccumulator(e) | |
def MissingValue(self, column_name, reason=None, context=None): | |
e = MissingValue(column_name=column_name, reason=reason, context=context, | |
context2=self._context) | |
self.AddToAccumulator(e) | |
def InvalidValue(self, column_name, value, reason=None, context=None, | |
type=TYPE_ERROR): | |
e = InvalidValue(column_name=column_name, value=value, reason=reason, | |
context=context, context2=self._context, type=type) | |
self.AddToAccumulator(e) | |
def InvalidFloatValue(self, value, reason=None, context=None, | |
type=TYPE_WARNING): | |
e = InvalidFloatValue(value=value, reason=reason, context=context, | |
context2=self._context, type=type) | |
self.AddToAccumulator(e) | |
def InvalidNonNegativeIntegerValue(self, value, reason=None, context=None, | |
type=TYPE_WARNING): | |
e = InvalidNonNegativeIntegerValue(value=value, reason=reason, | |
context=context, context2=self._context, | |
type=type) | |
self.AddToAccumulator(e) | |
def DuplicateID(self, column_names, values, context=None, type=TYPE_ERROR): | |
if isinstance(column_names, (tuple, list)): | |
column_names = '(' + ', '.join(column_names) + ')' | |
if isinstance(values, tuple): | |
values = '(' + ', '.join(values) + ')' | |
e = DuplicateID(column_name=column_names, value=values, | |
context=context, context2=self._context, type=type) | |
self.AddToAccumulator(e) | |
def UnusedStop(self, stop_id, stop_name, context=None): | |
e = UnusedStop(stop_id=stop_id, stop_name=stop_name, | |
context=context, context2=self._context, type=TYPE_WARNING) | |
self.AddToAccumulator(e) | |
def UsedStation(self, stop_id, stop_name, context=None): | |
e = UsedStation(stop_id=stop_id, stop_name=stop_name, | |
context=context, context2=self._context, type=TYPE_ERROR) | |
self.AddToAccumulator(e) | |
def StopTooFarFromParentStation(self, stop_id, stop_name, parent_stop_id, | |
parent_stop_name, distance, | |
type=TYPE_WARNING, context=None): | |
e = StopTooFarFromParentStation( | |
stop_id=stop_id, stop_name=stop_name, | |
parent_stop_id=parent_stop_id, | |
parent_stop_name=parent_stop_name, distance=distance, | |
context=context, context2=self._context, type=type) | |
self.AddToAccumulator(e) | |
def StopsTooClose(self, stop_name_a, stop_id_a, stop_name_b, stop_id_b, | |
distance, type=TYPE_WARNING, context=None): | |
e = StopsTooClose( | |
stop_name_a=stop_name_a, stop_id_a=stop_id_a, stop_name_b=stop_name_b, | |
stop_id_b=stop_id_b, distance=distance, context=context, | |
context2=self._context, type=type) | |
self.AddToAccumulator(e) | |
def StationsTooClose(self, stop_name_a, stop_id_a, stop_name_b, stop_id_b, | |
distance, type=TYPE_WARNING, context=None): | |
e = StationsTooClose( | |
stop_name_a=stop_name_a, stop_id_a=stop_id_a, stop_name_b=stop_name_b, | |
stop_id_b=stop_id_b, distance=distance, context=context, | |
context2=self._context, type=type) | |
self.AddToAccumulator(e) | |
def DifferentStationTooClose(self, stop_name, stop_id, | |
station_stop_name, station_stop_id, | |
distance, type=TYPE_WARNING, context=None): | |
e = DifferentStationTooClose( | |
stop_name=stop_name, stop_id=stop_id, | |
station_stop_name=station_stop_name, station_stop_id=station_stop_id, | |
distance=distance, context=context, context2=self._context, type=type) | |
self.AddToAccumulator(e) | |
def StopTooFarFromShapeWithDistTraveled(self, trip_id, stop_name, stop_id, | |
shape_dist_traveled, shape_id, | |
distance, max_distance, | |
type=TYPE_WARNING): | |
e = StopTooFarFromShapeWithDistTraveled( | |
trip_id=trip_id, stop_name=stop_name, stop_id=stop_id, | |
shape_dist_traveled=shape_dist_traveled, shape_id=shape_id, | |
distance=distance, max_distance=max_distance, type=type) | |
self.AddToAccumulator(e) | |
def ExpirationDate(self, expiration, context=None): | |
e = ExpirationDate(expiration=expiration, context=context, | |
context2=self._context, type=TYPE_WARNING) | |
self.AddToAccumulator(e) | |
def FutureService(self, start_date, context=None): | |
e = FutureService(start_date=start_date, context=context, | |
context2=self._context, type=TYPE_WARNING) | |
self.AddToAccumulator(e) | |
def NoServiceExceptions(self, start, end, type=TYPE_WARNING, context=None):
e = NoServiceExceptions(start=start, end=end, context=context,
context2=self._context, type=type)
self.AddToAccumulator(e)
def InvalidLineEnd(self, bad_line_end, context=None): | |
"""bad_line_end is a human readable string.""" | |
e = InvalidLineEnd(bad_line_end=bad_line_end, context=context, | |
context2=self._context, type=TYPE_WARNING) | |
self.AddToAccumulator(e) | |
def TooFastTravel(self, trip_id, prev_stop, next_stop, dist, time, speed, | |
type=TYPE_ERROR): | |
e = TooFastTravel(trip_id=trip_id, prev_stop=prev_stop, | |
next_stop=next_stop, time=time, dist=dist, speed=speed, | |
context=None, context2=self._context, type=type) | |
self.AddToAccumulator(e) | |
def StopWithMultipleRouteTypes(self, stop_name, stop_id, route_id1, route_id2, | |
context=None): | |
e = StopWithMultipleRouteTypes(stop_name=stop_name, stop_id=stop_id, | |
route_id1=route_id1, route_id2=route_id2, | |
context=context, context2=self._context, | |
type=TYPE_WARNING) | |
self.AddToAccumulator(e) | |
def DuplicateTrip(self, trip_id1, route_id1, trip_id2, route_id2, | |
context=None): | |
e = DuplicateTrip(trip_id1=trip_id1, route_id1=route_id1, trip_id2=trip_id2, | |
route_id2=route_id2, context=context, | |
context2=self._context, type=TYPE_WARNING) | |
self.AddToAccumulator(e) | |
def OverlappingTripsInSameBlock(self, trip_id1, trip_id2, block_id, | |
context=None): | |
e = OverlappingTripsInSameBlock(trip_id1=trip_id1, trip_id2=trip_id2, | |
block_id=block_id, context=context, | |
context2=self._context, type=TYPE_WARNING) | |
self.AddToAccumulator(e) | |
def TransferDistanceTooBig(self, from_stop_id, to_stop_id, distance, | |
context=None, type=TYPE_ERROR): | |
e = TransferDistanceTooBig(from_stop_id=from_stop_id, to_stop_id=to_stop_id, | |
distance=distance, context=context, | |
context2=self._context, type=type) | |
self.AddToAccumulator(e) | |
def TransferWalkingSpeedTooFast(self, from_stop_id, to_stop_id, distance, | |
transfer_time, context=None, | |
type=TYPE_WARNING): | |
e = TransferWalkingSpeedTooFast(from_stop_id=from_stop_id, | |
transfer_time=transfer_time, | |
distance=distance, | |
to_stop_id=to_stop_id, context=context, | |
context2=self._context, type=type) | |
self.AddToAccumulator(e) | |
def OtherProblem(self, description, context=None, type=TYPE_ERROR): | |
e = OtherProblem(description=description, | |
context=context, context2=self._context, type=type) | |
self.AddToAccumulator(e) | |
def TooManyDaysWithoutService(self, | |
first_day_without_service, | |
last_day_without_service, | |
consecutive_days_without_service, | |
context=None, | |
type=TYPE_WARNING): | |
e = TooManyDaysWithoutService( | |
first_day_without_service=first_day_without_service, | |
last_day_without_service=last_day_without_service, | |
consecutive_days_without_service=consecutive_days_without_service, | |
context=context, | |
context2=self._context, | |
type=type) | |
self.AddToAccumulator(e) | |
def MinimumTransferTimeSetWithInvalidTransferType(self, | |
transfer_type=None, | |
context=None, | |
type=TYPE_ERROR): | |
e = MinimumTransferTimeSetWithInvalidTransferType(context=context, | |
context2=self._context, transfer_type=transfer_type, type=type) | |
self.AddToAccumulator(e) | |
class ProblemAccumulatorInterface(object): | |
"""The base class for Problem Accumulators, which defines their interface.""" | |
def _Report(self, e): | |
raise NotImplementedError("Please use a concrete Problem Accumulator that " | |
"implements error and warning handling.") | |
class SimpleProblemAccumulator(ProblemAccumulatorInterface): | |
"""This is a basic problem accumulator that just prints to console.""" | |
def _Report(self, e): | |
context = e.FormatContext() | |
if context: | |
print context | |
print util.EncodeUnicode(self._LineWrap(e.FormatProblem(), 78)) | |
@staticmethod | |
def _LineWrap(text, width): | |
""" | |
A word-wrap function that preserves existing line breaks | |
and most spaces in the text. Expects that existing line | |
breaks are posix newlines (\n). | |
Taken from: | |
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/148061 | |
""" | |
return reduce(lambda line, word, width=width: '%s%s%s' % | |
(line, | |
' \n'[(len(line) - line.rfind('\n') - 1 + | |
len(word.split('\n', 1)[0]) >= width)], | |
word), | |
text.split(' ') | |
) | |
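The reduce expression above is dense; the following is an equivalent standalone sketch written as a plain loop (same wrapping rule, illustrative names), which may be easier to follow:

```python
def line_wrap(text, width):
    """Loop form of the reduce-based word wrap above: preserve existing
    newlines and insert one where the next word would pass the width."""
    words = text.split(' ')
    line = words[0]
    for word in words[1:]:
        # Column position: characters since the last newline in `line`.
        col = len(line) - line.rfind('\n') - 1
        # Wrap if the first segment of the next word would pass the width.
        if col + len(word.split('\n', 1)[0]) >= width:
            line += '\n' + word
        else:
            line += ' ' + word
    return line
```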
class ExceptionWithContext(Exception): | |
def __init__(self, context=None, context2=None, **kwargs): | |
"""Initialize an exception object, saving all keyword arguments in self. | |
context and context2, if present, must each be a tuple of (file_name, row_num, | |
row, headers). context2 comes from ProblemReporter.SetFileContext. context | |
was passed in with the keyword arguments. context2 is ignored if context | |
is present.""" | |
Exception.__init__(self) | |
if context: | |
self.__dict__.update(self.ContextTupleToDict(context)) | |
elif context2: | |
self.__dict__.update(self.ContextTupleToDict(context2)) | |
self.__dict__.update(kwargs) | |
if ('type' in kwargs) and (kwargs['type'] == TYPE_WARNING): | |
self._type = TYPE_WARNING | |
else: | |
self._type = TYPE_ERROR | |
def GetType(self): | |
return self._type | |
def IsError(self): | |
return self._type == TYPE_ERROR | |
def IsWarning(self): | |
return self._type == TYPE_WARNING | |
CONTEXT_PARTS = ['file_name', 'row_num', 'row', 'headers'] | |
@staticmethod | |
def ContextTupleToDict(context): | |
"""Convert a tuple representing a context into a dict of (key, value) pairs""" | |
d = {} | |
if not context: | |
return d | |
for k, v in zip(ExceptionWithContext.CONTEXT_PARTS, context): | |
if v != '' and v is not None: # Don't ignore int(0), a valid row_num | |
d[k] = v | |
return d | |
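The conversion performed by ContextTupleToDict can be sketched standalone (illustrative names, not the library API), including the int(0) row_num edge case the comment calls out:

```python
CONTEXT_PARTS = ['file_name', 'row_num', 'row', 'headers']

def context_tuple_to_dict(context):
    """Zip a (file_name, row_num, row, headers) tuple into a dict,
    dropping empty values but keeping a row_num of 0, which is valid."""
    d = {}
    if not context:
        return d
    for k, v in zip(CONTEXT_PARTS, context):
        if v != '' and v is not None:
            d[k] = v
    return d
```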
def __str__(self): | |
return self.FormatProblem() | |
def GetDictToFormat(self): | |
"""Return a copy of self as a dict, suitable for passing to FormatProblem""" | |
d = {} | |
for k, v in self.__dict__.items(): | |
# TODO: Better handling of unicode/utf-8 within Schedule objects. | |
# Concatenating a unicode and utf-8 str object causes an exception such | |
# as "UnicodeDecodeError: 'ascii' codec can't decode byte ..." as python | |
# tries to convert the str to a unicode. To avoid that happening within | |
# the problem reporter convert all unicode attributes to utf-8. | |
# Currently valid utf-8 fields are converted to unicode in _ReadCsvDict. | |
# Perhaps all fields should be left as utf-8. | |
d[k] = util.EncodeUnicode(v) | |
return d | |
def FormatProblem(self, d=None): | |
"""Return a text string describing the problem. | |
Args: | |
d: map returned by GetDictToFormat with formatting added | |
""" | |
if not d: | |
d = self.GetDictToFormat() | |
output_error_text = self.__class__.ERROR_TEXT % d | |
if ('reason' in d) and d['reason']: | |
return '%s\n%s' % (output_error_text, d['reason']) | |
else: | |
return output_error_text | |
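The ERROR_TEXT convention used throughout the subclasses below can be sketched standalone: each class supplies a %-style template that is filled from the instance's attribute dict (class and method names here are illustrative, not the library API):

```python
class ProblemSketch(object):
    # %-style template, filled from the instance's attribute dict.
    ERROR_TEXT = 'Invalid value %(value)s in field %(column_name)s'

    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    def format_problem(self):
        text = self.ERROR_TEXT % self.__dict__
        # An optional 'reason' attribute is appended on its own line.
        reason = getattr(self, 'reason', None)
        return '%s\n%s' % (text, reason) if reason else text

p = ProblemSketch(value='abc', column_name='stop_lat')
```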
def FormatContext(self): | |
"""Return a text string describing the context""" | |
text = '' | |
if hasattr(self, 'feed_name'): | |
text += "In feed '%s': " % self.feed_name | |
if hasattr(self, 'file_name'): | |
text += self.file_name | |
if hasattr(self, 'row_num'): | |
text += ":%i" % self.row_num | |
if hasattr(self, 'column_name'): | |
text += " column %s" % self.column_name | |
return text | |
def __cmp__(self, y): | |
"""Return an int <0/0/>0 when self is more/same/less significant than y. | |
Subclasses should define this if exceptions should be listed in something | |
other than the order they are reported. | |
Args: | |
y: object to compare to self | |
Returns: | |
An int which is negative if self is more significant than y, 0 if they | |
are similar significance and positive if self is less significant than | |
y. Returning a float won't work. | |
Raises: | |
TypeError by default, meaning objects of the type can not be compared. | |
""" | |
raise TypeError("__cmp__ not defined") | |
class MissingFile(ExceptionWithContext): | |
ERROR_TEXT = "File %(file_name)s is not found" | |
class EmptyFile(ExceptionWithContext): | |
ERROR_TEXT = "File %(file_name)s is empty" | |
class UnknownFile(ExceptionWithContext): | |
ERROR_TEXT = 'The file named %(file_name)s was not expected.\n' \ | |
'This may be a misspelled file name or the file may be ' \ | |
'included in a subdirectory. Please check spellings and ' \ | |
'make sure that there are no subdirectories within the feed.' | |
class FeedNotFound(ExceptionWithContext): | |
ERROR_TEXT = 'Couldn\'t find a feed named %(feed_name)s' | |
class UnknownFormat(ExceptionWithContext): | |
ERROR_TEXT = 'The feed named %(feed_name)s had an unknown format:\n' \ | |
'feeds should be either .zip files or directories.' | |
class FileFormat(ExceptionWithContext): | |
ERROR_TEXT = 'Files must be encoded in utf-8 and may not contain ' \ | |
'any null bytes (0x00). %(file_name)s %(problem)s.' | |
class MissingColumn(ExceptionWithContext): | |
ERROR_TEXT = 'Missing column %(column_name)s in file %(file_name)s' | |
class UnrecognizedColumn(ExceptionWithContext): | |
ERROR_TEXT = 'Unrecognized column %(column_name)s in file %(file_name)s. ' \ | |
'This might be a misspelled column name (capitalization ' \ | |
'matters!). Or it could be extra information (such as a ' \ | |
'proposed feed extension) that the validator doesn\'t know ' \ | |
'about yet. Extra information is fine; this warning is here ' \ | |
'to catch misspelled optional column names.' | |
class CsvSyntax(ExceptionWithContext): | |
ERROR_TEXT = '%(description)s' | |
class DuplicateColumn(ExceptionWithContext): | |
ERROR_TEXT = 'Column %(header)s appears %(count)i times in file %(file_name)s' | |
class MissingValue(ExceptionWithContext): | |
ERROR_TEXT = 'Missing value for column %(column_name)s' | |
class InvalidValue(ExceptionWithContext): | |
ERROR_TEXT = 'Invalid value %(value)s in field %(column_name)s' | |
class InvalidFloatValue(ExceptionWithContext): | |
ERROR_TEXT = ( | |
"Invalid numeric value %(value)s. " | |
"Please ensure that the number includes an explicit whole " | |
"number portion (ie. use 0.5 instead of .5), that you do not use the " | |
"exponential notation (ie. use 0.001 instead of 1E-3), and " | |
"that it is a properly formated decimal value.") | |
class InvalidNonNegativeIntegerValue(ExceptionWithContext): | |
ERROR_TEXT = ( | |
"Invalid numeric value %(value)s. " | |
"Please ensure that the number does not have a leading zero (ie. use " | |
"3 instead of 03), and that it is a properly formated integer value.") | |
class DuplicateID(ExceptionWithContext): | |
ERROR_TEXT = 'Duplicate ID %(value)s in column %(column_name)s' | |
class UnusedStop(ExceptionWithContext): | |
ERROR_TEXT = "%(stop_name)s (ID %(stop_id)s) isn't used in any trips" | |
class UsedStation(ExceptionWithContext): | |
ERROR_TEXT = "%(stop_name)s (ID %(stop_id)s) has location_type=1 " \ | |
"(station) so it should not appear in stop_times" | |
class StopTooFarFromParentStation(ExceptionWithContext): | |
ERROR_TEXT = ( | |
"%(stop_name)s (ID %(stop_id)s) is too far from its parent station " | |
"%(parent_stop_name)s (ID %(parent_stop_id)s) : %(distance).2f meters.") | |
def __cmp__(self, y): | |
# Sort in decreasing order because more distance is more significant. | |
return cmp(y.distance, self.distance) | |
class StopsTooClose(ExceptionWithContext): | |
ERROR_TEXT = ( | |
"The stops \"%(stop_name_a)s\" (ID %(stop_id_a)s) and \"%(stop_name_b)s\"" | |
" (ID %(stop_id_b)s) are %(distance)0.2fm apart and probably represent " | |
"the same location.") | |
def __cmp__(self, y): | |
# Sort in increasing order because less distance is more significant. | |
return cmp(self.distance, y.distance) | |
class StationsTooClose(ExceptionWithContext): | |
ERROR_TEXT = ( | |
"The stations \"%(stop_name_a)s\" (ID %(stop_id_a)s) and " | |
"\"%(stop_name_b)s\" (ID %(stop_id_b)s) are %(distance)0.2fm apart and " | |
"probably represent the same location.") | |
def __cmp__(self, y): | |
# Sort in increasing order because less distance is more significant. | |
return cmp(self.distance, y.distance) | |
class DifferentStationTooClose(ExceptionWithContext): | |
ERROR_TEXT = ( | |
"The parent_station of stop \"%(stop_name)s\" (ID %(stop_id)s) is not " | |
"station \"%(station_stop_name)s\" (ID %(station_stop_id)s) but they are " | |
"only %(distance)0.2fm apart.") | |
def __cmp__(self, y): | |
# Sort in increasing order because less distance is more significant. | |
return cmp(self.distance, y.distance) | |
class StopTooFarFromShapeWithDistTraveled(ExceptionWithContext): | |
ERROR_TEXT = ( | |
"For trip %(trip_id)s the stop \"%(stop_name)s\" (ID %(stop_id)s) is " | |
"%(distance).0f meters away from the corresponding point " | |
"(shape_dist_traveled: %(shape_dist_traveled)f) on shape %(shape_id)s. " | |
"It should be closer than %(max_distance).0f meters.") | |
def __cmp__(self, y): | |
# Sort in decreasing order because more distance is more significant. | |
return cmp(y.distance, self.distance) | |
class TooManyDaysWithoutService(ExceptionWithContext): | |
ERROR_TEXT = "There are %(consecutive_days_without_service)i consecutive"\ | |
" days, from %(first_day_without_service)s to" \ | |
" %(last_day_without_service)s, without any scheduled service." \ | |
" Please ensure this is intentional." | |
class MinimumTransferTimeSetWithInvalidTransferType(ExceptionWithContext): | |
ERROR_TEXT = "The field min_transfer_time should only be set when " \ | |
"transfer_type is set to 2, but it is set to %(transfer_type)s." | |
class ExpirationDate(ExceptionWithContext): | |
def FormatProblem(self, d=None): | |
if not d: | |
d = self.GetDictToFormat() | |
expiration = d['expiration'] | |
formatted_date = time.strftime("%B %d, %Y", | |
time.localtime(expiration)) | |
if (expiration < time.mktime(time.localtime())): | |
return "This feed expired on %s" % formatted_date | |
else: | |
return "This feed will soon expire, on %s" % formatted_date | |
class FutureService(ExceptionWithContext): | |
def FormatProblem(self, d=None): | |
if not d: | |
d = self.GetDictToFormat() | |
formatted_date = time.strftime("%B %d, %Y", time.localtime(d['start_date'])) | |
return ("The earliest service date in this feed is in the future, on %s. " | |
"Published feeds must always include the current date." % | |
formatted_date) | |
class NoServiceExceptions(ExceptionWithContext): | |
ERROR_TEXT = "All services are defined on a weekly basis from %(start)s " \ | |
"to %(end)s with no single day variations. If there are " \ | |
"exceptions such as holiday service dates please ensure they " \ | |
"are listed in calendar_dates.txt" | |
class InvalidLineEnd(ExceptionWithContext): | |
ERROR_TEXT = "Each line must end with CR LF or LF except for the last line " \ | |
"of the file. This line ends with \"%(bad_line_end)s\"." | |
class StopWithMultipleRouteTypes(ExceptionWithContext): | |
ERROR_TEXT = "Stop %(stop_name)s (ID=%(stop_id)s) belongs to both " \ | |
"subway (ID=%(route_id1)s) and bus line (ID=%(route_id2)s)." | |
class TooFastTravel(ExceptionWithContext): | |
def FormatProblem(self, d=None): | |
if not d: | |
d = self.GetDictToFormat() | |
if not d['speed']: | |
return "High speed travel detected in trip %(trip_id)s: %(prev_stop)s" \ | |
" to %(next_stop)s. %(dist).0f meters in %(time)d seconds." % d | |
else: | |
return "High speed travel detected in trip %(trip_id)s: %(prev_stop)s" \ | |
" to %(next_stop)s. %(dist).0f meters in %(time)d seconds." \ | |
" (%(speed).0f km/h)." % d | |
def __cmp__(self, y): | |
# Sort in decreasing order because more distance is more significant. We | |
# can't sort by speed because not all TooFastTravel objects have a speed. | |
return cmp(y.dist, self.dist) | |
class DuplicateTrip(ExceptionWithContext): | |
ERROR_TEXT = "Trip %(trip_id1)s of route %(route_id1)s might be duplicated " \ | |
"with trip %(trip_id2)s of route %(route_id2)s. They go " \ | |
"through the same stops with same service." | |
class OverlappingTripsInSameBlock(ExceptionWithContext): | |
ERROR_TEXT = "Trip %(trip_id1)s and trip %(trip_id2)s both are in the " \ | |
"same block %(block_id)s and have overlapping arrival times." | |
class TransferDistanceTooBig(ExceptionWithContext): | |
ERROR_TEXT = "Transfer from stop %(from_stop_id)s to stop " \ | |
"%(to_stop_id)s has a distance of %(distance)s meters." | |
class TransferWalkingSpeedTooFast(ExceptionWithContext): | |
ERROR_TEXT = "Riders transfering from stop %(from_stop_id)s to stop " \ | |
"%(to_stop_id)s would need to walk %(distance)s meters in " \ | |
"%(transfer_time)s seconds." | |
class OtherProblem(ExceptionWithContext): | |
ERROR_TEXT = '%(description)s' | |
class ExceptionProblemAccumulator(ProblemAccumulatorInterface): | |
"""A problem accumulator that handles errors and optionally warnings by | |
raising exceptions.""" | |
def __init__(self, raise_warnings=False): | |
"""Initialise. | |
Args: | |
raise_warnings: If this is True then warnings are also raised as | |
exceptions. | |
If it is false, warnings are printed to the console using | |
SimpleProblemAccumulator. | |
""" | |
self.raise_warnings = raise_warnings | |
self.accumulator = SimpleProblemAccumulator() | |
def _Report(self, e): | |
if self.raise_warnings or e.IsError(): | |
raise e | |
else: | |
self.accumulator._Report(e) | |
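The raise-or-record split implemented above can be sketched with minimal stand-in classes (illustrative names, not the transitfeed API): errors always raise, while warnings are either raised or handed to a recording fallback.

```python
class RecordingAccumulator(object):
    """Collects problems instead of raising them."""
    def __init__(self):
        self.problems = []

    def report(self, e):
        self.problems.append(e)

class RaisingAccumulator(object):
    """Raises errors; hands warnings to a recording fallback."""
    def __init__(self, raise_warnings=False):
        self.raise_warnings = raise_warnings
        self.fallback = RecordingAccumulator()

    def report(self, e):
        if self.raise_warnings or getattr(e, 'is_error', True):
            raise e
        self.fallback.report(e)

class WarningProblem(Exception):
    is_error = False

acc = RaisingAccumulator()
acc.report(WarningProblem('short headsign'))  # recorded, not raised
```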
default_accumulator = ExceptionProblemAccumulator() | |
default_problem_reporter = ProblemReporter(default_accumulator) | |
# Add a default handler to send log messages to console | |
console = logging.StreamHandler() | |
console.setLevel(logging.WARNING) | |
log = logging.getLogger("schedule_builder") | |
log.addHandler(console) | |
class Error(Exception): | |
pass | |
# Below are the exceptions related to loading and setting up Feed Validator | |
# extensions | |
class ExtensionException(Exception): | |
pass | |
class InvalidMapping(ExtensionException): | |
def __init__(self, missing_field): | |
self.missing_field = missing_field | |
class NonexistentMapping(ExtensionException): | |
def __init__(self, name): | |
self.name = name | |
class DuplicateMapping(ExtensionException): | |
def __init__(self, name): | |
self.name = name | |
class NonStandardMapping(ExtensionException): | |
def __init__(self, name): | |
self.name = name | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
from gtfsobjectbase import GtfsObjectBase | |
import problems as problems_module | |
import util | |
class Route(GtfsObjectBase): | |
"""Represents a single route.""" | |
_REQUIRED_FIELD_NAMES = [ | |
'route_id', 'route_short_name', 'route_long_name', 'route_type' | |
] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + [ | |
'agency_id', 'route_desc', 'route_url', 'route_color', 'route_text_color' | |
] | |
_ROUTE_TYPES = { | |
0: {'name':'Tram', 'max_speed':100}, | |
1: {'name':'Subway', 'max_speed':150}, | |
2: {'name':'Rail', 'max_speed':300}, | |
3: {'name':'Bus', 'max_speed':100}, | |
4: {'name':'Ferry', 'max_speed':80}, | |
5: {'name':'Cable Car', 'max_speed':50}, | |
6: {'name':'Gondola', 'max_speed':50}, | |
7: {'name':'Funicular', 'max_speed':50}, | |
} | |
# Create a reverse lookup dict of route type names to route types. | |
_ROUTE_TYPE_IDS = set(_ROUTE_TYPES.keys()) | |
_ROUTE_TYPE_NAMES = dict((v['name'], k) for k, v in _ROUTE_TYPES.items()) | |
_TABLE_NAME = 'routes' | |
def __init__(self, short_name=None, long_name=None, route_type=None, | |
route_id=None, agency_id=None, field_dict=None): | |
self._schedule = None | |
self._trips = [] | |
if not field_dict: | |
field_dict = {} | |
if short_name is not None: | |
field_dict['route_short_name'] = short_name | |
if long_name is not None: | |
field_dict['route_long_name'] = long_name | |
if route_type is not None: | |
if route_type in self._ROUTE_TYPE_NAMES: | |
self.route_type = self._ROUTE_TYPE_NAMES[route_type] | |
else: | |
field_dict['route_type'] = route_type | |
if route_id is not None: | |
field_dict['route_id'] = route_id | |
if agency_id is not None: | |
field_dict['agency_id'] = agency_id | |
self.__dict__.update(field_dict) | |
def AddTrip(self, schedule=None, headsign=None, service_period=None, | |
trip_id=None): | |
"""Add a trip to this route. | |
Args: | |
schedule: a Schedule object which will hold the new trip or None to use | |
the schedule of this route. | |
headsign: headsign of the trip as a string | |
service_period: a ServicePeriod object or None to use | |
schedule.GetDefaultServicePeriod() | |
trip_id: optional trip_id for the new trip | |
Returns: | |
a new Trip object | |
""" | |
if schedule is None: | |
assert self._schedule is not None | |
schedule = self._schedule | |
if trip_id is None: | |
trip_id = util.FindUniqueId(schedule.trips) | |
if service_period is None: | |
service_period = schedule.GetDefaultServicePeriod() | |
trip_class = self.GetGtfsFactory().Trip | |
trip_obj = trip_class(route=self, headsign=headsign, | |
service_period=service_period, trip_id=trip_id) | |
schedule.AddTripObject(trip_obj) | |
return trip_obj | |
def _AddTripObject(self, trip): | |
# Only class Schedule may call this. Users of the API should call | |
# Route.AddTrip or schedule.AddTripObject. | |
self._trips.append(trip) | |
def __getattr__(self, name): | |
"""Return None or the default value if name is a known attribute. | |
This method overrides GtfsObjectBase.__getattr__ to provide backwards | |
compatible access to trips. | |
""" | |
if name == 'trips': | |
return self._trips | |
else: | |
return GtfsObjectBase.__getattr__(self, name) | |
def GetPatternIdTripDict(self): | |
"""Return a dictionary that maps pattern_id to a list of Trip objects.""" | |
d = {} | |
for t in self._trips: | |
d.setdefault(t.pattern_id, []).append(t) | |
return d | |
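The grouping idiom above can be shown standalone, with plain tuples standing in for Trip objects:

```python
# Each trip carries a pattern_id; setdefault groups trips per pattern,
# exactly as GetPatternIdTripDict does for Trip objects.
trips = [('p1', 't1'), ('p2', 't2'), ('p1', 't3')]
by_pattern = {}
for pattern_id, trip_id in trips:
    by_pattern.setdefault(pattern_id, []).append(trip_id)
```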
def ValidateRouteIdIsPresent(self, problems): | |
if util.IsEmpty(self.route_id): | |
problems.MissingValue('route_id') | |
def ValidateRouteTypeIsPresent(self, problems): | |
if util.IsEmpty(self.route_type): | |
problems.MissingValue('route_type') | |
def ValidateRouteShortAndLongNamesAreNotBlank(self, problems): | |
if util.IsEmpty(self.route_short_name) and \ | |
util.IsEmpty(self.route_long_name): | |
problems.InvalidValue('route_short_name', | |
self.route_short_name, | |
'Both route_short_name and ' | |
'route_long_name are blank.') | |
def ValidateRouteShortNameIsNotTooLong(self, problems): | |
if self.route_short_name and len(self.route_short_name) > 6: | |
problems.InvalidValue('route_short_name', | |
self.route_short_name, | |
'This route_short_name is relatively long, which ' | |
'probably means that it contains a place name. ' | |
'You should only use this field to hold a short ' | |
'code that riders use to identify a route. ' | |
'If this route doesn\'t have such a code, it\'s ' | |
'OK to leave this field empty.', | |
type=problems_module.TYPE_WARNING) | |
def ValidateRouteLongNameDoesNotContainShortName(self, problems): | |
if self.route_short_name and self.route_long_name: | |
short_name = self.route_short_name.strip().lower() | |
long_name = self.route_long_name.strip().lower() | |
if (long_name.startswith(short_name + ' ') or | |
long_name.startswith(short_name + '(') or | |
long_name.startswith(short_name + '-')): | |
problems.InvalidValue('route_long_name', | |
self.route_long_name, | |
'route_long_name shouldn\'t contain ' | |
'the route_short_name value, as both ' | |
'fields are often displayed ' | |
'side-by-side.', | |
type=problems_module.TYPE_WARNING) | |
def ValidateRouteShortAndLongNamesAreNotEqual(self, problems): | |
if self.route_short_name and self.route_long_name: | |
short_name = self.route_short_name.strip().lower() | |
long_name = self.route_long_name.strip().lower() | |
if long_name == short_name: | |
problems.InvalidValue('route_long_name', | |
self.route_long_name, | |
'route_long_name shouldn\'t be the same ' | |
'as the route_short_name value, as both ' | |
'fields are often displayed ' | |
'side-by-side. It\'s OK to omit either the ' | |
'short or long name (but not both).', | |
type=problems_module.TYPE_WARNING) | |
def ValidateRouteDescriptionNotTheSameAsRouteName(self, problems): | |
if (self.route_desc and | |
((self.route_desc == self.route_short_name) or | |
(self.route_desc == self.route_long_name))): | |
problems.InvalidValue('route_desc', | |
self.route_desc, | |
'route_desc shouldn\'t be the same as ' | |
'route_short_name or route_long_name') | |
def ValidateRouteTypeHasValidValue(self, problems): | |
if self.route_type is not None: | |
try: | |
if not isinstance(self.route_type, int): | |
self.route_type = util.NonNegIntStringToInt(self.route_type, problems) | |
except (TypeError, ValueError): | |
problems.InvalidValue('route_type', self.route_type) | |
else: | |
if self.route_type not in self._ROUTE_TYPE_IDS: | |
problems.InvalidValue('route_type', | |
self.route_type, | |
type=problems_module.TYPE_WARNING) | |
def ValidateRouteUrl(self, problems): | |
if self.route_url and not util.IsValidURL(self.route_url): | |
problems.InvalidValue('route_url', self.route_url) | |
def ValidateRouteColor(self, problems): | |
if self.route_color: | |
if not util.IsValidColor(self.route_color): | |
problems.InvalidValue('route_color', self.route_color, | |
'route_color should be a valid color description ' | |
'which consists of 6 hexadecimal characters ' | |
'representing the RGB values. Example: 44AA06') | |
self.route_color = None | |
def ValidateRouteTextColor(self, problems): | |
if self.route_text_color: | |
if not util.IsValidColor(self.route_text_color): | |
problems.InvalidValue('route_text_color', self.route_text_color, | |
'route_text_color should be a valid color ' | |
'description, which consists of 6 hexadecimal ' | |
'characters representing the RGB values. ' | |
'Example: 44AA06') | |
self.route_text_color = None | |
def ValidateRouteAndTextColors(self, problems): | |
if self.route_color: | |
bg_lum = util.ColorLuminance(self.route_color) | |
else: | |
bg_lum = util.ColorLuminance('ffffff') # white (default) | |
if self.route_text_color: | |
txt_lum = util.ColorLuminance(self.route_text_color) | |
else: | |
txt_lum = util.ColorLuminance('000000') # black (default) | |
if abs(txt_lum - bg_lum) < 510/7.: | |
# http://www.w3.org/TR/2000/WD-AERT-20000426#color-contrast recommends | |
# a threshold of 125, but that is for normal text and too harsh for | |
# big colored logos like line names, so we keep the original threshold | |
# from r541 (but note that weight has shifted between RGB components). | |
problems.InvalidValue('route_color', self.route_color, | |
'The route_text_color and route_color should ' | |
'be set to contrasting colors, as they are used ' | |
'as the text and background color (respectively) ' | |
'for displaying route names. When left blank, ' | |
'route_text_color defaults to 000000 (black) and ' | |
'route_color defaults to FFFFFF (white). A common ' | |
'source of issues here is setting route_color to ' | |
'a dark color, while leaving route_text_color set ' | |
'to black. In this case, route_text_color should ' | |
'be set to a lighter color like FFFFFF to ensure ' | |
'a legible contrast between the two.', | |
type=problems_module.TYPE_WARNING) | |
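The contrast check can be sketched standalone. This assumes util.ColorLuminance follows the W3C AERT brightness formula (0.299 R + 0.587 G + 0.114 B); as the comment above notes, the exact component weights in util may differ.

```python
def color_luminance(color):
    """Perceived brightness of a 6-digit hex color, range 0..255
    (W3C AERT weights; assumed, not necessarily util's exact values)."""
    r, g, b = (int(color[i:i + 2], 16) for i in (0, 2, 4))
    return (299 * r + 587 * g + 114 * b) / 1000.0

def contrast_ok(route_color, route_text_color):
    bg = color_luminance(route_color or 'ffffff')        # white default
    txt = color_luminance(route_text_color or '000000')  # black default
    return abs(txt - bg) >= 510 / 7.0  # threshold used in the check above
```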
def ValidateBeforeAdd(self, problems): | |
self.ValidateRouteIdIsPresent(problems) | |
self.ValidateRouteTypeIsPresent(problems) | |
self.ValidateRouteShortAndLongNamesAreNotBlank(problems) | |
self.ValidateRouteShortNameIsNotTooLong(problems) | |
self.ValidateRouteLongNameDoesNotContainShortName(problems) | |
self.ValidateRouteShortAndLongNamesAreNotEqual(problems) | |
self.ValidateRouteDescriptionNotTheSameAsRouteName(problems) | |
self.ValidateRouteTypeHasValidValue(problems) | |
self.ValidateRouteUrl(problems) | |
self.ValidateRouteColor(problems) | |
self.ValidateRouteTextColor(problems) | |
self.ValidateRouteAndTextColors(problems) | |
# None of these checks are blocking | |
return True | |
def ValidateAfterAdd(self, problems): | |
return | |
def AddToSchedule(self, schedule, problems): | |
schedule.AddRouteObject(self, problems) | |
def Validate(self, problems=problems_module.default_problem_reporter): | |
self.ValidateBeforeAdd(problems) | |
self.ValidateAfterAdd(problems) | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
import bisect | |
import cStringIO as StringIO | |
import datetime | |
import itertools | |
import os | |
try: | |
import sqlite3 as sqlite | |
except ImportError: | |
from pysqlite2 import dbapi2 as sqlite | |
import tempfile | |
import time | |
import warnings | |
# Objects in a schedule (Route, Trip, etc) should not keep a strong reference | |
# to the Schedule object to avoid a reference cycle. Schedule needs to use | |
# __del__ to clean up its temporary file. The garbage collector can't handle | |
# reference cycles containing objects with custom cleanup code. | |
import weakref | |
import zipfile | |
import gtfsfactory | |
import problems as problems_module | |
from transitfeed.util import defaultdict | |
import util | |
class Schedule: | |
"""Represents a Schedule, a collection of stops, routes, trips and | |
an agency. This is the main class for this module.""" | |
def __init__(self, problem_reporter=None, | |
memory_db=True, check_duplicate_trips=False, | |
gtfs_factory=None): | |
if gtfs_factory is None: | |
gtfs_factory = gtfsfactory.GetGtfsFactory() | |
self._gtfs_factory = gtfs_factory | |
# Map from table name to list of columns present in this schedule | |
self._table_columns = {} | |
self._agencies = {} | |
self.stops = {} | |
self.routes = {} | |
self.trips = {} | |
self.service_periods = {} | |
self.fares = {} | |
self.fare_zones = {} # represents the set of all known fare zones | |
self._shapes = {} # shape_id to Shape | |
# A map from transfer._ID() to a list of transfers. A list is used so | |
# there can be more than one transfer with each ID. Once GTFS explicitly | |
# prohibits duplicate IDs this might be changed to a simple dict of | |
# Transfers. | |
self._transfers = defaultdict(lambda: []) | |
self._default_service_period = None | |
self._default_agency = None | |
if problem_reporter is None: | |
self.problem_reporter = problems_module.default_problem_reporter | |
else: | |
self.problem_reporter = problem_reporter | |
self._check_duplicate_trips = check_duplicate_trips | |
self.ConnectDb(memory_db) | |
def AddTableColumn(self, table, column): | |
"""Add column to table if it is not already there.""" | |
if column not in self._table_columns[table]: | |
self._table_columns[table].append(column) | |
def AddTableColumns(self, table, columns): | |
"""Add columns to table if they are not already there. | |
Args: | |
table: table name as a string | |
columns: an iterable of column names""" | |
table_columns = self._table_columns.setdefault(table, []) | |
for attr in columns: | |
if attr not in table_columns: | |
table_columns.append(attr) | |
def GetTableColumns(self, table): | |
"""Return list of columns in a table.""" | |
return self._table_columns[table] | |
def __del__(self): | |
self._connection.cursor().close() | |
self._connection.close() | |
if hasattr(self, '_temp_db_filename'): | |
os.remove(self._temp_db_filename) | |
def ConnectDb(self, memory_db): | |
if memory_db: | |
self._connection = sqlite.connect(":memory:") | |
else: | |
try: | |
self._temp_db_file = tempfile.NamedTemporaryFile() | |
self._connection = sqlite.connect(self._temp_db_file.name) | |
except sqlite.OperationalError: | |
# Windows won't let a file be opened twice. mkstemp does not remove the | |
# file when all handles to it are closed. | |
self._temp_db_file = None | |
(fd, self._temp_db_filename) = tempfile.mkstemp(".db") | |
os.close(fd) | |
self._connection = sqlite.connect(self._temp_db_filename) | |
cursor = self._connection.cursor() | |
cursor.execute("""CREATE TABLE stop_times ( | |
trip_id CHAR(50), | |
arrival_secs INTEGER, | |
departure_secs INTEGER, | |
stop_id CHAR(50), | |
stop_sequence INTEGER, | |
stop_headsign VARCHAR(100),
pickup_type INTEGER, | |
drop_off_type INTEGER, | |
shape_dist_traveled FLOAT);""") | |
cursor.execute("""CREATE INDEX trip_index ON stop_times (trip_id);""") | |
cursor.execute("""CREATE INDEX stop_index ON stop_times (stop_id);""") | |
def GetStopBoundingBox(self): | |
return (min(s.stop_lat for s in self.stops.values()), | |
min(s.stop_lon for s in self.stops.values()), | |
max(s.stop_lat for s in self.stops.values()), | |
max(s.stop_lon for s in self.stops.values()), | |
) | |
def AddAgency(self, name, url, timezone, agency_id=None): | |
"""Adds an agency to this schedule.""" | |
agency = self._gtfs_factory.Agency(name, url, timezone, agency_id) | |
self.AddAgencyObject(agency) | |
return agency | |
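# A minimal usage sketch (hypothetical values; assumes an existing Schedule
# instance named `schedule`):
#   agency = schedule.AddAgency("Demo Transit", "http://example.com",
#                               "America/Los_Angeles")
#   assert schedule.GetAgency(agency.agency_id) is agency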
def AddAgencyObject(self, agency, problem_reporter=None, validate=False): | |
assert agency._schedule is None | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
if agency.agency_id in self._agencies: | |
problem_reporter.DuplicateID('agency_id', agency.agency_id) | |
return | |
self.AddTableColumns('agency', agency._ColumnNames()) | |
agency._schedule = weakref.proxy(self) | |
if validate: | |
agency.Validate(problem_reporter) | |
self._agencies[agency.agency_id] = agency | |
def GetAgency(self, agency_id): | |
"""Return Agency with agency_id or throw a KeyError""" | |
return self._agencies[agency_id] | |
def GetDefaultAgency(self): | |
"""Return the default Agency. If no default Agency has been set select the | |
default depending on how many Agency objects are in the Schedule. If there | |
are 0 make a new Agency the default, if there is 1 it becomes the default, | |
if there is more than 1 then return None. | |
""" | |
if not self._default_agency: | |
if len(self._agencies) == 0: | |
self.NewDefaultAgency() | |
elif len(self._agencies) == 1: | |
self._default_agency = self._agencies.values()[0] | |
return self._default_agency | |
def NewDefaultAgency(self, **kwargs): | |
"""Create a new Agency object and make it the default agency for this Schedule""" | |
agency = self._gtfs_factory.Agency(**kwargs) | |
if not agency.agency_id: | |
agency.agency_id = util.FindUniqueId(self._agencies) | |
self._default_agency = agency | |
self.SetDefaultAgency(agency, validate=False) # Blank agency won't validate | |
return agency | |
def SetDefaultAgency(self, agency, validate=True): | |
"""Make agency the default and add it to the schedule if not already added""" | |
assert isinstance(agency, self._gtfs_factory.Agency) | |
self._default_agency = agency | |
if agency.agency_id not in self._agencies: | |
self.AddAgencyObject(agency, validate=validate) | |
def GetAgencyList(self): | |
"""Returns the list of Agency objects known to this Schedule.""" | |
return self._agencies.values() | |
def GetServicePeriod(self, service_id): | |
"""Returns the ServicePeriod object with the given ID.""" | |
return self.service_periods[service_id] | |
def GetDefaultServicePeriod(self): | |
"""Return the default ServicePeriod. If no default ServicePeriod has been | |
set select the default depending on how many ServicePeriod objects are in | |
the Schedule. If there are 0 make a new ServicePeriod the default, if there | |
is 1 it becomes the default, if there is more than 1 then return None. | |
""" | |
if not self._default_service_period: | |
if len(self.service_periods) == 0: | |
self.NewDefaultServicePeriod() | |
elif len(self.service_periods) == 1: | |
self._default_service_period = self.service_periods.values()[0] | |
return self._default_service_period | |
def NewDefaultServicePeriod(self): | |
"""Create a new ServicePeriod object, make it the default service period and | |
return it. The default service period is used when you create a trip without | |
providing an explict service period. """ | |
service_period = self._gtfs_factory.ServicePeriod() | |
service_period.service_id = util.FindUniqueId(self.service_periods) | |
# blank service won't validate in AddServicePeriodObject | |
self.SetDefaultServicePeriod(service_period, validate=False) | |
return service_period | |
def SetDefaultServicePeriod(self, service_period, validate=True): | |
assert isinstance(service_period, self._gtfs_factory.ServicePeriod) | |
self._default_service_period = service_period | |
if service_period.service_id not in self.service_periods: | |
self.AddServicePeriodObject(service_period, validate=validate) | |
def AddServicePeriodObject(self, service_period, problem_reporter=None, | |
validate=True): | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
if service_period.service_id in self.service_periods: | |
problem_reporter.DuplicateID('service_id', service_period.service_id) | |
return | |
if validate: | |
service_period.Validate(problem_reporter) | |
self.service_periods[service_period.service_id] = service_period | |
def GetServicePeriodList(self): | |
return self.service_periods.values() | |
def GetDateRange(self): | |
"""Returns a tuple of (earliest, latest) dates on which the service | |
periods in the schedule define service, in YYYYMMDD form.""" | |
ranges = [period.GetDateRange() for period in self.GetServicePeriodList()] | |
starts = filter(lambda x: x, [item[0] for item in ranges]) | |
ends = filter(lambda x: x, [item[1] for item in ranges]) | |
if not starts or not ends: | |
return (None, None) | |
return (min(starts), max(ends)) | |
def GetServicePeriodsActiveEachDate(self, date_start, date_end): | |
"""Return a list of tuples (date, [period1, period2, ...]). | |
For each date in the range [date_start, date_end) make list of each | |
ServicePeriod object which is active. | |
Args: | |
date_start: The first date in the list, a date object | |
date_end: The first date after the list, a date object | |
Returns: | |
A list of tuples. Each tuple contains a date object and a list of zero or | |
more ServicePeriod objects. | |
""" | |
date_it = date_start | |
one_day = datetime.timedelta(days=1) | |
date_service_period_list = [] | |
while date_it < date_end: | |
periods_today = [] | |
date_it_string = date_it.strftime("%Y%m%d") | |
for service in self.GetServicePeriodList(): | |
if service.IsActiveOn(date_it_string, date_it): | |
periods_today.append(service) | |
date_service_period_list.append((date_it, periods_today)) | |
date_it += one_day | |
return date_service_period_list | |
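# Usage sketch (hypothetical dates; assumes a loaded `schedule`):
#   import datetime
#   active = schedule.GetServicePeriodsActiveEachDate(
#       datetime.date(2010, 1, 1), datetime.date(2010, 1, 8))
#   # active is a list of 7 (date, [ServicePeriod, ...]) tuples, one per day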
def AddStop(self, lat, lng, name, stop_id=None): | |
"""Add a stop to this schedule. | |
Args: | |
lat: Latitude of the stop as a float or string | |
lng: Longitude of the stop as a float or string | |
name: Name of the stop, which will appear in the feed | |
stop_id: stop_id of the stop or None, in which case a unique id is picked | |
Returns: | |
A new Stop object | |
""" | |
if stop_id is None: | |
stop_id = util.FindUniqueId(self.stops) | |
stop = self._gtfs_factory.Stop(stop_id=stop_id, lat=lat, lng=lng, name=name) | |
self.AddStopObject(stop) | |
return stop | |
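# Usage sketch (hypothetical coordinates; assumes an existing `schedule`):
#   stop = schedule.AddStop(lat=37.78, lng=-122.41, name="Market St")
#   assert schedule.GetStop(stop.stop_id) is stop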
def AddStopObject(self, stop, problem_reporter=None): | |
"""Add Stop object to this schedule if stop_id is non-blank.""" | |
assert stop._schedule is None | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
if not stop.stop_id: | |
return | |
if stop.stop_id in self.stops: | |
problem_reporter.DuplicateID('stop_id', stop.stop_id) | |
return | |
stop._schedule = weakref.proxy(self) | |
self.AddTableColumns('stops', stop._ColumnNames()) | |
self.stops[stop.stop_id] = stop | |
if hasattr(stop, 'zone_id') and stop.zone_id: | |
self.fare_zones[stop.zone_id] = True | |
def GetStopList(self): | |
return self.stops.values() | |
def AddRoute(self, short_name, long_name, route_type, route_id=None): | |
"""Add a route to this schedule. | |
Args: | |
short_name: Short name of the route, such as "71L" | |
long_name: Full name of the route, such as "NW 21st Ave/St Helens Rd" | |
route_type: A type such as "Tram", "Subway" or "Bus" | |
route_id: id of the route or None, in which case a unique id is picked | |
Returns: | |
A new Route object | |
""" | |
if route_id is None: | |
route_id = util.FindUniqueId(self.routes) | |
route = self._gtfs_factory.Route(short_name=short_name, long_name=long_name, | |
route_type=route_type, route_id=route_id) | |
route.agency_id = self.GetDefaultAgency().agency_id | |
self.AddRouteObject(route) | |
return route | |
def AddRouteObject(self, route, problem_reporter=None): | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
if route.route_id in self.routes: | |
problem_reporter.DuplicateID('route_id', route.route_id) | |
return | |
if route.agency_id not in self._agencies: | |
if not route.agency_id and len(self._agencies) == 1: | |
# we'll just assume that the route applies to the only agency | |
pass | |
else: | |
problem_reporter.InvalidValue('agency_id', route.agency_id, | |
'Route uses an unknown agency_id.') | |
return | |
self.AddTableColumns('routes', route._ColumnNames()) | |
route._schedule = weakref.proxy(self) | |
self.routes[route.route_id] = route | |
def GetRouteList(self): | |
return self.routes.values() | |
def GetRoute(self, route_id): | |
return self.routes[route_id] | |
def AddShapeObject(self, shape, problem_reporter=None): | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
shape.Validate(problem_reporter) | |
if shape.shape_id in self._shapes: | |
problem_reporter.DuplicateID('shape_id', shape.shape_id) | |
return | |
self._shapes[shape.shape_id] = shape | |
def GetShapeList(self): | |
return self._shapes.values() | |
def GetShape(self, shape_id): | |
return self._shapes[shape_id] | |
def AddTripObject(self, trip, problem_reporter=None, validate=False): | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
if trip.trip_id in self.trips: | |
problem_reporter.DuplicateID('trip_id', trip.trip_id) | |
return | |
self.AddTableColumns('trips', trip._ColumnNames()) | |
trip._schedule = weakref.proxy(self) | |
self.trips[trip.trip_id] = trip | |
# Call Trip.Validate after setting trip._schedule so that references | |
# are checked. trip.ValidateChildren will be called directly by | |
# schedule.Validate, after stop_times has been loaded. | |
if validate:
trip.Validate(problem_reporter, validate_children=False)
try: | |
self.routes[trip.route_id]._AddTripObject(trip) | |
except KeyError: | |
# Invalid route_id was reported in the Trip.Validate call above | |
pass | |
def GetTripList(self): | |
return self.trips.values() | |
def GetTrip(self, trip_id): | |
return self.trips[trip_id] | |
def AddFareObject(self, fare, problem_reporter=None): | |
"""Deprecated. Please use AddFareAttributeObject.""" | |
warnings.warn("No longer supported. The Fare class was renamed to " | |
"FareAttribute, and all related functions were renamed " | |
"accordingly.", DeprecationWarning) | |
self.AddFareAttributeObject(fare, problem_reporter) | |
def AddFareAttributeObject(self, fare, problem_reporter=None): | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
fare.Validate(problem_reporter) | |
if fare.fare_id in self.fares: | |
problem_reporter.DuplicateID('fare_id', fare.fare_id) | |
return | |
self.fares[fare.fare_id] = fare | |
def GetFareList(self): | |
"""Deprecated. Please use GetFareAttributeList instead""" | |
warnings.warn("No longer supported. The Fare class was renamed to " | |
"FareAttribute, and all related functions were renamed " | |
"accordingly.", DeprecationWarning) | |
return self.GetFareAttributeList() | |
def GetFareAttributeList(self): | |
return self.fares.values() | |
def GetFare(self, fare_id): | |
"""Deprecated. Please use GetFareAttribute instead""" | |
warnings.warn("No longer supported. The Fare class was renamed to " | |
"FareAttribute, and all related functions were renamed " | |
"accordingly.", DeprecationWarning) | |
return self.GetFareAttribute(fare_id) | |
def GetFareAttribute(self, fare_id): | |
return self.fares[fare_id] | |
def AddFareRuleObject(self, rule, problem_reporter=None): | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
if util.IsEmpty(rule.fare_id): | |
problem_reporter.MissingValue('fare_id') | |
return | |
if rule.route_id and rule.route_id not in self.routes: | |
problem_reporter.InvalidValue('route_id', rule.route_id) | |
if rule.origin_id and rule.origin_id not in self.fare_zones: | |
problem_reporter.InvalidValue('origin_id', rule.origin_id) | |
if rule.destination_id and rule.destination_id not in self.fare_zones: | |
problem_reporter.InvalidValue('destination_id', rule.destination_id) | |
if rule.contains_id and rule.contains_id not in self.fare_zones: | |
problem_reporter.InvalidValue('contains_id', rule.contains_id) | |
if rule.fare_id in self.fares: | |
self.GetFareAttribute(rule.fare_id).rules.append(rule) | |
else: | |
problem_reporter.InvalidValue('fare_id', rule.fare_id, | |
'(This fare_id doesn\'t correspond to any ' | |
'of the IDs defined in the ' | |
'fare attributes.)') | |
def AddTransferObject(self, transfer, problem_reporter=None): | |
assert transfer._schedule is None, "only add Transfer to a schedule once" | |
if not problem_reporter: | |
problem_reporter = self.problem_reporter | |
transfer_id = transfer._ID() | |
if transfer_id in self._transfers:
problem_reporter.DuplicateID(self._gtfs_factory.Transfer._ID_COLUMNS,
transfer_id,
type=problems_module.TYPE_WARNING)
# Duplicates are reported but still added, since GTFS does not prohibit
# them.
transfer._schedule = weakref.proxy(self) # See weakref comment at top | |
self.AddTableColumns('transfers', transfer._ColumnNames()) | |
self._transfers[transfer_id].append(transfer) | |
def GetTransferIter(self): | |
"""Return an iterator for all Transfer objects in this schedule.""" | |
return itertools.chain(*self._transfers.values()) | |
def GetTransferList(self): | |
"""Return a list containing all Transfer objects in this schedule.""" | |
return list(self.GetTransferIter()) | |
def GetStop(self, id): | |
return self.stops[id] | |
def GetFareZones(self): | |
"""Returns the list of all fare zones that have been identified by | |
the stops that have been added.""" | |
return self.fare_zones.keys() | |
def GetNearestStops(self, lat, lon, n=1): | |
"""Return the n nearest stops to lat,lon""" | |
dist_stop_list = [] | |
for s in self.stops.values(): | |
# TODO: Use util.ApproximateDistanceBetweenStops? | |
dist = (s.stop_lat - lat)**2 + (s.stop_lon - lon)**2 | |
if len(dist_stop_list) < n: | |
bisect.insort(dist_stop_list, (dist, s)) | |
elif dist < dist_stop_list[-1][0]: | |
bisect.insort(dist_stop_list, (dist, s)) | |
dist_stop_list.pop() # Remove stop with greatest distance | |
return [stop for dist, stop in dist_stop_list] | |
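# Usage sketch: find the three stops nearest a point (note that candidates
# are ranked by squared lat/lon deltas, not true distance in meters):
#   nearest = schedule.GetNearestStops(37.78, -122.41, n=3)
#   # nearest is a list of up to 3 Stop objects, closest first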
def GetStopsInBoundingBox(self, north, east, south, west, n): | |
"""Return a sample of up to n stops in a bounding box""" | |
stop_list = [] | |
for s in self.stops.values(): | |
if (s.stop_lat <= north and s.stop_lat >= south and | |
s.stop_lon <= east and s.stop_lon >= west): | |
stop_list.append(s) | |
if len(stop_list) == n: | |
break | |
return stop_list | |
def Load(self, feed_path, extra_validation=False): | |
loader = self._gtfs_factory.Loader(feed_path, | |
self, problems=self.problem_reporter, | |
extra_validation=extra_validation) | |
loader.Load() | |
def _WriteArchiveString(self, archive, filename, stringio): | |
zi = zipfile.ZipInfo(filename) | |
# See | |
# http://stackoverflow.com/questions/434641/how-do-i-set-permissions-attributes-on-a-file-in-a-zip-file-using-pythons-zipf | |
zi.external_attr = 0666 << 16L # Set unix permissions to -rw-rw-rw | |
# ZIP_DEFLATED requires zlib. zlib comes with Python 2.4 and 2.5 | |
zi.compress_type = zipfile.ZIP_DEFLATED | |
archive.writestr(zi, stringio.getvalue()) | |
def WriteGoogleTransitFeed(self, file): | |
"""Output this schedule as a Google Transit Feed in file_name. | |
Args: | |
file: path of new feed file (a string) or a file-like object | |
Returns: | |
None | |
""" | |
# Compression type given when adding each file | |
archive = zipfile.ZipFile(file, 'w') | |
if 'agency' in self._table_columns: | |
agency_string = StringIO.StringIO() | |
writer = util.CsvUnicodeWriter(agency_string) | |
columns = self.GetTableColumns('agency') | |
writer.writerow(columns) | |
for a in self._agencies.values(): | |
writer.writerow([util.EncodeUnicode(a[c]) for c in columns]) | |
self._WriteArchiveString(archive, 'agency.txt', agency_string) | |
calendar_dates_string = StringIO.StringIO() | |
writer = util.CsvUnicodeWriter(calendar_dates_string) | |
writer.writerow( | |
self._gtfs_factory.ServicePeriod._FIELD_NAMES_CALENDAR_DATES) | |
has_data = False | |
for period in self.service_periods.values(): | |
for row in period.GenerateCalendarDatesFieldValuesTuples(): | |
has_data = True | |
writer.writerow(row) | |
wrote_calendar_dates = False | |
if has_data: | |
wrote_calendar_dates = True | |
self._WriteArchiveString(archive, 'calendar_dates.txt', | |
calendar_dates_string) | |
calendar_string = StringIO.StringIO() | |
writer = util.CsvUnicodeWriter(calendar_string) | |
writer.writerow(self._gtfs_factory.ServicePeriod._FIELD_NAMES) | |
has_data = False | |
for s in self.service_periods.values(): | |
row = s.GetCalendarFieldValuesTuple() | |
if row: | |
has_data = True | |
writer.writerow(row) | |
if has_data or not wrote_calendar_dates: | |
self._WriteArchiveString(archive, 'calendar.txt', calendar_string) | |
if 'stops' in self._table_columns: | |
stop_string = StringIO.StringIO() | |
writer = util.CsvUnicodeWriter(stop_string) | |
columns = self.GetTableColumns('stops') | |
writer.writerow(columns) | |
for s in self.stops.values(): | |
writer.writerow([util.EncodeUnicode(s[c]) for c in columns]) | |
self._WriteArchiveString(archive, 'stops.txt', stop_string) | |
if 'routes' in self._table_columns: | |
route_string = StringIO.StringIO() | |
writer = util.CsvUnicodeWriter(route_string) | |
columns = self.GetTableColumns('routes') | |
writer.writerow(columns) | |
for r in self.routes.values(): | |
writer.writerow([util.EncodeUnicode(r[c]) for c in columns]) | |
self._WriteArchiveString(archive, 'routes.txt', route_string) | |
if 'trips' in self._table_columns: | |
trips_string = StringIO.StringIO() | |
writer = util.CsvUnicodeWriter(trips_string) | |
columns = self.GetTableColumns('trips') | |
writer.writerow(columns) | |
for t in self.trips.values(): | |
writer.writerow([util.EncodeUnicode(t[c]) for c in columns]) | |
self._WriteArchiveString(archive, 'trips.txt', trips_string) | |
# write frequencies.txt (if applicable) | |
headway_rows = [] | |
for trip in self.GetTripList(): | |
headway_rows += trip.GetFrequencyOutputTuples() | |
if headway_rows: | |
headway_string = StringIO.StringIO() | |
writer = util.CsvUnicodeWriter(headway_string) | |
writer.writerow(self._gtfs_factory.Frequency._FIELD_NAMES) | |
writer.writerows(headway_rows) | |
self._WriteArchiveString(archive, 'frequencies.txt', headway_string) | |
# write fares (if applicable) | |
if self.GetFareAttributeList(): | |
fare_string = StringIO.StringIO() | |
writer = util.CsvUnicodeWriter(fare_string) | |
writer.writerow(self._gtfs_factory.FareAttribute._FIELD_NAMES) | |
writer.writerows( | |
f.GetFieldValuesTuple() for f in self.GetFareAttributeList()) | |
self._WriteArchiveString(archive, 'fare_attributes.txt', fare_string) | |
# write fare rules (if applicable) | |
rule_rows = [] | |
for fare in self.GetFareAttributeList(): | |
for rule in fare.GetFareRuleList(): | |
rule_rows.append(rule.GetFieldValuesTuple()) | |
if rule_rows: | |
rule_string = StringIO.StringIO() | |
writer = util.CsvUnicodeWriter(rule_string) | |
writer.writerow(self._gtfs_factory.FareRule._FIELD_NAMES) | |
writer.writerows(rule_rows) | |
self._WriteArchiveString(archive, 'fare_rules.txt', rule_string) | |
stop_times_string = StringIO.StringIO() | |
writer = util.CsvUnicodeWriter(stop_times_string) | |
writer.writerow(self._gtfs_factory.StopTime._FIELD_NAMES) | |
for t in self.trips.values(): | |
writer.writerows(t._GenerateStopTimesTuples()) | |
self._WriteArchiveString(archive, 'stop_times.txt', stop_times_string) | |
# write shapes (if applicable) | |
shape_rows = [] | |
for shape in self.GetShapeList(): | |
seq = 1 | |
for (lat, lon, dist) in shape.points: | |
shape_rows.append((shape.shape_id, lat, lon, seq, dist)) | |
seq += 1 | |
if shape_rows: | |
shape_string = StringIO.StringIO() | |
writer = util.CsvUnicodeWriter(shape_string) | |
writer.writerow(self._gtfs_factory.Shape._FIELD_NAMES) | |
writer.writerows(shape_rows) | |
self._WriteArchiveString(archive, 'shapes.txt', shape_string) | |
if 'transfers' in self._table_columns: | |
transfer_string = StringIO.StringIO() | |
writer = util.CsvUnicodeWriter(transfer_string) | |
columns = self.GetTableColumns('transfers') | |
writer.writerow(columns) | |
for t in self.GetTransferIter(): | |
writer.writerow([util.EncodeUnicode(t[c]) for c in columns]) | |
self._WriteArchiveString(archive, 'transfers.txt', transfer_string) | |
archive.close() | |
def GenerateDateTripsDeparturesList(self, date_start, date_end): | |
"""Return a list of (date object, number of trips, number of departures). | |
The list is generated for dates in the range [date_start, date_end). | |
Args: | |
date_start: The first date in the list, a date object | |
date_end: The first date after the list, a date object | |
Returns: | |
a list of (date object, number of trips, number of departures) tuples | |
""" | |
service_id_to_trips = defaultdict(lambda: 0) | |
service_id_to_departures = defaultdict(lambda: 0) | |
for trip in self.GetTripList(): | |
headway_start_times = trip.GetFrequencyStartTimes() | |
if headway_start_times: | |
trip_runs = len(headway_start_times) | |
else: | |
trip_runs = 1 | |
service_id_to_trips[trip.service_id] += trip_runs | |
service_id_to_departures[trip.service_id] += ( | |
(trip.GetCountStopTimes() - 1) * trip_runs) | |
date_services = self.GetServicePeriodsActiveEachDate(date_start, date_end) | |
date_trips = [] | |
for date, services in date_services: | |
day_trips = sum(service_id_to_trips[s.service_id] for s in services) | |
day_departures = sum( | |
service_id_to_departures[s.service_id] for s in services) | |
date_trips.append((date, day_trips, day_departures)) | |
return date_trips | |
def ValidateFeedStartAndExpirationDates(self, | |
problems, | |
first_date, | |
last_date, | |
today): | |
"""Validate the start and expiration dates of the feed. | |
Issue a warning if the feed only starts in the future, or if it
expires within 60 days.
Args: | |
problems: The problem reporter object | |
first_date: A date object representing the first day the feed is active | |
last_date: A date object representing the last day the feed is active | |
today: A date object representing the date the validation is being run on | |
Returns: | |
None | |
""" | |
warning_cutoff = today + datetime.timedelta(days=60) | |
if last_date < warning_cutoff: | |
problems.ExpirationDate(time.mktime(last_date.timetuple())) | |
if first_date > today: | |
problems.FutureService(time.mktime(first_date.timetuple())) | |
def ValidateServiceGaps(self, | |
problems, | |
validation_start_date, | |
validation_end_date, | |
service_gap_interval): | |
"""Validate consecutive dates without service in the feed. | |
Issue a warning if it finds service gaps of at least | |
"service_gap_interval" consecutive days in the date range | |
[validation_start_date, validation_end_date)
Args: | |
problems: The problem reporter object | |
validation_start_date: A date object representing the date from which the | |
validation should take place | |
validation_end_date: A date object representing the first day after the
date range to validate
service_gap_interval: An integer indicating how many consecutive days the | |
service gaps need to have for a warning to be issued | |
Returns: | |
None | |
""" | |
if service_gap_interval is None: | |
return | |
departures = self.GenerateDateTripsDeparturesList(validation_start_date, | |
validation_end_date) | |
# The first day without service of the _current_ gap | |
first_day_without_service = validation_start_date | |
# The last day without service of the _current_ gap | |
last_day_without_service = validation_start_date | |
consecutive_days_without_service = 0 | |
for day_date, day_trips, _ in departures: | |
if day_trips == 0: | |
if consecutive_days_without_service == 0: | |
first_day_without_service = day_date | |
consecutive_days_without_service += 1 | |
last_day_without_service = day_date | |
else: | |
if consecutive_days_without_service >= service_gap_interval: | |
problems.TooManyDaysWithoutService(first_day_without_service, | |
last_day_without_service, | |
consecutive_days_without_service) | |
consecutive_days_without_service = 0 | |
# We have to check if there is a gap at the end of the specified date range | |
if consecutive_days_without_service >= service_gap_interval: | |
problems.TooManyDaysWithoutService(first_day_without_service, | |
last_day_without_service, | |
consecutive_days_without_service) | |
def ValidateServiceExceptions(self, | |
problems, | |
first_service_day, | |
last_service_day): | |
# good enough approximation | |
six_months = datetime.timedelta(days=182) | |
service_span = last_service_day - first_service_day | |
if service_span < six_months: | |
# We don't check for exceptions because the feed is | |
# active for less than six months | |
return | |
for period in self.GetServicePeriodList(): | |
# If at least one ServicePeriod has service exceptions we don't issue the | |
# warning, so we can stop looking at the list of ServicePeriods. | |
if period.HasExceptions(): | |
return | |
problems.NoServiceExceptions(start=first_service_day, | |
end=last_service_day) | |
def ValidateServiceRangeAndExceptions(self, problems, today, | |
service_gap_interval): | |
if today is None: | |
today = datetime.date.today() | |
(start_date, end_date) = self.GetDateRange() | |
if not end_date or not start_date: | |
problems.OtherProblem('This feed has no effective service dates!', | |
type=problems_module.TYPE_WARNING) | |
else: | |
try: | |
last_service_day = datetime.datetime( | |
*(time.strptime(end_date, "%Y%m%d")[0:6])).date() | |
first_service_day = datetime.datetime( | |
*(time.strptime(start_date, "%Y%m%d")[0:6])).date() | |
except ValueError: | |
# Format of start_date and end_date checked in class ServicePeriod | |
pass | |
else: | |
self.ValidateServiceExceptions(problems, | |
first_service_day, | |
last_service_day) | |
self.ValidateFeedStartAndExpirationDates(problems, | |
first_service_day, | |
last_service_day, | |
today) | |
# We start checking for service gaps a bit in the past if the | |
# feed was active then. See | |
# http://code.google.com/p/googletransitdatafeed/issues/detail?id=188 | |
# | |
# We subtract 1 from service_gap_interval so that if today has | |
# service no warning is issued. | |
# | |
# Service gaps are searched for only up to one year from today | |
if service_gap_interval is not None: | |
service_gap_timedelta = datetime.timedelta( | |
days=service_gap_interval - 1) | |
one_year = datetime.timedelta(days=365) | |
self.ValidateServiceGaps( | |
problems, | |
max(first_service_day, | |
today - service_gap_timedelta), | |
min(last_service_day, | |
today + one_year), | |
service_gap_interval) | |
def ValidateStops(self, problems, validate_children): | |
# Check for stops that aren't referenced by any trips and broken | |
# parent_station references. Also check that the parent station isn't too | |
# far from its child stops. | |
for stop in self.stops.values(): | |
if validate_children: | |
stop.Validate(problems) | |
cursor = self._connection.cursor() | |
cursor.execute("SELECT count(*) FROM stop_times WHERE stop_id=? LIMIT 1", | |
(stop.stop_id,)) | |
count = cursor.fetchone()[0] | |
if stop.location_type == 0 and count == 0: | |
problems.UnusedStop(stop.stop_id, stop.stop_name) | |
elif stop.location_type == 1 and count != 0: | |
problems.UsedStation(stop.stop_id, stop.stop_name) | |
if stop.location_type != 1 and stop.parent_station: | |
if stop.parent_station not in self.stops: | |
problems.InvalidValue("parent_station", | |
util.EncodeUnicode(stop.parent_station), | |
"parent_station '%s' not found for stop_id " | |
"'%s' in stops.txt" % | |
(util.EncodeUnicode(stop.parent_station), | |
util.EncodeUnicode(stop.stop_id))) | |
elif self.stops[stop.parent_station].location_type != 1: | |
problems.InvalidValue("parent_station", | |
util.EncodeUnicode(stop.parent_station), | |
"parent_station '%s' of stop_id '%s' must " | |
"have location_type=1 in stops.txt" % | |
(util.EncodeUnicode(stop.parent_station), | |
util.EncodeUnicode(stop.stop_id))) | |
else: | |
parent_station = self.stops[stop.parent_station] | |
distance = util.ApproximateDistanceBetweenStops(stop, parent_station) | |
if distance > problems_module.MAX_DISTANCE_BETWEEN_STOP_AND_PARENT_STATION_ERROR: | |
problems.StopTooFarFromParentStation( | |
stop.stop_id, stop.stop_name, parent_station.stop_id, | |
parent_station.stop_name, distance, problems_module.TYPE_ERROR) | |
elif distance > problems_module.MAX_DISTANCE_BETWEEN_STOP_AND_PARENT_STATION_WARNING: | |
problems.StopTooFarFromParentStation( | |
stop.stop_id, stop.stop_name, parent_station.stop_id, | |
parent_station.stop_name, distance, | |
problems_module.TYPE_WARNING) | |
def ValidateNearbyStops(self, problems): | |
# Check for stops that might represent the same location (specifically,
# stops that are less than 2 meters apart). First filter out stops without
# a valid lat and lon. Then sort by latitude and find the distance between
# each pair of stops within 2 meters latitude of each other. This avoids
# doing n^2 comparisons in the average case and doesn't need a spatial
# index.
sorted_stops = filter(lambda s: s.stop_lat and s.stop_lon, | |
self.GetStopList()) | |
sorted_stops.sort(key=(lambda x: x.stop_lat)) | |
TWO_METERS_LAT = 0.000018 | |
for index, stop in enumerate(sorted_stops[:-1]): | |
index += 1 | |
while ((index < len(sorted_stops)) and | |
((sorted_stops[index].stop_lat - stop.stop_lat) < TWO_METERS_LAT)): | |
distance = util.ApproximateDistanceBetweenStops(stop, | |
sorted_stops[index]) | |
if distance < 2: | |
other_stop = sorted_stops[index] | |
if stop.location_type == 0 and other_stop.location_type == 0: | |
problems.StopsTooClose( | |
util.EncodeUnicode(stop.stop_name), | |
util.EncodeUnicode(stop.stop_id), | |
util.EncodeUnicode(other_stop.stop_name), | |
util.EncodeUnicode(other_stop.stop_id), distance) | |
elif stop.location_type == 1 and other_stop.location_type == 1: | |
problems.StationsTooClose( | |
util.EncodeUnicode(stop.stop_name), | |
util.EncodeUnicode(stop.stop_id), | |
util.EncodeUnicode(other_stop.stop_name), | |
util.EncodeUnicode(other_stop.stop_id), distance) | |
elif (stop.location_type in (0, 1) and | |
other_stop.location_type in (0, 1)): | |
if stop.location_type == 0 and other_stop.location_type == 1: | |
this_stop = stop | |
this_station = other_stop | |
elif stop.location_type == 1 and other_stop.location_type == 0: | |
this_stop = other_stop | |
this_station = stop | |
if this_stop.parent_station != this_station.stop_id: | |
problems.DifferentStationTooClose( | |
util.EncodeUnicode(this_stop.stop_name), | |
util.EncodeUnicode(this_stop.stop_id), | |
util.EncodeUnicode(this_station.stop_name), | |
util.EncodeUnicode(this_station.stop_id), distance) | |
index += 1 | |
def ValidateRouteNames(self, problems, validate_children): | |
# Check for multiple routes using same short + long name | |
route_names = {} | |
for route in self.routes.values(): | |
if validate_children: | |
route.Validate(problems) | |
short_name = '' | |
if not util.IsEmpty(route.route_short_name): | |
short_name = route.route_short_name.lower().strip() | |
long_name = '' | |
if not util.IsEmpty(route.route_long_name): | |
long_name = route.route_long_name.lower().strip() | |
name = (short_name, long_name) | |
if name in route_names: | |
problems.InvalidValue('route_long_name', | |
long_name, | |
'The same combination of ' | |
'route_short_name and route_long_name ' | |
'shouldn\'t be used for more than one ' | |
                              'route, as it is for the two routes ' | |
'with IDs "%s" and "%s".' % | |
(route.route_id, route_names[name].route_id), | |
type=problems_module.TYPE_WARNING) | |
else: | |
route_names[name] = route | |
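The duplicate-name check boils down to normalizing each (route_short_name, route_long_name) pair and recording the first route seen for each key. A sketch of that pattern with hypothetical (route_id, short_name, long_name) tuples standing in for Route objects:

```python
def duplicate_route_names(routes):
    """Return (first_route_id, duplicate_route_id) pairs for routes that
    share the same normalized (short_name, long_name) combination.

    routes: iterable of (route_id, short_name, long_name); names may be None.
    """
    seen = {}
    duplicates = []
    for route_id, short_name, long_name in routes:
        # Normalize the same way the validator does: lowercase and strip,
        # treating missing names as empty strings.
        key = ((short_name or '').lower().strip(),
               (long_name or '').lower().strip())
        if key in seen:
            duplicates.append((seen[key], route_id))
        else:
            seen[key] = route_id
    return duplicates
```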
def ValidateTrips(self, problems): | |
stop_types = {} # a dict mapping stop_id to [route_id, route_type, is_match] | |
    trips = {} # a dict mapping (service_id, start_time, stop_ids) to (route_id, trip_id) | |
# a dict mapping block_id to a list of tuple of | |
# (trip_id, first_arrival_secs, last_arrival_secs) | |
trip_intervals_by_block_id = defaultdict(lambda: []) | |
for trip in sorted(self.trips.values()): | |
if trip.route_id not in self.routes: | |
continue | |
route_type = self.GetRoute(trip.route_id).route_type | |
stop_ids = [] | |
stop_times = trip.GetStopTimes(problems) | |
for index, st in enumerate(stop_times): | |
stop_id = st.stop.stop_id | |
stop_ids.append(stop_id) | |
        # Check for a stop that belongs to both a subway and a bus route. | |
if (route_type == self._gtfs_factory.Route._ROUTE_TYPE_NAMES['Subway'] or | |
route_type == self._gtfs_factory.Route._ROUTE_TYPE_NAMES['Bus']): | |
if stop_id not in stop_types: | |
stop_types[stop_id] = [trip.route_id, route_type, 0] | |
elif (stop_types[stop_id][1] != route_type and | |
stop_types[stop_id][2] == 0): | |
stop_types[stop_id][2] = 1 | |
if stop_types[stop_id][1] == \ | |
self._gtfs_factory.Route._ROUTE_TYPE_NAMES['Subway']: | |
subway_route_id = stop_types[stop_id][0] | |
bus_route_id = trip.route_id | |
else: | |
subway_route_id = trip.route_id | |
bus_route_id = stop_types[stop_id][0] | |
problems.StopWithMultipleRouteTypes(st.stop.stop_name, stop_id, | |
subway_route_id, bus_route_id) | |
# We only care about trips with a block id | |
if not util.IsEmpty(trip.block_id) and stop_times: | |
first_arrival_secs = stop_times[0].arrival_secs | |
last_departure_secs = stop_times[-1].departure_secs | |
# The arrival and departure time of the first and last stop_time | |
# SHOULD be set, but we need to handle the case where we're given | |
# an invalid feed anyway | |
if first_arrival_secs is not None and last_departure_secs is not None: | |
# Create a trip interval tuple of the trip id and arrival time | |
# intervals | |
key = trip.block_id | |
trip_intervals = trip_intervals_by_block_id[key] | |
trip_interval = (trip, first_arrival_secs, last_departure_secs) | |
trip_intervals.append(trip_interval) | |
# Check duplicate trips which go through the same stops with same | |
# service and start times. | |
if self._check_duplicate_trips: | |
if not stop_ids or not stop_times: | |
continue | |
key = (trip.service_id, stop_times[0].arrival_time, str(stop_ids)) | |
if key not in trips: | |
trips[key] = (trip.route_id, trip.trip_id) | |
else: | |
problems.DuplicateTrip(trips[key][1], trips[key][0], trip.trip_id, | |
trip.route_id) | |
    # Now that we've generated our block trip intervals, we can check for | |
# overlaps in the intervals | |
self.ValidateBlocks(problems, trip_intervals_by_block_id) | |
def ValidateBlocks(self, problems, trip_intervals_by_block_id): | |
# Expects trip_intervals_by_block_id to be a dict with a key of block ids | |
# and a value of lists of tuples | |
# (trip, min_arrival_secs, max_departure_secs) | |
# Cache potentially expensive ServicePeriod overlap checks | |
service_period_overlap_cache = {} | |
for (block_id,trip_intervals) in trip_intervals_by_block_id.items(): | |
# Sort trip intervals by min arrival time | |
trip_intervals.sort(key=(lambda x: x[1])) | |
for xi in range(len(trip_intervals)): | |
trip_interval_a = trip_intervals[xi] | |
trip_a = trip_interval_a[0] | |
for xj in range(xi+1,len(trip_intervals)): | |
trip_interval_b = trip_intervals[xj] | |
trip_b = trip_interval_b[0] | |
# If the last departure of trip interval A is less than or equal | |
# to the first arrival of trip interval B, stop checking | |
if trip_interval_a[2] <= trip_interval_b[1]: | |
break | |
# We have an overlap between the times in two trip intervals in | |
# the same block. Potentially a problem... | |
# If they have the same service id, the trips run on the same | |
# day, yet have overlapping stop times. Definitely a problem. | |
if trip_a.service_id == trip_b.service_id: | |
problems.OverlappingTripsInSameBlock(trip_a.trip_id, | |
trip_b.trip_id, block_id) | |
else: | |
            # Even if the trips don't have the same service_id, their | |
# service dates might still overlap. Since the ServicePeriod | |
# overlap check is potentially expensive, we cache the | |
# computation | |
service_id_pair_key = tuple(sorted([trip_a.service_id, | |
trip_b.service_id])) | |
            # If the service_id_pair_key is not in the cache, we do the | |
# full service period comparison | |
if service_id_pair_key not in service_period_overlap_cache: | |
service_period_a = self.GetServicePeriod(trip_a.service_id) | |
service_period_b = self.GetServicePeriod(trip_b.service_id) | |
dates_a = service_period_a.ActiveDates() | |
dates_b = service_period_b.ActiveDates() | |
overlap = False | |
for date in dates_a: | |
if date in dates_b: | |
overlap = True | |
break | |
service_period_overlap_cache[service_id_pair_key] = overlap | |
if service_period_overlap_cache[service_id_pair_key]: | |
problems.OverlappingTripsInSameBlock(trip_a.trip_id, | |
trip_b.trip_id, | |
block_id) | |
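The interval-overlap check above relies on sorting by start time and breaking out of the inner loop as soon as intervals can no longer overlap. A minimal standalone sketch of that pattern, with hypothetical (trip_id, start_secs, end_secs) tuples instead of trip objects:

```python
def find_overlaps(intervals):
    """Return pairs of ids whose time intervals overlap.

    intervals: list of (trip_id, start_secs, end_secs) tuples.
    """
    intervals = sorted(intervals, key=lambda x: x[1])  # sort by start time
    overlaps = []
    for i in range(len(intervals)):
        for j in range(i + 1, len(intervals)):
            # Once trip j starts at or after trip i ends, no later trip
            # (sorted by start time) can overlap trip i either.
            if intervals[i][2] <= intervals[j][1]:
                break
            overlaps.append((intervals[i][0], intervals[j][0]))
    return overlaps
```

The early break is what keeps the check cheap for large blocks: each interval is only compared against the intervals it can actually overlap.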
def ValidateRouteAgencyId(self, problems): | |
# Check that routes' agency IDs are valid, if set | |
for route in self.routes.values(): | |
if (not util.IsEmpty(route.agency_id) and | |
not route.agency_id in self._agencies): | |
problems.InvalidValue('agency_id', | |
route.agency_id, | |
'The route with ID "%s" specifies agency_id ' | |
'"%s", which doesn\'t exist.' % | |
(route.route_id, route.agency_id)) | |
def ValidateTripStopTimes(self, problems): | |
# Make sure all trips have stop_times | |
# We're doing this here instead of in Trip.Validate() so that | |
# Trips can be validated without error during the reading of trips.txt | |
for trip in self.trips.values(): | |
trip.ValidateChildren(problems) | |
count_stop_times = trip.GetCountStopTimes() | |
if not count_stop_times: | |
problems.OtherProblem('The trip with the trip_id "%s" doesn\'t have ' | |
'any stop times defined.' % trip.trip_id, | |
type=problems_module.TYPE_WARNING) | |
if len(trip._headways) > 0: # no stoptimes, but there are headways | |
problems.OtherProblem('Frequencies defined, but no stop times given ' | |
'in trip %s' % trip.trip_id, | |
type=problems_module.TYPE_ERROR) | |
elif count_stop_times == 1: | |
problems.OtherProblem('The trip with the trip_id "%s" only has one ' | |
'stop on it; it should have at least one more ' | |
'stop so that the riders can leave!' % | |
trip.trip_id, type=problems_module.TYPE_WARNING) | |
else: | |
# These methods report InvalidValue if there's no first or last time | |
trip.GetStartTime(problems=problems) | |
trip.GetEndTime(problems=problems) | |
def ValidateUnusedShapes(self, problems): | |
# Check for unused shapes | |
known_shape_ids = set(self._shapes.keys()) | |
used_shape_ids = set() | |
for trip in self.GetTripList(): | |
used_shape_ids.add(trip.shape_id) | |
unused_shape_ids = known_shape_ids - used_shape_ids | |
if unused_shape_ids: | |
problems.OtherProblem('The shapes with the following shape_ids aren\'t ' | |
'used by any trips: %s' % | |
', '.join(unused_shape_ids), | |
type=problems_module.TYPE_WARNING) | |
def Validate(self, | |
problems=None, | |
validate_children=True, | |
today=None, | |
service_gap_interval=None): | |
"""Validates various holistic aspects of the schedule | |
(mostly interrelationships between the various data sets).""" | |
if not problems: | |
problems = self.problem_reporter | |
self.ValidateServiceRangeAndExceptions(problems, today, | |
service_gap_interval) | |
# TODO: Check Trip fields against valid values | |
self.ValidateStops(problems, validate_children) | |
#TODO: check that every station is used. | |
# Then uncomment testStationWithoutReference. | |
self.ValidateNearbyStops(problems) | |
self.ValidateRouteNames(problems, validate_children) | |
self.ValidateTrips(problems) | |
self.ValidateRouteAgencyId(problems) | |
self.ValidateTripStopTimes(problems) | |
self.ValidateUnusedShapes(problems) | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
import datetime | |
import re | |
import time | |
import problems as problems_module | |
import util | |
class ServicePeriod(object): | |
"""Represents a service, which identifies a set of dates when one or more | |
trips operate.""" | |
_DAYS_OF_WEEK = [ | |
'monday', 'tuesday', 'wednesday', 'thursday', 'friday', | |
'saturday', 'sunday' | |
] | |
_FIELD_NAMES_REQUIRED = [ | |
'service_id', 'start_date', 'end_date' | |
] + _DAYS_OF_WEEK | |
_FIELD_NAMES = _FIELD_NAMES_REQUIRED # no optional fields in this one | |
_FIELD_NAMES_CALENDAR_DATES = ['service_id', 'date', 'exception_type'] | |
def __init__(self, id=None, field_list=None): | |
self.original_day_values = [] | |
if field_list: | |
self.service_id = field_list[self._FIELD_NAMES.index('service_id')] | |
self.day_of_week = [False] * len(self._DAYS_OF_WEEK) | |
for day in self._DAYS_OF_WEEK: | |
value = field_list[self._FIELD_NAMES.index(day)] or '' # can be None | |
self.original_day_values += [value.strip()] | |
self.day_of_week[self._DAYS_OF_WEEK.index(day)] = (value == u'1') | |
self.start_date = field_list[self._FIELD_NAMES.index('start_date')] | |
self.end_date = field_list[self._FIELD_NAMES.index('end_date')] | |
else: | |
self.service_id = id | |
self.day_of_week = [False] * 7 | |
self.start_date = None | |
self.end_date = None | |
self.date_exceptions = {} # Map from 'YYYYMMDD' to 1 (add) or 2 (remove) | |
def _IsValidDate(self, date): | |
    if re.match(r'^\d{8}$', date) is None: | |
return False | |
try: | |
time.strptime(date, "%Y%m%d") | |
return True | |
except ValueError: | |
return False | |
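The `_IsValidDate` helper combines a cheap regular-expression shape check with `time.strptime`, which rejects impossible dates such as February 29 in a non-leap year. A standalone equivalent:

```python
import re
import time

def is_valid_date(date):
    """Accept only 8-digit "YYYYMMDD" strings that parse as real dates,
    mirroring ServicePeriod._IsValidDate."""
    # Shape check first: exactly eight digits, nothing else.
    if re.match(r'^\d{8}$', date) is None:
        return False
    try:
        # strptime enforces real calendar dates (month/day ranges, leap years).
        time.strptime(date, "%Y%m%d")
        return True
    except ValueError:
        return False
```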
def HasExceptions(self): | |
"""Checks if the ServicePeriod has service exceptions.""" | |
if self.date_exceptions: | |
return True | |
else: | |
return False | |
def GetDateRange(self): | |
"""Return the range over which this ServicePeriod is valid. | |
The range includes exception dates that add service outside of | |
(start_date, end_date), but doesn't shrink the range if exception | |
dates take away service at the edges of the range. | |
Returns: | |
A tuple of "YYYYMMDD" strings, (start date, end date) or (None, None) if | |
no dates have been given. | |
""" | |
start = self.start_date | |
end = self.end_date | |
for date in self.date_exceptions: | |
if self.date_exceptions[date] == 2: | |
continue | |
if not start or (date < start): | |
start = date | |
if not end or (date > end): | |
end = date | |
if start is None: | |
start = end | |
elif end is None: | |
end = start | |
# If start and end are None we did a little harmless shuffling | |
return (start, end) | |
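The range logic above can be isolated: type-1 (added) exception dates may widen the (start, end) window, while type-2 (removed) dates never shrink it. A sketch over plain "YYYYMMDD" strings and a plain dict:

```python
def date_range(start, end, exceptions):
    """Expand a (start, end) "YYYYMMDD" range to cover added service dates.

    exceptions maps "YYYYMMDD" -> 1 (service added) or 2 (service removed).
    String comparison works because "YYYYMMDD" sorts chronologically.
    """
    for date, exception_type in exceptions.items():
        if exception_type == 2:
            continue  # removals never shrink the range
        if not start or date < start:
            start = date
        if not end or date > end:
            end = date
    # If only one end is set, collapse the range to that single date.
    if start is None:
        start = end
    elif end is None:
        end = start
    return (start, end)
```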
def GetCalendarFieldValuesTuple(self): | |
"""Return the tuple of calendar.txt values or None if this ServicePeriod | |
should not be in calendar.txt .""" | |
if self.start_date and self.end_date: | |
return [getattr(self, fn) for fn in self._FIELD_NAMES] | |
def GenerateCalendarDatesFieldValuesTuples(self): | |
"""Generates tuples of calendar_dates.txt values. Yield zero tuples if | |
this ServicePeriod should not be in calendar_dates.txt .""" | |
for date, exception_type in self.date_exceptions.items(): | |
yield (self.service_id, date, unicode(exception_type)) | |
def GetCalendarDatesFieldValuesTuples(self): | |
"""Return a list of date execeptions""" | |
result = [] | |
for date_tuple in self.GenerateCalendarDatesFieldValuesTuples(): | |
result.append(date_tuple) | |
result.sort() # helps with __eq__ | |
return result | |
def SetDateHasService(self, date, has_service=True, problems=None): | |
if date in self.date_exceptions and problems: | |
problems.DuplicateID(('service_id', 'date'), | |
(self.service_id, date), | |
type=problems_module.TYPE_WARNING) | |
self.date_exceptions[date] = has_service and 1 or 2 | |
def ResetDateToNormalService(self, date): | |
if date in self.date_exceptions: | |
del self.date_exceptions[date] | |
def SetStartDate(self, start_date): | |
"""Set the first day of service as a string in YYYYMMDD format""" | |
self.start_date = start_date | |
def SetEndDate(self, end_date): | |
"""Set the last day of service as a string in YYYYMMDD format""" | |
self.end_date = end_date | |
def SetDayOfWeekHasService(self, dow, has_service=True): | |
"""Set service as running (or not) on a day of the week. By default the | |
service does not run on any days. | |
Args: | |
dow: 0 for Monday through 6 for Sunday | |
has_service: True if this service operates on dow, False if it does not. | |
Returns: | |
None | |
""" | |
assert(dow >= 0 and dow < 7) | |
self.day_of_week[dow] = has_service | |
def SetWeekdayService(self, has_service=True): | |
"""Set service as running (or not) on all of Monday through Friday.""" | |
for i in range(0, 5): | |
self.SetDayOfWeekHasService(i, has_service) | |
def SetWeekendService(self, has_service=True): | |
"""Set service as running (or not) on Saturday and Sunday.""" | |
self.SetDayOfWeekHasService(5, has_service) | |
self.SetDayOfWeekHasService(6, has_service) | |
def SetServiceId(self, service_id): | |
"""Set the service_id for this schedule. Generally the default will | |
suffice so you won't need to call this method.""" | |
self.service_id = service_id | |
def IsActiveOn(self, date, date_object=None): | |
"""Test if this service period is active on a date. | |
Args: | |
date: a string of form "YYYYMMDD" | |
date_object: a date object representing the same date as date. | |
This parameter is optional, and present only for performance | |
reasons. | |
      If the caller constructs the date string from a date object, | |
      that date object can be passed directly, avoiding the costly | |
      conversion from string to date object. | |
Returns: | |
True iff this service is active on date. | |
""" | |
if date in self.date_exceptions: | |
if self.date_exceptions[date] == 1: | |
return True | |
else: | |
return False | |
if (self.start_date and self.end_date and self.start_date <= date and | |
date <= self.end_date): | |
if date_object is None: | |
date_object = util.DateStringToDateObject(date) | |
return self.day_of_week[date_object.weekday()] | |
return False | |
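The precedence in `IsActiveOn` is worth spelling out: a calendar_dates exception always wins; otherwise the weekly pattern applies only inside the [start_date, end_date] window. A standalone sketch (note that "YYYYMMDD" strings compare correctly as plain strings):

```python
import datetime

def is_active_on(date, day_of_week, start, end, exceptions):
    """Mirror of the IsActiveOn logic.

    date, start, end: "YYYYMMDD" strings.
    day_of_week: list of 7 booleans, Monday first.
    exceptions: maps "YYYYMMDD" -> 1 (added) or 2 (removed).
    """
    # Exceptions take precedence over the weekly pattern.
    if date in exceptions:
        return exceptions[date] == 1
    # Inside the service window, the weekly pattern decides.
    if start and end and start <= date <= end:
        weekday = datetime.datetime.strptime(date, "%Y%m%d").weekday()
        return day_of_week[weekday]
    return False
```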
def ActiveDates(self): | |
"""Return dates this service period is active as a list of "YYYYMMDD".""" | |
(earliest, latest) = self.GetDateRange() | |
if earliest is None: | |
return [] | |
dates = [] | |
date_it = util.DateStringToDateObject(earliest) | |
date_end = util.DateStringToDateObject(latest) | |
delta = datetime.timedelta(days=1) | |
while date_it <= date_end: | |
date_it_string = date_it.strftime("%Y%m%d") | |
if self.IsActiveOn(date_it_string, date_it): | |
dates.append(date_it_string) | |
date_it = date_it + delta | |
return dates | |
def __getattr__(self, name): | |
try: | |
# Return 1 if value in day_of_week is True, 0 otherwise | |
return (self.day_of_week[self._DAYS_OF_WEEK.index(name)] | |
and 1 or 0) | |
except KeyError: | |
pass | |
except ValueError: # not a day of the week | |
pass | |
raise AttributeError(name) | |
def __getitem__(self, name): | |
return getattr(self, name) | |
def __eq__(self, other): | |
if not other: | |
return False | |
if id(self) == id(other): | |
return True | |
if (self.GetCalendarFieldValuesTuple() != | |
other.GetCalendarFieldValuesTuple()): | |
return False | |
if (self.GetCalendarDatesFieldValuesTuples() != | |
other.GetCalendarDatesFieldValuesTuples()): | |
return False | |
return True | |
def __ne__(self, other): | |
return not self.__eq__(other) | |
def ValidateServiceId(self, problems): | |
if util.IsEmpty(self.service_id): | |
problems.MissingValue('service_id') | |
def ValidateStartDate(self, problems): | |
if self.start_date is not None: | |
if util.IsEmpty(self.start_date): | |
problems.MissingValue('start_date') | |
self.start_date = None | |
elif not self._IsValidDate(self.start_date): | |
problems.InvalidValue('start_date', self.start_date) | |
self.start_date = None | |
def ValidateEndDate(self, problems): | |
if self.end_date is not None: | |
if util.IsEmpty(self.end_date): | |
problems.MissingValue('end_date') | |
self.end_date = None | |
elif not self._IsValidDate(self.end_date): | |
problems.InvalidValue('end_date', self.end_date) | |
self.end_date = None | |
def ValidateEndDateAfterStartDate(self, problems): | |
if self.start_date and self.end_date and self.end_date < self.start_date: | |
problems.InvalidValue('end_date', self.end_date, | |
'end_date of %s is earlier than ' | |
'start_date of "%s"' % | |
(self.end_date, self.start_date)) | |
def ValidateDaysOfWeek(self, problems): | |
if self.original_day_values: | |
index = 0 | |
for value in self.original_day_values: | |
column_name = self._DAYS_OF_WEEK[index] | |
if util.IsEmpty(value): | |
problems.MissingValue(column_name) | |
        elif value not in (u'0', u'1'): | |
problems.InvalidValue(column_name, value) | |
index += 1 | |
def ValidateHasServiceAtLeastOnceAWeek(self, problems): | |
if (True not in self.day_of_week and | |
1 not in self.date_exceptions.values()): | |
problems.OtherProblem('Service period with service_id "%s" ' | |
'doesn\'t have service on any days ' | |
'of the week.' % self.service_id, | |
type=problems_module.TYPE_WARNING) | |
def ValidateDates(self, problems): | |
for date in self.date_exceptions: | |
if not self._IsValidDate(date): | |
problems.InvalidValue('date', date) | |
def Validate(self, problems=problems_module.default_problem_reporter): | |
self.ValidateServiceId(problems) | |
# self.start_date/self.end_date is None in 3 cases: | |
# ServicePeriod created by loader and | |
# 1a) self.service_id wasn't in calendar.txt | |
# 1b) calendar.txt didn't have a start_date/end_date column | |
# ServicePeriod created directly and | |
# 2) start_date/end_date wasn't set | |
# In case 1a no problem is reported. In case 1b the missing required column | |
# generates an error in _ReadCSV so this method should not report another | |
# problem. There is no way to tell the difference between cases 1b and 2 | |
# so case 2 is ignored because making the feedvalidator pretty is more | |
    # important than perfect validation when an API user makes a mistake. | |
self.ValidateStartDate(problems) | |
self.ValidateEndDate(problems) | |
self.ValidateEndDateAfterStartDate(problems) | |
self.ValidateDaysOfWeek(problems) | |
self.ValidateHasServiceAtLeastOnceAWeek(problems) | |
self.ValidateDates(problems) | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
import bisect | |
from gtfsfactoryuser import GtfsFactoryUser | |
import problems as problems_module | |
import util | |
class Shape(GtfsFactoryUser): | |
"""This class represents a geographic shape that corresponds to the route | |
taken by one or more Trips.""" | |
_REQUIRED_FIELD_NAMES = ['shape_id', 'shape_pt_lat', 'shape_pt_lon', | |
'shape_pt_sequence'] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + ['shape_dist_traveled'] | |
def __init__(self, shape_id): | |
    # List of shape point tuples (lat, lng, shape_dist_traveled), where lat | |
    # and lng give the location of the shape point, and shape_dist_traveled | |
    # is an increasing metric representing the distance traveled along the | |
    # shape. | |
self.points = [] | |
# An ID that uniquely identifies a shape in the dataset. | |
self.shape_id = shape_id | |
# The max shape_dist_traveled of shape points in this shape. | |
self.max_distance = 0 | |
# List of shape_dist_traveled of each shape point. | |
self.distance = [] | |
# List of shape_pt_sequence of each shape point. | |
self.sequence = [] | |
def AddPoint(self, lat, lon, distance=None, | |
problems=problems_module.default_problem_reporter): | |
shapepoint_class = self.GetGtfsFactory().ShapePoint | |
shapepoint = shapepoint_class( | |
self.shape_id, lat, lon, len(self.sequence), distance) | |
if shapepoint.ParseAttributes(problems): | |
self.AddShapePointObjectUnsorted(shapepoint, problems) | |
def AddShapePointObjectUnsorted(self, shapepoint, problems): | |
"""Insert a point into a correct position by sequence. """ | |
if (len(self.sequence) == 0 or | |
shapepoint.shape_pt_sequence >= self.sequence[-1]): | |
index = len(self.sequence) | |
elif shapepoint.shape_pt_sequence <= self.sequence[0]: | |
index = 0 | |
else: | |
index = bisect.bisect(self.sequence, shapepoint.shape_pt_sequence) | |
if shapepoint.shape_pt_sequence in self.sequence: | |
problems.InvalidValue('shape_pt_sequence', shapepoint.shape_pt_sequence, | |
'The sequence number %d occurs more than once in ' | |
'shape %s.' % | |
(shapepoint.shape_pt_sequence, self.shape_id)) | |
if shapepoint.shape_dist_traveled is not None and len(self.sequence) > 0: | |
if (index != len(self.sequence) and | |
shapepoint.shape_dist_traveled > self.distance[index]): | |
problems.InvalidValue('shape_dist_traveled', | |
shapepoint.shape_dist_traveled, | |
                              'Each point in a shape should have a distance ' | |
                              'value no larger than that of the following ' | |
                              'point. In this case, the next distance ' | |
                              'was %f.' % self.distance[index]) | |
if (index > 0 and | |
shapepoint.shape_dist_traveled < self.distance[index - 1]): | |
problems.InvalidValue('shape_dist_traveled', | |
shapepoint.shape_dist_traveled, | |
'Each subsequent point in a shape should have ' | |
'a distance value that\'s at least as large as ' | |
'the previous ones. In this case, the previous ' | |
'distance was %f.' % self.distance[index - 1]) | |
if shapepoint.shape_dist_traveled > self.max_distance: | |
self.max_distance = shapepoint.shape_dist_traveled | |
self.sequence.insert(index, shapepoint.shape_pt_sequence) | |
self.distance.insert(index, shapepoint.shape_dist_traveled) | |
self.points.insert(index, (shapepoint.shape_pt_lat, | |
shapepoint.shape_pt_lon, | |
shapepoint.shape_dist_traveled)) | |
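The insertion above keeps three parallel lists ordered by shape_pt_sequence, using `bisect` only when a point arrives out of order (duplicate-sequence reporting omitted here). The pattern in isolation, with a single values list standing in for the parallel distance and point lists:

```python
import bisect

def insert_by_sequence(sequence, values, seq, value):
    """Insert value into values, keeping both lists ordered by sequence
    number, using the same bisect pattern as AddShapePointObjectUnsorted.
    Returns the insertion index.
    """
    if not sequence or seq >= sequence[-1]:
        index = len(sequence)  # common case: points arrive in order
    elif seq <= sequence[0]:
        index = 0
    else:
        index = bisect.bisect(sequence, seq)
    sequence.insert(index, seq)
    values.insert(index, value)
    return index
```

The fast paths (append at the end, prepend at the start) avoid the binary search entirely for feeds whose shape points are already sorted.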
def ClearPoints(self): | |
self.points = [] | |
def __eq__(self, other): | |
if not other: | |
return False | |
if id(self) == id(other): | |
return True | |
return self.points == other.points | |
def __ne__(self, other): | |
return not self.__eq__(other) | |
def __repr__(self): | |
return "<Shape %s>" % self.__dict__ | |
def ValidateShapeId(self, problems): | |
if util.IsEmpty(self.shape_id): | |
problems.MissingValue('shape_id') | |
def ValidateShapePoints(self, problems): | |
if not self.points: | |
problems.OtherProblem('The shape with shape_id "%s" contains no points.' % | |
self.shape_id, type=problems_module.TYPE_WARNING) | |
def Validate(self, problems=problems_module.default_problem_reporter): | |
self.ValidateShapeId(problems) | |
self.ValidateShapePoints(problems) | |
def GetPointWithDistanceTraveled(self, shape_dist_traveled): | |
"""Returns a point on the shape polyline with the input shape_dist_traveled. | |
Args: | |
shape_dist_traveled: The input shape_dist_traveled. | |
Returns: | |
      The shape point as a tuple (lat, lng, shape_dist_traveled), where lat | |
      and lng give the location of the shape point, and shape_dist_traveled | |
      is an increasing metric representing the distance traveled along the | |
      shape. Returns None if there is a data error in the shape. | |
""" | |
if not self.distance: | |
return None | |
if shape_dist_traveled <= self.distance[0]: | |
return self.points[0] | |
if shape_dist_traveled >= self.distance[-1]: | |
return self.points[-1] | |
index = bisect.bisect(self.distance, shape_dist_traveled) | |
(lat0, lng0, dist0) = self.points[index - 1] | |
(lat1, lng1, dist1) = self.points[index] | |
# Interpolate if shape_dist_traveled does not equal to any of the point | |
# in shape segment. | |
# (lat0, lng0) (lat, lng) (lat1, lng1) | |
# -----|--------------------|---------------------|------ | |
# dist0 shape_dist_traveled dist1 | |
# \------- ca --------/ \-------- bc -------/ | |
# \----------------- ba ------------------/ | |
ca = shape_dist_traveled - dist0 | |
bc = dist1 - shape_dist_traveled | |
ba = bc + ca | |
if ba == 0: | |
      # This only happens when there's a data error in the shape and should | |
      # have been caught earlier. Check anyway to avoid a crash. | |
return None | |
# This won't work crossing longitude 180 and is only an approximation which | |
    # works well for short distances. | |
lat = (lat1 * ca + lat0 * bc) / ba | |
lng = (lng1 * ca + lng0 * bc) / ba | |
return (lat, lng, shape_dist_traveled) | |
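The interpolation above is a weighted average of the two bracketing points, with each point weighted by the distance to the opposite endpoint. A flat-coordinate sketch, using separate `points` and cumulative `distances` lists rather than the combined tuples stored by `Shape`:

```python
import bisect

def point_at_distance(points, distances, d):
    """Linearly interpolate a (lat, lng) position at cumulative distance d,
    mirroring GetPointWithDistanceTraveled. Returns (lat, lng, d) or None.
    """
    if not distances:
        return None
    # Clamp to the ends of the polyline.
    if d <= distances[0]:
        return points[0] + (distances[0],)
    if d >= distances[-1]:
        return points[-1] + (distances[-1],)
    index = bisect.bisect(distances, d)
    (lat0, lng0), dist0 = points[index - 1], distances[index - 1]
    (lat1, lng1), dist1 = points[index], distances[index]
    span = dist1 - dist0
    if span == 0:
        return None  # guard mirrors the ba == 0 check above
    w = (d - dist0) / span
    return (lat0 + (lat1 - lat0) * w, lng0 + (lng1 - lng0) * w, d)
```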
#!/usr/bin/python2.4 | |
# | |
# Copyright 2007 Google Inc. All Rights Reserved. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
"""A library for manipulating points and polylines. | |
This is a library for creating and manipulating points on the unit | |
sphere, as an approximate model of Earth. The primary use of this | |
library is to make manipulation and matching of polylines easy in the | |
transitfeed library. | |
NOTE: in this library, Earth is modelled as a sphere, whereas | |
GTFS specifies that latitudes and longitudes are in WGS84. For the | |
purpose of comparing and matching latitudes and longitudes that | |
are relatively close together on the surface of the earth, this | |
is adequate; for other purposes, this library may not be accurate | |
enough. | |
""" | |
__author__ = 'chris.harrelson.code@gmail.com (Chris Harrelson)' | |
import copy | |
import decimal | |
import heapq | |
import math | |
class ShapeError(Exception): | |
"""Thrown whenever there is a shape parsing error.""" | |
pass | |
EARTH_RADIUS_METERS = 6371010.0 | |
class Point(object): | |
""" | |
A class representing a point on the unit sphere in three dimensions. | |
""" | |
def __init__(self, x, y, z): | |
self.x = x | |
self.y = y | |
self.z = z | |
def __hash__(self): | |
return hash((self.x, self.y, self.z)) | |
def __cmp__(self, other): | |
if not isinstance(other, Point): | |
raise TypeError('Point.__cmp__(x,y) requires y to be a "Point", ' | |
'not a "%s"' % type(other).__name__) | |
return cmp((self.x, self.y, self.z), (other.x, other.y, other.z)) | |
def __str__(self): | |
return "(%.15f, %.15f, %.15f) " % (self.x, self.y, self.z) | |
def Norm2(self): | |
""" | |
Returns the L_2 (Euclidean) norm of self. | |
""" | |
sum = self.x * self.x + self.y * self.y + self.z * self.z | |
return math.sqrt(float(sum)) | |
def IsUnitLength(self): | |
return abs(self.Norm2() - 1.0) < 1e-14 | |
def Plus(self, other): | |
""" | |
Returns a new point which is the pointwise sum of self and other. | |
""" | |
return Point(self.x + other.x, | |
self.y + other.y, | |
self.z + other.z) | |
def Minus(self, other): | |
""" | |
Returns a new point which is the pointwise subtraction of other from | |
self. | |
""" | |
return Point(self.x - other.x, | |
self.y - other.y, | |
self.z - other.z) | |
def DotProd(self, other): | |
""" | |
Returns the (scalar) dot product of self with other. | |
""" | |
return self.x * other.x + self.y * other.y + self.z * other.z | |
def Times(self, val): | |
""" | |
Returns a new point which is pointwise multiplied by val. | |
""" | |
return Point(self.x * val, self.y * val, self.z * val) | |
def Normalize(self): | |
""" | |
Returns a unit point in the same direction as self. | |
""" | |
return self.Times(1 / self.Norm2()) | |
def RobustCrossProd(self, other): | |
""" | |
A robust version of cross product. If self and other | |
are not nearly the same point, returns the same value | |
as CrossProd() modulo normalization. Otherwise returns | |
an arbitrary unit point orthogonal to self. | |
""" | |
assert(self.IsUnitLength() and other.IsUnitLength()) | |
x = self.Plus(other).CrossProd(other.Minus(self)) | |
if abs(x.x) > 1e-15 or abs(x.y) > 1e-15 or abs(x.z) > 1e-15: | |
return x.Normalize() | |
else: | |
return self.Ortho() | |
def LargestComponent(self): | |
""" | |
Returns (i, val) where i is the component index (0 - 2) | |
which has largest absolute value and val is the value | |
of the component. | |
""" | |
if abs(self.x) > abs(self.y): | |
if abs(self.x) > abs(self.z): | |
return (0, self.x) | |
else: | |
return (2, self.z) | |
else: | |
if abs(self.y) > abs(self.z): | |
return (1, self.y) | |
else: | |
return (2, self.z) | |
def Ortho(self): | |
"""Returns a unit-length point orthogonal to this point""" | |
(index, val) = self.LargestComponent() | |
index = index - 1 | |
if index < 0: | |
index = 2 | |
temp = Point(0.012, 0.053, 0.00457) | |
if index == 0: | |
temp.x = 1 | |
elif index == 1: | |
temp.y = 1 | |
elif index == 2: | |
temp.z = 1 | |
return self.CrossProd(temp).Normalize() | |
def CrossProd(self, other): | |
""" | |
Returns the cross product of self and other. | |
""" | |
return Point( | |
self.y * other.z - self.z * other.y, | |
self.z * other.x - self.x * other.z, | |
self.x * other.y - self.y * other.x) | |
@staticmethod | |
def _approxEq(a, b): | |
return abs(a - b) < 1e-11 | |
def Equals(self, other): | |
""" | |
    Returns true if self and other are approximately equal. | |
""" | |
return (self._approxEq(self.x, other.x) | |
and self._approxEq(self.y, other.y) | |
and self._approxEq(self.z, other.z)) | |
def Angle(self, other): | |
""" | |
Returns the angle in radians between self and other. | |
""" | |
return math.atan2(self.CrossProd(other).Norm2(), | |
self.DotProd(other)) | |
def ToLatLng(self): | |
""" | |
    Returns the latitude and longitude that this point represents | |
under a spherical Earth model. | |
""" | |
rad_lat = math.atan2(self.z, math.sqrt(self.x * self.x + self.y * self.y)) | |
rad_lng = math.atan2(self.y, self.x) | |
return (rad_lat * 180.0 / math.pi, rad_lng * 180.0 / math.pi) | |
@staticmethod | |
def FromLatLng(lat, lng): | |
""" | |
Returns a new point representing this latitude and longitude under | |
a spherical Earth model. | |
""" | |
phi = lat * (math.pi / 180.0) | |
theta = lng * (math.pi / 180.0) | |
cosphi = math.cos(phi) | |
return Point(math.cos(theta) * cosphi, | |
math.sin(theta) * cosphi, | |
math.sin(phi)) | |
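`FromLatLng` and `ToLatLng` are inverse conversions between degrees and a unit vector on the sphere. A standalone round-trip sketch using plain tuples instead of the `Point` class:

```python
import math

def from_lat_lng(lat, lng):
    """Unit vector on the sphere for a latitude/longitude in degrees."""
    phi, theta = math.radians(lat), math.radians(lng)
    return (math.cos(theta) * math.cos(phi),
            math.sin(theta) * math.cos(phi),
            math.sin(phi))

def to_lat_lng(p):
    """Inverse conversion back to (lat, lng) in degrees."""
    x, y, z = p
    lat = math.degrees(math.atan2(z, math.hypot(x, y)))
    lng = math.degrees(math.atan2(y, x))
    return (lat, lng)
```

Using `atan2(z, hypot(x, y))` for the latitude (rather than `asin(z)`) stays accurate even when floating-point error pushes the vector slightly off unit length.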
def GetDistanceMeters(self, other): | |
assert(self.IsUnitLength() and other.IsUnitLength()) | |
return self.Angle(other) * EARTH_RADIUS_METERS | |
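`GetDistanceMeters` computes the central angle via `atan2(|a x b|, a . b)`, which is numerically stable for both very small and near-antipodal angles, and then scales by the Earth radius. Condensed into a single function over latitude/longitude degrees:

```python
import math

EARTH_RADIUS_METERS = 6371010.0

def distance_meters(lat1, lng1, lat2, lng2):
    """Great-circle distance via the angle between two unit vectors,
    matching Angle() * EARTH_RADIUS_METERS above."""
    def unit(lat, lng):
        phi, theta = math.radians(lat), math.radians(lng)
        return (math.cos(theta) * math.cos(phi),
                math.sin(theta) * math.cos(phi),
                math.sin(phi))
    ax, ay, az = unit(lat1, lng1)
    bx, by, bz = unit(lat2, lng2)
    dot = ax * bx + ay * by + az * bz
    # Cross product magnitude gives sin of the angle, dot gives cos.
    cx, cy, cz = (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)
    cross_norm = math.sqrt(cx * cx + cy * cy + cz * cz)
    return math.atan2(cross_norm, dot) * EARTH_RADIUS_METERS
```

One degree of longitude at the equator comes out near 111.2 km, a handy sanity check for the chosen Earth radius.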
def SimpleCCW(a, b, c): | |
""" | |
Returns true if the triangle abc is oriented counterclockwise. | |
""" | |
return c.CrossProd(a).DotProd(b) > 0 | |
def GetClosestPoint(x, a, b): | |
""" | |
Returns the point on the great circle segment ab closest to x. | |
""" | |
assert(x.IsUnitLength()) | |
assert(a.IsUnitLength()) | |
assert(b.IsUnitLength()) | |
a_cross_b = a.RobustCrossProd(b) | |
# project to the great circle going through a and b | |
p = x.Minus( | |
a_cross_b.Times( | |
x.DotProd(a_cross_b) / a_cross_b.Norm2())) | |
# if p lies between a and b, return it | |
if SimpleCCW(a_cross_b, a, p) and SimpleCCW(p, b, a_cross_b): | |
return p.Normalize() | |
# otherwise return the closer of a or b | |
if x.Minus(a).Norm2() <= x.Minus(b).Norm2(): | |
return a | |
else: | |
return b | |
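The project-then-orientation-test logic of GetClosestPoint can be exercised standalone with plain tuples. This sketch mirrors GetClosestPoint and SimpleCCW (it uses a plain cross product rather than RobustCrossProd, which is fine away from degenerate a == b cases):

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def normalize(v):
    n = math.sqrt(dot(v, v))
    return (v[0] / n, v[1] / n, v[2] / n)

def simple_ccw(a, b, c):
    # Mirrors SimpleCCW: true iff the triangle abc winds counterclockwise.
    return dot(cross(c, a), b) > 0

def closest_point_on_segment(x, a, b):
    # n = a x b is normal to the great circle through a and b.
    n = cross(a, b)
    # Project x onto the plane of that great circle.
    scale = dot(x, n) / dot(n, n)
    p = (x[0] - scale * n[0], x[1] - scale * n[1], x[2] - scale * n[2])
    # Both orientation tests passing means p lies between a and b.
    if simple_ccw(n, a, p) and simple_ccw(p, b, n):
        return normalize(p)
    # Otherwise the nearer endpoint wins (compare squared chord lengths).
    da = sum((xc - ac) ** 2 for xc, ac in zip(x, a))
    db = sum((xc - bc) ** 2 for xc, bc in zip(x, b))
    return a if da <= db else b
```

For a point north of the equator at longitude 45, the closest point on the equatorial segment from longitude 0 to longitude 90 is directly south of it, at longitude 45.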
class Poly(object): | |
""" | |
A class representing a polyline. | |
""" | |
def __init__(self, points = [], name=None): | |
self._points = list(points) | |
self._name = name | |
def AddPoint(self, p): | |
""" | |
Adds a new point to the end of the polyline. | |
""" | |
assert(p.IsUnitLength()) | |
self._points.append(p) | |
def GetName(self): | |
return self._name | |
def GetPoint(self, i): | |
return self._points[i] | |
def GetPoints(self): | |
return self._points | |
def GetNumPoints(self): | |
return len(self._points) | |
def _GetPointSafe(self, i): | |
try: | |
return self.GetPoint(i) | |
except IndexError: | |
return None | |
def GetClosestPoint(self, p): | |
""" | |
Returns (closest_p, closest_i), where closest_p is the closest point | |
to p on the piecewise linear curve represented by the polyline, | |
and closest_i is the index of the point on the polyline just before | |
the polyline segment that contains closest_p. | |
""" | |
assert(len(self._points) > 0) | |
closest_point = self._points[0] | |
closest_i = 0 | |
for i in range(0, len(self._points) - 1): | |
(a, b) = (self._points[i], self._points[i+1]) | |
cur_closest_point = GetClosestPoint(p, a, b) | |
if p.Angle(cur_closest_point) < p.Angle(closest_point): | |
closest_point = cur_closest_point.Normalize() | |
closest_i = i | |
return (closest_point, closest_i) | |
def LengthMeters(self): | |
"""Return length of this polyline in meters.""" | |
assert(len(self._points) > 0) | |
length = 0 | |
for i in range(0, len(self._points) - 1): | |
length += self._points[i].GetDistanceMeters(self._points[i+1]) | |
return length | |
def Reversed(self): | |
"""Return a polyline that is the reverse of this polyline.""" | |
return Poly(reversed(self.GetPoints()), self.GetName()) | |
def CutAtClosestPoint(self, p): | |
""" | |
Let x be the point on the polyline closest to p. Then | |
CutAtClosestPoint returns two new polylines, one representing | |
the polyline from the beginning up to x, and one representing | |
x onwards to the end of the polyline. x is the first point | |
returned in the second polyline. | |
""" | |
(closest, i) = self.GetClosestPoint(p) | |
tmp = [closest] | |
tmp.extend(self._points[i+1:]) | |
return (Poly(self._points[0:i+1]), | |
Poly(tmp)) | |
def GreedyPolyMatchDist(self, shape): | |
""" | |
Tries a greedy matching algorithm to match self to the | |
given shape. Returns the maximum distance in meters of | |
any point in self to its matched point in shape under the | |
algorithm. | |
Args: shape, a Poly object. | |
""" | |
tmp_shape = Poly(shape.GetPoints()) | |
max_radius = 0 | |
for (i, point) in enumerate(self._points): | |
tmp_shape = tmp_shape.CutAtClosestPoint(point)[1] | |
dist = tmp_shape.GetPoint(0).GetDistanceMeters(point) | |
max_radius = max(max_radius, dist) | |
return max_radius | |
@staticmethod | |
def MergePolys(polys, merge_point_threshold=10): | |
""" | |
Merge multiple polylines, in the order that they were passed in. | |
The merged polyline will have the names of its component parts joined by ';'. | |
Example: merging [a,b], [c,d] and [e,f] will result in [a,b,c,d,e,f]. | |
However if the endpoints of two adjacent polylines are less than | |
merge_point_threshold meters apart, we will only use the first endpoint in | |
the merged polyline. | |
""" | |
name = ";".join((p.GetName(), '')[p.GetName() is None] for p in polys) | |
merged = Poly([], name) | |
if polys: | |
first_poly = polys[0] | |
for p in first_poly.GetPoints(): | |
merged.AddPoint(p) | |
last_point = merged._GetPointSafe(-1) | |
for poly in polys[1:]: | |
first_point = poly._GetPointSafe(0) | |
if (last_point and first_point and | |
last_point.GetDistanceMeters(first_point) <= merge_point_threshold): | |
points = poly.GetPoints()[1:] | |
else: | |
points = poly.GetPoints() | |
for p in points: | |
merged.AddPoint(p) | |
last_point = merged._GetPointSafe(-1) | |
return merged | |
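The endpoint-deduplication rule in MergePolys can be illustrated with a planar stand-in, using Euclidean rather than great-circle distances (a sketch of the idea, not the class's API):

```python
def merge_polylines(polys, merge_threshold=10.0):
    """Concatenate lists of (x, y) points, dropping a duplicated join point.

    When the last point of the running result and the first point of the
    next polyline are within merge_threshold (meters, in this planar
    stand-in), only the first of the pair is kept.
    """
    merged = []
    for poly in polys:
        points = poly
        if merged and poly:
            dx = merged[-1][0] - poly[0][0]
            dy = merged[-1][1] - poly[0][1]
            if (dx * dx + dy * dy) ** 0.5 <= merge_threshold:
                points = poly[1:]  # drop the near-duplicate join point
        merged.extend(points)
    return merged
```

Two segments whose shared endpoint differs by a few meters merge into one clean polyline without a doubled vertex.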
def __str__(self): | |
return self._ToString(str) | |
def ToLatLngString(self): | |
return self._ToString(lambda p: str(p.ToLatLng())) | |
def _ToString(self, pointToStringFn): | |
return "%s: %s" % (self.GetName() or "", | |
", ".join([pointToStringFn(p) for p in self._points])) | |
class PolyCollection(object): | |
""" | |
A class representing a collection of polylines. | |
""" | |
def __init__(self): | |
self._name_to_shape = {} | |
def AddPoly(self, poly, smart_duplicate_handling=True): | |
""" | |
Adds a new polyline to the collection. | |
""" | |
inserted_name = poly.GetName() | |
if poly.GetName() in self._name_to_shape: | |
if not smart_duplicate_handling: | |
raise ShapeError("Duplicate shape found: " + poly.GetName()) | |
print ("Warning: duplicate shape id being added to collection: " + | |
poly.GetName()) | |
if poly.GreedyPolyMatchDist(self._name_to_shape[poly.GetName()]) < 10: | |
print " (Skipping as it appears to be an exact duplicate)" | |
else: | |
print " (Adding new shape variant with uniquified name)" | |
inserted_name = "%s-%d" % (inserted_name, len(self._name_to_shape)) | |
self._name_to_shape[inserted_name] = poly | |
def NumPolys(self): | |
return len(self._name_to_shape) | |
def FindMatchingPolys(self, start_point, end_point, max_radius=150): | |
""" | |
Returns a list of polylines in the collection that have endpoints | |
within max_radius of the given start and end points. | |
""" | |
matches = [] | |
for shape in self._name_to_shape.itervalues(): | |
if start_point.GetDistanceMeters(shape.GetPoint(0)) < max_radius and \ | |
end_point.GetDistanceMeters(shape.GetPoint(-1)) < max_radius: | |
matches.append(shape) | |
return matches | |
class PolyGraph(PolyCollection): | |
""" | |
A class representing a graph where the edges are polylines. | |
""" | |
def __init__(self): | |
PolyCollection.__init__(self) | |
self._nodes = {} | |
def AddPoly(self, poly, smart_duplicate_handling=True): | |
PolyCollection.AddPoly(self, poly, smart_duplicate_handling) | |
start_point = poly.GetPoint(0) | |
end_point = poly.GetPoint(-1) | |
self._AddNodeWithEdge(start_point, poly) | |
self._AddNodeWithEdge(end_point, poly) | |
def _AddNodeWithEdge(self, point, edge): | |
if point in self._nodes: | |
self._nodes[point].add(edge) | |
else: | |
self._nodes[point] = set([edge]) | |
def ShortestPath(self, start, goal): | |
"""Uses the A* algorithm to find a shortest path between start and goal. | |
For more background see http://en.wikipedia.org/wiki/A-star_algorithm | |
Some definitions: | |
g(x): The actual shortest distance traveled from initial node to current | |
node. | |
h(x): The estimated (or "heuristic") distance from current node to goal. | |
We use the distance on Earth from node to goal as the heuristic. | |
This heuristic is both admissible and monotonic (see wikipedia for | |
more details). | |
f(x): The sum of g(x) and h(x), used to prioritize elements to look at. | |
Arguments: | |
start: Point that is in the graph, start point of the search. | |
goal: Point that is in the graph, end point for the search. | |
Returns: | |
A Poly object representing the shortest polyline through the graph from | |
start to goal, or None if no path found. | |
""" | |
assert start in self._nodes | |
assert goal in self._nodes | |
closed_set = set() # Set of nodes already evaluated. | |
open_heap = [(0, start)] # Nodes to visit, heapified by f(x). | |
open_set = set([start]) # Same as open_heap, but a set instead of a heap. | |
g_scores = { start: 0 } # Distance from start along optimal path | |
came_from = {} # Map to reconstruct optimal path once we're done. | |
while open_set: | |
(f_x, x) = heapq.heappop(open_heap) | |
open_set.remove(x) | |
if x == goal: | |
return self._ReconstructPath(came_from, goal) | |
closed_set.add(x) | |
edges = self._nodes[x] | |
for edge in edges: | |
if edge.GetPoint(0) == x: | |
y = edge.GetPoint(-1) | |
else: | |
y = edge.GetPoint(0) | |
if y in closed_set: | |
continue | |
tentative_g_score = g_scores[x] + edge.LengthMeters() | |
tentative_is_better = False | |
if y not in open_set: | |
h_y = y.GetDistanceMeters(goal) | |
f_y = tentative_g_score + h_y | |
open_set.add(y) | |
heapq.heappush(open_heap, (f_y, y)) | |
tentative_is_better = True | |
elif tentative_g_score < g_scores[y]: | |
tentative_is_better = True | |
if tentative_is_better: | |
came_from[y] = (x, edge) | |
g_scores[y] = tentative_g_score | |
return None | |
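The loop above is textbook A*. A compact standalone version over an adjacency-list graph (node labels instead of Points, with a caller-supplied heuristic; a sketch, not this class's method) looks like:

```python
import heapq

def a_star(edges, start, goal, h):
    # edges: {node: [(neighbor, edge_length), ...]} (directed)
    # h(node): admissible estimate of the remaining distance to goal
    closed = set()
    open_heap = [(h(start), start)]
    g = {start: 0.0}
    came_from = {}
    while open_heap:
        _, x = heapq.heappop(open_heap)
        if x == goal:
            # Walk came_from back to the start to reconstruct the path.
            path = [x]
            while x in came_from:
                x = came_from[x]
                path.append(x)
            return path[::-1]
        if x in closed:
            continue  # stale heap entry for an already-expanded node
        closed.add(x)
        for y, length in edges.get(x, []):
            tentative = g[x] + length
            if y not in g or tentative < g[y]:
                g[y] = tentative
                came_from[y] = x
                heapq.heappush(open_heap, (tentative + h(y), y))
    return None
```

With a zero heuristic this degrades gracefully to Dijkstra's algorithm, which is why the Earth-distance heuristic used by ShortestPath (never an overestimate) preserves correctness.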
def _ReconstructPath(self, came_from, current_node): | |
""" | |
Helper method for ShortestPath, to reconstruct path. | |
Arguments: | |
came_from: a dictionary mapping Point to (Point, Poly) tuples. | |
This dictionary keeps track of the previous neighbor to a node, and | |
the edge used to get from the previous neighbor to the node. | |
current_node: the current Point in the path. | |
Returns: | |
A Poly that represents the path through the graph from the start of the | |
search to current_node. | |
""" | |
if current_node in came_from: | |
(previous_node, previous_edge) = came_from[current_node] | |
if previous_edge.GetPoint(0) == current_node: | |
previous_edge = previous_edge.Reversed() | |
p = self._ReconstructPath(came_from, previous_node) | |
return Poly.MergePolys([p, previous_edge], merge_point_threshold=0) | |
else: | |
return Poly([], '') | |
def FindShortestMultiPointPath(self, points, max_radius=150, keep_best_n=10, | |
verbosity=0): | |
""" | |
Return a polyline, representing the shortest path through this graph that | |
has edge endpoints on each of a given list of points in sequence. We allow | |
fuzziness in matching of input points to points in this graph. | |
We limit ourselves to a view of the best keep_best_n paths at any time, as a | |
greedy optimization. | |
""" | |
assert len(points) > 1 | |
nearby_points = [] | |
paths_found = [] # A heap sorted by inverse path length. | |
for i, point in enumerate(points): | |
nearby = [p for p in self._nodes.iterkeys() | |
if p.GetDistanceMeters(point) < max_radius] | |
if verbosity >= 2: | |
print ("Nearby points for point %d %s: %s" | |
% (i + 1, | |
str(point.ToLatLng()), | |
", ".join([str(n.ToLatLng()) for n in nearby]))) | |
if nearby: | |
nearby_points.append(nearby) | |
else: | |
print "No nearby points found for point %s" % str(point.ToLatLng()) | |
return None | |
pathToStr = lambda start, end, path: (" Best path %s -> %s: %s" | |
% (str(start.ToLatLng()), | |
str(end.ToLatLng()), | |
path and path.GetName() or | |
"None")) | |
if verbosity >= 3: | |
print "Step 1" | |
step = 2 | |
start_points = nearby_points[0] | |
end_points = nearby_points[1] | |
for start in start_points: | |
for end in end_points: | |
path = self.ShortestPath(start, end) | |
if verbosity >= 3: | |
print pathToStr(start, end, path) | |
PolyGraph._AddPathToHeap(paths_found, path, keep_best_n) | |
for possible_points in nearby_points[2:]: | |
if verbosity >= 3: | |
print "\nStep %d" % step | |
step += 1 | |
new_paths_found = [] | |
start_end_paths = {} # cache of shortest paths between (start, end) pairs | |
for score, path in paths_found: | |
start = path.GetPoint(-1) | |
for end in possible_points: | |
if (start, end) in start_end_paths: | |
new_segment = start_end_paths[(start, end)] | |
else: | |
new_segment = self.ShortestPath(start, end) | |
if verbosity >= 3: | |
print pathToStr(start, end, new_segment) | |
start_end_paths[(start, end)] = new_segment | |
if new_segment: | |
new_path = Poly.MergePolys([path, new_segment], | |
merge_point_threshold=0) | |
PolyGraph._AddPathToHeap(new_paths_found, new_path, keep_best_n) | |
paths_found = new_paths_found | |
if paths_found: | |
best_score, best_path = max(paths_found) | |
return best_path | |
else: | |
return None | |
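Structurally, FindShortestMultiPointPath is a beam search: every kept partial path is extended by every candidate for the next waypoint, then the pool is pruned back to the best keep_best_n. A generic sketch of that shape (hypothetical names, not this class's API):

```python
def beam_search(stages, extend, score, beam_width):
    """Greedy beam search over a sequence of candidate sets.

    stages: list of candidate lists, one per waypoint.
    extend(path, candidate): a longer path, or None if unreachable.
    score(path): lower is better; used to prune to beam_width paths.
    """
    beams = [[c] for c in stages[0]]
    for candidates in stages[1:]:
        new_beams = []
        for path in beams:
            for c in candidates:
                ext = extend(path, c)
                if ext is not None:
                    new_beams.append(ext)
        # Keep only the beam_width best partial paths (the greedy bound).
        new_beams.sort(key=score)
        beams = new_beams[:beam_width]
    return beams[0] if beams else None
```

Even with beam_width=1 this finds the locally best continuation at each stage, which is the trade-off the docstring above describes: cheap, but not guaranteed globally optimal.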
@staticmethod | |
def _AddPathToHeap(heap, path, keep_best_n): | |
if path and path.GetNumPoints(): | |
new_item = (-path.LengthMeters(), path) | |
if new_item not in heap: | |
if len(heap) < keep_best_n: | |
heapq.heappush(heap, new_item) | |
else: | |
heapq.heapreplace(heap, new_item) | |
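One subtlety with keep-best-N heaps: heapq.heapreplace always evicts the current root, even when the new item is worse than everything already in the heap. heapq.heappushpop is the safer idiom, since it pops the worst of the heap plus the new item. A sketch of that variant (keeping the N smallest scores via negation, as above):

```python
import heapq

def add_to_best_n(heap, score, item, keep_best_n):
    # Min-heap of (-score, item): the root is always the current worst,
    # so popping the root discards the worst of the kept set.
    entry = (-score, item)
    if entry in heap:
        return
    if len(heap) < keep_best_n:
        heapq.heappush(heap, entry)
    else:
        # Push first, then pop the overall worst: the kept set never
        # degrades even when the new entry is the worst candidate.
        heapq.heappushpop(heap, entry)
```

After feeding in more than keep_best_n scores, the heap holds exactly the N smallest.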
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
from loader import Loader | |
class ShapeLoader(Loader): | |
"""A subclass of Loader that only loads the shapes from a GTFS file.""" | |
def __init__(self, *args, **kwargs): | |
"""Initialize a new ShapeLoader object. | |
See Loader.__init__ for argument documentation. | |
""" | |
Loader.__init__(self, *args, **kwargs) | |
def Load(self): | |
self._LoadShapes() | |
return self._schedule |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
import bisect | |
from gtfsobjectbase import GtfsObjectBase | |
import problems as problems_module | |
import util | |
import sys | |
class ShapePoint(GtfsObjectBase): | |
"""This class represents a single shape point. | |
Attributes: | |
shape_id: represents the shape_id of the point | |
shape_pt_lat: represents the latitude of the point | |
shape_pt_lon: represents the longitude of the point | |
shape_pt_sequence: represents the sequence of the point | |
shape_dist_traveled: represents the distance of the point | |
""" | |
_REQUIRED_FIELD_NAMES = ['shape_id', 'shape_pt_lat', 'shape_pt_lon', | |
'shape_pt_sequence'] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + ['shape_dist_traveled'] | |
def __init__(self, shape_id=None, lat=None, lon=None, seq=None, dist=None, | |
field_dict=None): | |
"""Initialize a new ShapePoint object. | |
Args: | |
field_dict: A dictionary mapping attribute name to unicode string | |
""" | |
self._schedule = None | |
if field_dict: | |
if isinstance(field_dict, self.__class__): | |
for k, v in field_dict.iteritems(): | |
self.__dict__[k] = v | |
else: | |
self.__dict__.update(field_dict) | |
else: | |
self.shape_id = shape_id | |
self.shape_pt_lat = lat | |
self.shape_pt_lon = lon | |
self.shape_pt_sequence = seq | |
self.shape_dist_traveled = dist | |
def ParseAttributes(self, problems): | |
"""Parse all attributes, calling problems as needed. | |
Return True if all of the values are valid. | |
""" | |
if util.IsEmpty(self.shape_id): | |
problems.MissingValue('shape_id') | |
return | |
try: | |
if not isinstance(self.shape_pt_sequence, int): | |
self.shape_pt_sequence = \ | |
util.NonNegIntStringToInt(self.shape_pt_sequence, problems) | |
elif self.shape_pt_sequence < 0: | |
problems.InvalidValue('shape_pt_sequence', self.shape_pt_sequence, | |
'Value should be a number (0 or higher)') | |
except (TypeError, ValueError): | |
problems.InvalidValue('shape_pt_sequence', self.shape_pt_sequence, | |
'Value should be a number (0 or higher)') | |
return | |
try: | |
if not isinstance(self.shape_pt_lat, (int, float)): | |
self.shape_pt_lat = util.FloatStringToFloat(self.shape_pt_lat, problems) | |
if abs(self.shape_pt_lat) > 90.0: | |
problems.InvalidValue('shape_pt_lat', self.shape_pt_lat) | |
return | |
except (TypeError, ValueError): | |
problems.InvalidValue('shape_pt_lat', self.shape_pt_lat) | |
return | |
try: | |
if not isinstance(self.shape_pt_lon, (int, float)): | |
self.shape_pt_lon = util.FloatStringToFloat(self.shape_pt_lon, problems) | |
if abs(self.shape_pt_lon) > 180.0: | |
problems.InvalidValue('shape_pt_lon', self.shape_pt_lon) | |
return | |
except (TypeError, ValueError): | |
problems.InvalidValue('shape_pt_lon', self.shape_pt_lon) | |
return | |
if abs(self.shape_pt_lat) < 1.0 and abs(self.shape_pt_lon) < 1.0: | |
problems.InvalidValue('shape_pt_lat', self.shape_pt_lat, | |
'Point location too close to 0, 0, which means ' | |
'that it\'s probably an incorrect location.', | |
type=problems_module.TYPE_WARNING) | |
return | |
if self.shape_dist_traveled == '': | |
self.shape_dist_traveled = None | |
if (self.shape_dist_traveled is not None and | |
not isinstance(self.shape_dist_traveled, (int, float))): | |
try: | |
self.shape_dist_traveled = \ | |
util.FloatStringToFloat(self.shape_dist_traveled, problems) | |
except (TypeError, ValueError): | |
problems.InvalidValue('shape_dist_traveled', self.shape_dist_traveled, | |
'This value should be a positive number.') | |
return | |
if self.shape_dist_traveled is not None and self.shape_dist_traveled < 0: | |
problems.InvalidValue('shape_dist_traveled', self.shape_dist_traveled, | |
'This value should be a positive number.') | |
return | |
return True | |
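The coordinate range checks in ParseAttributes reduce to a small amount of logic. A standalone sketch (a hypothetical helper, not part of this module, which raises instead of reporting through a problems object):

```python
def parse_lat_lng(lat_str, lng_str):
    """Parse latitude/longitude strings; raise ValueError when out of range.

    Returns (lat, lng, suspicious), where suspicious flags points close to
    (0, 0) -- almost always bad data rather than a stop in the Gulf of
    Guinea, which is why ParseAttributes emits a warning for them.
    """
    lat, lng = float(lat_str), float(lng_str)
    if abs(lat) > 90.0:
        raise ValueError('shape_pt_lat out of range: %r' % lat)
    if abs(lng) > 180.0:
        raise ValueError('shape_pt_lon out of range: %r' % lng)
    suspicious = abs(lat) < 1.0 and abs(lng) < 1.0
    return lat, lng, suspicious
```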
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
import warnings | |
from gtfsobjectbase import GtfsObjectBase | |
import problems as problems_module | |
import util | |
class Stop(GtfsObjectBase): | |
"""Represents a single stop. A stop must have a latitude, longitude and name. | |
Callers may assign arbitrary values to instance attributes. | |
Stop.ParseAttributes validates attributes according to GTFS and converts some | |
into native types. ParseAttributes may delete invalid attributes. | |
Accessing an attribute that is a column in GTFS will return None if this | |
object does not have a value or it is ''. | |
A Stop object acts like a dict with string values. | |
Attributes: | |
stop_lat: a float representing the latitude of the stop | |
stop_lon: a float representing the longitude of the stop | |
All other attributes are strings. | |
""" | |
_REQUIRED_FIELD_NAMES = ['stop_id', 'stop_name', 'stop_lat', 'stop_lon'] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + \ | |
['stop_desc', 'zone_id', 'stop_url', 'stop_code', | |
'location_type', 'parent_station'] | |
_TABLE_NAME = 'stops' | |
def __init__(self, lat=None, lng=None, name=None, stop_id=None, | |
field_dict=None, stop_code=None): | |
"""Initialize a new Stop object. | |
Args: | |
field_dict: A dictionary mapping attribute name to unicode string | |
lat: a float, ignored when field_dict is present | |
lng: a float, ignored when field_dict is present | |
name: a string, ignored when field_dict is present | |
stop_id: a string, ignored when field_dict is present | |
stop_code: a string, ignored when field_dict is present | |
""" | |
self._schedule = None | |
if field_dict: | |
if isinstance(field_dict, self.__class__): | |
# Special case so that we don't need to re-parse the attributes to | |
# native types. iteritems returns all attributes that don't start with _. | |
for k, v in field_dict.iteritems(): | |
self.__dict__[k] = v | |
else: | |
self.__dict__.update(field_dict) | |
else: | |
if lat is not None: | |
self.stop_lat = lat | |
if lng is not None: | |
self.stop_lon = lng | |
if name is not None: | |
self.stop_name = name | |
if stop_id is not None: | |
self.stop_id = stop_id | |
if stop_code is not None: | |
self.stop_code = stop_code | |
def GetTrips(self, schedule=None): | |
"""Return iterable containing trips that visit this stop.""" | |
return [trip for trip, ss in self._GetTripSequence(schedule)] | |
def _GetTripSequence(self, schedule=None): | |
"""Return a list of (trip, stop_sequence) for all trips visiting this stop. | |
A trip may appear in the list multiple times with different indices. | |
stop_sequence is an integer. | |
Args: | |
schedule: Deprecated, do not use. | |
""" | |
if schedule is None: | |
schedule = getattr(self, "_schedule", None) | |
if schedule is None: | |
warnings.warn("No longer supported. _schedule attribute is used to get " | |
"stop_times table", DeprecationWarning) | |
cursor = schedule._connection.cursor() | |
cursor.execute("SELECT trip_id,stop_sequence FROM stop_times " | |
"WHERE stop_id=?", | |
(self.stop_id, )) | |
return [(schedule.GetTrip(row[0]), row[1]) for row in cursor] | |
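The parameterized stop_times query can be tried against an in-memory sqlite3 database; the schema here is a minimal stand-in for the schedule's real table:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE stop_times (trip_id TEXT, stop_id TEXT, '
             'stop_sequence INTEGER)')
conn.executemany('INSERT INTO stop_times VALUES (?, ?, ?)',
                 [('t1', 's1', 1), ('t1', 's2', 2), ('t2', 's1', 5)])

def trip_sequence(conn, stop_id):
    # Same shape of query as _GetTripSequence: parameterized rather than
    # string-interpolated, so stop_ids containing quotes are handled safely.
    cur = conn.execute('SELECT trip_id, stop_sequence FROM stop_times '
                       'WHERE stop_id=?', (stop_id,))
    return cur.fetchall()
```

A stop visited by two trips yields one (trip_id, stop_sequence) row per visit.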
def _GetTripIndex(self, schedule=None): | |
"""Return a list of (trip, index). | |
trip: a Trip object | |
index: an offset in trip.GetStopTimes() | |
""" | |
trip_index = [] | |
for trip, sequence in self._GetTripSequence(schedule): | |
for index, st in enumerate(trip.GetStopTimes()): | |
if st.stop_sequence == sequence: | |
trip_index.append((trip, index)) | |
break | |
else: | |
raise RuntimeError("stop_sequence %d not found in trip_id %s" % | |
(sequence, trip.trip_id)) | |
return trip_index | |
def GetStopTimeTrips(self, schedule=None): | |
"""Return a list of (time, (trip, index), is_timepoint). | |
time: an integer. It might be interpolated. | |
trip: a Trip object. | |
index: the offset of this stop in trip.GetStopTimes(), which may be | |
different from the stop_sequence. | |
is_timepoint: a bool | |
""" | |
time_trips = [] | |
for trip, index in self._GetTripIndex(schedule): | |
secs, stoptime, is_timepoint = trip.GetTimeInterpolatedStops()[index] | |
time_trips.append((secs, (trip, index), is_timepoint)) | |
return time_trips | |
def __getattr__(self, name): | |
"""Return None or the default value if name is a known attribute. | |
This method is only called when name is not found in __dict__. | |
""" | |
if name == "location_type": | |
return 0 | |
elif name == "trip_index": | |
return self._GetTripIndex() | |
elif name in self._FIELD_NAMES: | |
return None | |
else: | |
raise AttributeError(name) | |
def ValidateStopLatitude(self, problems): | |
if self.stop_lat: | |
value = self.stop_lat | |
try: | |
if not isinstance(value, (float, int)): | |
self.stop_lat = util.FloatStringToFloat(value, problems) | |
except (ValueError, TypeError): | |
problems.InvalidValue('stop_lat', value) | |
del self.stop_lat | |
else: | |
if self.stop_lat > 90 or self.stop_lat < -90: | |
problems.InvalidValue('stop_lat', value) | |
def ValidateStopLongitude(self, problems): | |
if self.stop_lon: | |
value = self.stop_lon | |
try: | |
if not isinstance(value, (float, int)): | |
self.stop_lon = util.FloatStringToFloat(value, problems) | |
except (ValueError, TypeError): | |
problems.InvalidValue('stop_lon', value) | |
del self.stop_lon | |
else: | |
if self.stop_lon > 180 or self.stop_lon < -180: | |
problems.InvalidValue('stop_lon', value) | |
def ValidateStopUrl(self, problems): | |
value = self.stop_url | |
if value and not util.IsValidURL(value): | |
problems.InvalidValue('stop_url', value) | |
del self.stop_url | |
def ValidateStopLocationType(self, problems): | |
value = self.location_type | |
if value == '': | |
self.location_type = 0 | |
else: | |
try: | |
self.location_type = int(value) | |
except (ValueError, TypeError): | |
problems.InvalidValue('location_type', value) | |
del self.location_type | |
else: | |
if self.location_type not in (0, 1): | |
problems.InvalidValue('location_type', value, | |
type=problems_module.TYPE_WARNING) | |
def ValidateStopRequiredFields(self, problems): | |
for required in self._REQUIRED_FIELD_NAMES: | |
if util.IsEmpty(getattr(self, required, None)): | |
# TODO: For now I'm keeping the API stable but it would be cleaner to | |
# treat whitespace stop_id as invalid, instead of missing | |
problems.MissingValue(required) | |
def ValidateStopNotTooCloseToOrigin(self, problems): | |
if (self.stop_lat is not None and self.stop_lon is not None and | |
abs(self.stop_lat) < 1.0) and (abs(self.stop_lon) < 1.0): | |
problems.InvalidValue('stop_lat', self.stop_lat, | |
'Stop location too close to 0, 0', | |
type=problems_module.TYPE_WARNING) | |
def ValidateStopDescriptionAndNameAreDifferent(self, problems): | |
if (self.stop_desc is not None and self.stop_name is not None and | |
self.stop_desc and self.stop_name and | |
not util.IsEmpty(self.stop_desc) and | |
self.stop_name.strip().lower() == self.stop_desc.strip().lower()): | |
problems.InvalidValue('stop_desc', self.stop_desc, | |
'stop_desc should not be the same as stop_name') | |
def ValidateStopIsNotStationWithParent(self, problems): | |
if self.parent_station and self.location_type == 1: | |
problems.InvalidValue('parent_station', self.parent_station, | |
'Stop row with location_type=1 (a station) must ' | |
'not have a parent_station') | |
def ValidateBeforeAdd(self, problems): | |
# First check that all required fields are present because ParseAttributes | |
# may remove invalid attributes. | |
self.ValidateStopRequiredFields(problems) | |
# If the value is valid for the attribute, store it. If not, call problems | |
# and store a new value of the correct type, or None if the value couldn't | |
# be converted. | |
self.ValidateStopLatitude(problems) | |
self.ValidateStopLongitude(problems) | |
self.ValidateStopUrl(problems) | |
self.ValidateStopLocationType(problems) | |
# Check that this object is consistent with itself | |
self.ValidateStopNotTooCloseToOrigin(problems) | |
self.ValidateStopDescriptionAndNameAreDifferent(problems) | |
self.ValidateStopIsNotStationWithParent(problems) | |
# None of these checks are blocking | |
return True | |
def ValidateAfterAdd(self, problems): | |
return | |
def Validate(self, problems=problems_module.default_problem_reporter): | |
self.ValidateBeforeAdd(problems) | |
self.ValidateAfterAdd(problems) | |
def AddToSchedule(self, schedule, problems): | |
schedule.AddStopObject(self, problems) | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
import problems as problems_module | |
from stop import Stop | |
import util | |
class StopTime(object): | |
""" | |
Represents a single stop of a trip. StopTime contains most of the columns | |
from the stop_times.txt file. It does not contain trip_id, which is implied | |
by the Trip used to access it. | |
See the Google Transit Feed Specification for the semantic details. | |
stop: A Stop object | |
arrival_time: str in the form HH:MM:SS; readonly after __init__ | |
departure_time: str in the form HH:MM:SS; readonly after __init__ | |
arrival_secs: int number of seconds since midnight | |
departure_secs: int number of seconds since midnight | |
stop_headsign: str | |
pickup_type: int | |
drop_off_type: int | |
shape_dist_traveled: float | |
stop_id: str; readonly | |
stop_time: The only time given for this stop. If present, it is used | |
for both arrival and departure time. | |
stop_sequence: int | |
""" | |
_REQUIRED_FIELD_NAMES = ['trip_id', 'arrival_time', 'departure_time', | |
'stop_id', 'stop_sequence'] | |
_OPTIONAL_FIELD_NAMES = ['stop_headsign', 'pickup_type', | |
'drop_off_type', 'shape_dist_traveled'] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + _OPTIONAL_FIELD_NAMES | |
_SQL_FIELD_NAMES = ['trip_id', 'arrival_secs', 'departure_secs', | |
'stop_id', 'stop_sequence', 'stop_headsign', | |
'pickup_type', 'drop_off_type', 'shape_dist_traveled'] | |
_STOP_CLASS = Stop | |
__slots__ = ('arrival_secs', 'departure_secs', 'stop_headsign', 'stop', | |
'pickup_type', 'drop_off_type', | |
'shape_dist_traveled', 'stop_sequence') | |
def __init__(self, problems, stop, | |
arrival_time=None, departure_time=None, | |
stop_headsign=None, pickup_type=None, drop_off_type=None, | |
shape_dist_traveled=None, arrival_secs=None, | |
departure_secs=None, stop_time=None, stop_sequence=None): | |
# Implementation note from Andre, July 22, 2010: | |
# The checks performed here should be in their own Validate* methods to | |
# keep consistency. Unfortunately the performance degradation is too great, | |
# so the validation was left in __init__. | |
# Performance is also the reason why we don't use the GtfsFactory, but | |
# have StopTime._STOP_CLASS instead. If a Stop class that does not inherit | |
# from transitfeed.Stop is used, the extension should also provide a | |
# StopTime class that updates _STOP_CLASS accordingly. | |
# | |
# For more details see the discussion at | |
# http://codereview.appspot.com/1713041 | |
if stop_time != None: | |
arrival_time = departure_time = stop_time | |
if arrival_secs != None: | |
self.arrival_secs = arrival_secs | |
elif arrival_time in (None, ""): | |
self.arrival_secs = None # Untimed | |
arrival_time = None | |
else: | |
try: | |
self.arrival_secs = util.TimeToSecondsSinceMidnight(arrival_time) | |
except problems_module.Error: | |
problems.InvalidValue('arrival_time', arrival_time) | |
self.arrival_secs = None | |
if departure_secs != None: | |
self.departure_secs = departure_secs | |
elif departure_time in (None, ""): | |
self.departure_secs = None | |
departure_time = None | |
else: | |
try: | |
self.departure_secs = util.TimeToSecondsSinceMidnight(departure_time) | |
except problems_module.Error: | |
problems.InvalidValue('departure_time', departure_time) | |
self.departure_secs = None | |
if not isinstance(stop, self._STOP_CLASS): | |
# Not quite correct, but better than letting the problem propagate | |
problems.InvalidValue('stop', stop) | |
self.stop = stop | |
self.stop_headsign = stop_headsign | |
if pickup_type in (None, ""): | |
self.pickup_type = None | |
else: | |
try: | |
pickup_type = int(pickup_type) | |
except ValueError: | |
problems.InvalidValue('pickup_type', pickup_type) | |
else: | |
if pickup_type < 0 or pickup_type > 3: | |
problems.InvalidValue('pickup_type', pickup_type) | |
self.pickup_type = pickup_type | |
if drop_off_type in (None, ""): | |
self.drop_off_type = None | |
else: | |
try: | |
drop_off_type = int(drop_off_type) | |
except ValueError: | |
problems.InvalidValue('drop_off_type', drop_off_type) | |
else: | |
if drop_off_type < 0 or drop_off_type > 3: | |
problems.InvalidValue('drop_off_type', drop_off_type) | |
self.drop_off_type = drop_off_type | |
if (self.pickup_type == 1 and self.drop_off_type == 1 and | |
self.arrival_secs == None and self.departure_secs == None): | |
problems.OtherProblem('This stop time has a pickup_type and ' | |
'drop_off_type of 1, indicating that riders ' | |
'can\'t get on or off here. Since it doesn\'t ' | |
'define a timepoint either, this entry serves no ' | |
'purpose and should be excluded from the trip.', | |
type=problems_module.TYPE_WARNING) | |
if ((self.arrival_secs != None) and (self.departure_secs != None) and | |
(self.departure_secs < self.arrival_secs)): | |
problems.InvalidValue('departure_time', departure_time, | |
'The departure time at this stop (%s) is before ' | |
'the arrival time (%s). This is often caused by ' | |
'problems in the feed exporter\'s time conversion') | |
# If the caller passed a valid arrival time but didn't attempt to pass a
# departure time, complain
if (self.arrival_secs != None and | |
self.departure_secs == None and departure_time == None): | |
# self.departure_secs might be None because departure_time was invalid, | |
# so we need to check both | |
problems.MissingValue('departure_time', | |
'arrival_time and departure_time should either ' | |
'both be provided or both be left blank. ' | |
'It\'s OK to set them both to the same value.') | |
# If the caller passed a valid departure time but didn't attempt to pass an
# arrival time, complain
if (self.departure_secs != None and | |
self.arrival_secs == None and arrival_time == None): | |
problems.MissingValue('arrival_time', | |
'arrival_time and departure_time should either ' | |
'both be provided or both be left blank. ' | |
'It\'s OK to set them both to the same value.') | |
if shape_dist_traveled in (None, ""): | |
self.shape_dist_traveled = None | |
else: | |
try: | |
self.shape_dist_traveled = float(shape_dist_traveled) | |
except ValueError: | |
problems.InvalidValue('shape_dist_traveled', shape_dist_traveled) | |
if stop_sequence is not None: | |
self.stop_sequence = stop_sequence | |
def GetFieldValuesTuple(self, trip_id): | |
"""Return a tuple that outputs a row of _FIELD_NAMES to be written to a | |
GTFS file. | |
Arguments: | |
trip_id: The trip_id of the trip to which this StopTime corresponds. | |
It must be provided, as it is not stored in StopTime. | |
""" | |
result = [] | |
for fn in self._FIELD_NAMES: | |
if fn == 'trip_id': | |
result.append(trip_id) | |
else: | |
# Since we'll be writing to an output file, empty values should be
# output as an empty string
result.append(getattr(self, fn) or '')
return tuple(result) | |
def GetSqlValuesTuple(self, trip_id): | |
"""Return a tuple that outputs a row of _FIELD_NAMES to be written to a | |
SQLite database. | |
Arguments: | |
trip_id: The trip_id of the trip to which this StopTime corresponds. | |
It must be provided, as it is not stored in StopTime. | |
""" | |
result = [] | |
for fn in self._SQL_FIELD_NAMES: | |
if fn == 'trip_id': | |
result.append(trip_id) | |
else: | |
# Since we'll be writing to SQLite, empty values should be output as
# NULL (contrary to what happens in GetFieldValuesTuple)
result.append(getattr(self, fn)) | |
return tuple(result) | |
def GetTimeSecs(self): | |
"""Return the first of arrival_secs and departure_secs that is not None. | |
If both are None return None.""" | |
if self.arrival_secs != None: | |
return self.arrival_secs | |
elif self.departure_secs != None: | |
return self.departure_secs | |
else: | |
return None | |
def __getattr__(self, name): | |
if name == 'stop_id': | |
return self.stop.stop_id | |
elif name == 'arrival_time': | |
return (self.arrival_secs != None and | |
util.FormatSecondsSinceMidnight(self.arrival_secs) or '') | |
elif name == 'departure_time': | |
return (self.departure_secs != None and | |
util.FormatSecondsSinceMidnight(self.departure_secs) or '') | |
elif name == 'shape_dist_traveled': | |
return '' | |
raise AttributeError(name) | |
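The `__getattr__` above formats `arrival_secs`/`departure_secs` back into GTFS "HH:MM:SS" strings via `util.FormatSecondsSinceMidnight`, and `__init__` parses them with `util.TimeToSecondsSinceMidnight`. A minimal standalone sketch of that round trip (the real implementations live in util.py; these function names are illustrative, not the library's API):

```python
def time_to_seconds_since_midnight(value):
    # Parse a GTFS "HH:MM:SS" string into seconds since midnight.
    # GTFS allows hours past 23 for trips that run after midnight.
    hours, minutes, seconds = value.split(":")
    return int(hours) * 3600 + int(minutes) * 60 + int(seconds)

def format_seconds_since_midnight(secs):
    # Format seconds since midnight back into "HH:MM:SS"; hours are
    # not wrapped at 24, matching the GTFS convention.
    return "%02d:%02d:%02d" % (secs // 3600, (secs // 60) % 60, secs % 60)
```

Note that a time of `"25:30:00"` (1:30 a.m. on the service day after) survives the round trip, which is why the code stores raw seconds rather than a datetime.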
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
from gtfsobjectbase import GtfsObjectBase | |
import problems as problems_module | |
import util | |
class Transfer(GtfsObjectBase): | |
"""Represents a transfer in a schedule""" | |
_REQUIRED_FIELD_NAMES = ['from_stop_id', 'to_stop_id', 'transfer_type'] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + ['min_transfer_time'] | |
_TABLE_NAME = 'transfers' | |
_ID_COLUMNS = ['from_stop_id', 'to_stop_id'] | |
def __init__(self, schedule=None, from_stop_id=None, to_stop_id=None, transfer_type=None, | |
min_transfer_time=None, field_dict=None): | |
self._schedule = None | |
if field_dict: | |
self.__dict__.update(field_dict) | |
else: | |
self.from_stop_id = from_stop_id | |
self.to_stop_id = to_stop_id | |
self.transfer_type = transfer_type | |
self.min_transfer_time = min_transfer_time | |
if getattr(self, 'transfer_type', None) in ("", None): | |
# Use the default, recommended transfer type if the attribute is unset or blank
self.transfer_type = 0 | |
else: | |
try: | |
self.transfer_type = util.NonNegIntStringToInt(self.transfer_type) | |
except (TypeError, ValueError): | |
pass | |
if hasattr(self, 'min_transfer_time'): | |
try: | |
self.min_transfer_time = util.NonNegIntStringToInt(self.min_transfer_time) | |
except (TypeError, ValueError): | |
pass | |
else: | |
self.min_transfer_time = None | |
if schedule is not None: | |
# Note from Tom, Nov 25, 2009: Maybe calling __init__ with a schedule | |
# should output a DeprecationWarning. A schedule factory probably won't | |
# use it and other GenericGTFSObject subclasses don't support it. | |
schedule.AddTransferObject(self) | |
def ValidateFromStopIdIsPresent(self, problems): | |
if util.IsEmpty(self.from_stop_id): | |
problems.MissingValue('from_stop_id') | |
return False | |
return True | |
def ValidateToStopIdIsPresent(self, problems): | |
if util.IsEmpty(self.to_stop_id): | |
problems.MissingValue('to_stop_id') | |
return False | |
return True | |
def ValidateTransferType(self, problems): | |
if not util.IsEmpty(self.transfer_type): | |
if (not isinstance(self.transfer_type, int)) or \ | |
(self.transfer_type not in range(0, 4)): | |
problems.InvalidValue('transfer_type', self.transfer_type) | |
return False | |
return True | |
def ValidateMinimumTransferTime(self, problems): | |
if not util.IsEmpty(self.min_transfer_time): | |
if self.transfer_type != 2: | |
problems.MinimumTransferTimeSetWithInvalidTransferType( | |
self.transfer_type) | |
# If min_transfer_time is negative, equal to or bigger than 24h, issue | |
# an error. If smaller than 24h but bigger than 3h issue a warning. | |
# These errors are not blocking, and should not prevent the transfer | |
# from being added to the schedule. | |
if (isinstance(self.min_transfer_time, int)): | |
if self.min_transfer_time < 0: | |
problems.InvalidValue('min_transfer_time', self.min_transfer_time, | |
reason="This field cannot contain a negative " \ | |
"value.") | |
elif self.min_transfer_time >= 24*3600: | |
problems.InvalidValue('min_transfer_time', self.min_transfer_time, | |
reason="The value is very large for a " \ | |
"transfer time and most likely " \ | |
"indicates an error.") | |
elif self.min_transfer_time >= 3*3600: | |
problems.InvalidValue('min_transfer_time', self.min_transfer_time, | |
type=problems_module.TYPE_WARNING, | |
reason="The value is large for a transfer " \ | |
"time and most likely indicates " \ | |
"an error.") | |
else: | |
# It has a value, but it is not an integer | |
problems.InvalidValue('min_transfer_time', self.min_transfer_time, | |
reason="If present, this field should contain " \ | |
"an integer value.") | |
return False | |
return True | |
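The banding in `ValidateMinimumTransferTime` above can be summarized compactly: negative values and values of a day or more are errors, three hours or more is a warning, anything else passes. A sketch of that decision logic (the function name and string labels are illustrative, not part of transitfeed):

```python
def classify_min_transfer_time(secs):
    # Mirror the thresholds used by ValidateMinimumTransferTime.
    if secs < 0:
        return "error"    # negative transfer times are never valid
    if secs >= 24 * 3600:
        return "error"    # a day or more almost certainly indicates bad data
    if secs >= 3 * 3600:
        return "warning"  # suspiciously large, but not impossible
    return "ok"
```

As the comment in the source notes, even the "error" cases are non-blocking: the transfer is still added to the schedule.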
def GetTransferDistance(self): | |
from_stop = self._schedule.stops[self.from_stop_id] | |
to_stop = self._schedule.stops[self.to_stop_id] | |
distance = util.ApproximateDistanceBetweenStops(from_stop, to_stop) | |
return distance | |
def ValidateFromStopIdIsValid(self, problems): | |
if self.from_stop_id not in self._schedule.stops.keys(): | |
problems.InvalidValue('from_stop_id', self.from_stop_id) | |
return False | |
return True | |
def ValidateToStopIdIsValid(self, problems): | |
if self.to_stop_id not in self._schedule.stops.keys(): | |
problems.InvalidValue('to_stop_id', self.to_stop_id) | |
return False | |
return True | |
def ValidateTransferDistance(self, problems): | |
distance = self.GetTransferDistance() | |
if distance > 10000: | |
problems.TransferDistanceTooBig(self.from_stop_id, | |
self.to_stop_id, | |
distance) | |
elif distance > 2000: | |
problems.TransferDistanceTooBig(self.from_stop_id, | |
self.to_stop_id, | |
distance, | |
type=problems_module.TYPE_WARNING) | |
def ValidateTransferWalkingTime(self, problems): | |
if util.IsEmpty(self.min_transfer_time): | |
return | |
if self.min_transfer_time < 0: | |
# Error has already been reported, and it does not make sense | |
# to calculate walking speed with negative times. | |
return | |
distance = self.GetTransferDistance() | |
# If min_transfer_time + 120s isn't enough for someone walking very fast | |
# (2m/s) then issue a warning. | |
# | |
# Stops that are close together (less than 240m apart) never trigger this
# warning, regardless of min_transfer_time.
FAST_WALKING_SPEED = 2  # 2m/s
if self.min_transfer_time + 120 < distance / FAST_WALKING_SPEED: | |
problems.TransferWalkingSpeedTooFast(from_stop_id=self.from_stop_id, | |
to_stop_id=self.to_stop_id, | |
transfer_time=self.min_transfer_time, | |
distance=distance) | |
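The plausibility check in `ValidateTransferWalkingTime` above reduces to a single inequality: a fast walker at 2 m/s, granted a 120 s grace period, must be able to cover the stop-to-stop distance within `min_transfer_time`. A sketch of that heuristic in isolation (the function name is illustrative):

```python
FAST_WALKING_SPEED = 2.0  # meters per second, a brisk walking pace

def transfer_time_is_plausible(min_transfer_time_secs, distance_meters):
    # The transfer time plus a 120 s grace period must cover the distance
    # at FAST_WALKING_SPEED; otherwise the validator emits a warning.
    return min_transfer_time_secs + 120 >= distance_meters / FAST_WALKING_SPEED
```

Stops less than 240 m apart can never fail, since 240 / 2 = 120 s is already covered by the grace period alone, which is exactly the property the source comment calls out.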
def ValidateBeforeAdd(self, problems): | |
result = True | |
result = self.ValidateFromStopIdIsPresent(problems) and result | |
result = self.ValidateToStopIdIsPresent(problems) and result | |
result = self.ValidateTransferType(problems) and result | |
result = self.ValidateMinimumTransferTime(problems) and result | |
return result | |
def ValidateAfterAdd(self, problems): | |
valid_stop_ids = True | |
valid_stop_ids = self.ValidateFromStopIdIsValid(problems) and valid_stop_ids | |
valid_stop_ids = self.ValidateToStopIdIsValid(problems) and valid_stop_ids | |
# We need both stop IDs to be valid to able to validate their distance and | |
# the walking time between them | |
if valid_stop_ids: | |
self.ValidateTransferDistance(problems) | |
self.ValidateTransferWalkingTime(problems) | |
def Validate(self, | |
problems=problems_module.default_problem_reporter): | |
if self.ValidateBeforeAdd(problems) and self._schedule: | |
self.ValidateAfterAdd(problems) | |
def _ID(self): | |
return tuple(self[i] for i in self._ID_COLUMNS) | |
def AddToSchedule(self, schedule, problems): | |
schedule.AddTransferObject(self, problems) | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
import warnings | |
from gtfsobjectbase import GtfsObjectBase | |
import problems as problems_module | |
import util | |
class Trip(GtfsObjectBase): | |
_REQUIRED_FIELD_NAMES = ['route_id', 'service_id', 'trip_id'] | |
_FIELD_NAMES = _REQUIRED_FIELD_NAMES + [ | |
'trip_headsign', 'direction_id', 'block_id', 'shape_id' | |
] | |
_TABLE_NAME = "trips"
def __init__(self, headsign=None, service_period=None, | |
route=None, trip_id=None, field_dict=None): | |
self._schedule = None | |
self._headways = [] # [(start_time, end_time, headway_secs)] | |
if not field_dict: | |
field_dict = {} | |
if headsign is not None: | |
field_dict['trip_headsign'] = headsign | |
if route: | |
field_dict['route_id'] = route.route_id | |
if trip_id is not None: | |
field_dict['trip_id'] = trip_id | |
if service_period is not None: | |
field_dict['service_id'] = service_period.service_id | |
# Earlier versions of transitfeed.py assigned self.service_period here | |
# and allowed the caller to set self.service_id. Schedule.Validate | |
# checked the service_id attribute if it was assigned and changed it to a | |
# service_period attribute. Now only the service_id attribute is used and | |
# it is validated by Trip.Validate. | |
if service_period is not None: | |
# For backwards compatibility | |
self.service_id = service_period.service_id | |
self.__dict__.update(field_dict) | |
def GetFieldValuesTuple(self): | |
return [getattr(self, fn) or '' for fn in self._FIELD_NAMES] | |
def AddStopTime(self, stop, problems=None, schedule=None, **kwargs): | |
"""Add a stop to this trip. Stops must be added in the order visited. | |
Args: | |
stop: A Stop object | |
kwargs: remaining keyword args passed to StopTime.__init__ | |
Returns: | |
None | |
""" | |
if problems is None: | |
# TODO: delete this branch when StopTime.__init__ doesn't need a | |
# ProblemReporter | |
problems = problems_module.default_problem_reporter | |
stoptime = self.GetGtfsFactory().StopTime( | |
problems=problems, stop=stop, **kwargs) | |
self.AddStopTimeObject(stoptime, schedule) | |
def _AddStopTimeObjectUnordered(self, stoptime, schedule): | |
"""Add StopTime object to this trip. | |
The trip isn't checked for duplicate sequence numbers so it must be | |
validated later.""" | |
stop_time_class = self.GetGtfsFactory().StopTime | |
cursor = schedule._connection.cursor() | |
insert_query = "INSERT INTO stop_times (%s) VALUES (%s);" % ( | |
','.join(stop_time_class._SQL_FIELD_NAMES), | |
','.join(['?'] * len(stop_time_class._SQL_FIELD_NAMES))) | |
cursor.execute( | |
insert_query, stoptime.GetSqlValuesTuple(self.trip_id)) | |
def ReplaceStopTimeObject(self, stoptime, schedule=None): | |
"""Replace a StopTime object from this trip with the given one.
The StopTime to be replaced is identified by matching the given stoptime's
trip_id, stop_sequence and stop_id.
"""
if schedule is None: | |
schedule = self._schedule | |
new_secs = stoptime.GetTimeSecs() | |
cursor = schedule._connection.cursor() | |
cursor.execute("DELETE FROM stop_times WHERE trip_id=? and " | |
"stop_sequence=? and stop_id=?", | |
(self.trip_id, stoptime.stop_sequence, stoptime.stop_id)) | |
if cursor.rowcount == 0: | |
raise problems_module.Error(
'Attempted replacement of StopTime object which does not exist')
self._AddStopTimeObjectUnordered(stoptime, schedule) | |
def AddStopTimeObject(self, stoptime, schedule=None, problems=None): | |
"""Add a StopTime object to the end of this trip. | |
Args: | |
stoptime: A StopTime object. Should not be reused in multiple trips. | |
schedule: Schedule object containing this trip which must be | |
passed to Trip.__init__ or here | |
problems: ProblemReporter object for validating the StopTime in its new | |
home | |
Returns: | |
None | |
""" | |
if schedule is None: | |
schedule = self._schedule | |
if schedule is None: | |
warnings.warn("No longer supported. _schedule attribute is used to get " | |
"stop_times table", DeprecationWarning) | |
if problems is None: | |
problems = schedule.problem_reporter | |
new_secs = stoptime.GetTimeSecs() | |
cursor = schedule._connection.cursor() | |
cursor.execute("SELECT max(stop_sequence), max(arrival_secs), " | |
"max(departure_secs) FROM stop_times WHERE trip_id=?", | |
(self.trip_id,)) | |
row = cursor.fetchone() | |
if row[0] is None: | |
# This is the first stop_time of the trip | |
stoptime.stop_sequence = 1 | |
if new_secs == None: | |
problems.OtherProblem( | |
'No time for first StopTime of trip_id "%s"' % (self.trip_id,)) | |
else: | |
stoptime.stop_sequence = row[0] + 1 | |
prev_secs = max(row[1], row[2]) | |
if new_secs != None and new_secs < prev_secs: | |
problems.OtherProblem( | |
'out of order stop time for stop_id=%s trip_id=%s %s < %s' % | |
(util.EncodeUnicode(stoptime.stop_id), | |
util.EncodeUnicode(self.trip_id), | |
util.FormatSecondsSinceMidnight(new_secs), | |
util.FormatSecondsSinceMidnight(prev_secs))) | |
self._AddStopTimeObjectUnordered(stoptime, schedule) | |
def GetTimeStops(self): | |
"""Return a list of (arrival_secs, departure_secs, stop) tuples. | |
Caution: arrival_secs and departure_secs may be 0, a false value meaning a
stop at midnight, or None, a false value meaning the stop is untimed."""
return [(st.arrival_secs, st.departure_secs, st.stop) for st in | |
self.GetStopTimes()] | |
def GetCountStopTimes(self): | |
"""Return the number of stops made by this trip.""" | |
cursor = self._schedule._connection.cursor() | |
cursor.execute( | |
'SELECT count(*) FROM stop_times WHERE trip_id=?', (self.trip_id,)) | |
return cursor.fetchone()[0] | |
def GetTimeInterpolatedStops(self): | |
"""Return a list of (secs, stoptime, is_timepoint) tuples. | |
secs will always be an int. If the StopTime object does not have explicit
times this method guesses using distance. stoptime is a StopTime object and | |
is_timepoint is a bool. | |
Raises: | |
ValueError if this trip does not have the times needed to interpolate | |
""" | |
rv = [] | |
stoptimes = self.GetStopTimes() | |
# If there are no stoptimes [] is the correct return value but if the start | |
# or end are missing times there is no correct return value. | |
if not stoptimes: | |
return [] | |
if (stoptimes[0].GetTimeSecs() is None or | |
stoptimes[-1].GetTimeSecs() is None): | |
raise ValueError("%s must have time at first and last stop" % (self)) | |
cur_timepoint = None | |
next_timepoint = None | |
distance_between_timepoints = 0 | |
distance_traveled_between_timepoints = 0 | |
for i, st in enumerate(stoptimes): | |
if st.GetTimeSecs() != None: | |
cur_timepoint = st | |
distance_between_timepoints = 0 | |
distance_traveled_between_timepoints = 0 | |
if i + 1 < len(stoptimes): | |
k = i + 1 | |
distance_between_timepoints += util.ApproximateDistanceBetweenStops(stoptimes[k-1].stop, stoptimes[k].stop) | |
while stoptimes[k].GetTimeSecs() == None: | |
k += 1 | |
distance_between_timepoints += util.ApproximateDistanceBetweenStops(stoptimes[k-1].stop, stoptimes[k].stop) | |
next_timepoint = stoptimes[k] | |
rv.append( (st.GetTimeSecs(), st, True) ) | |
else: | |
distance_traveled_between_timepoints += util.ApproximateDistanceBetweenStops(stoptimes[i-1].stop, st.stop) | |
distance_percent = distance_traveled_between_timepoints / distance_between_timepoints | |
total_time = next_timepoint.GetTimeSecs() - cur_timepoint.GetTimeSecs() | |
time_estimate = distance_percent * total_time + cur_timepoint.GetTimeSecs() | |
rv.append( (int(round(time_estimate)), st, False) ) | |
return rv | |
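The interpolation step inside `GetTimeInterpolatedStops` above assigns an untimed stop a time proportional to the distance it covers between the surrounding timepoints. A sketch of that single calculation, extracted from the loop (the function name is illustrative; it assumes a nonzero total distance, which the caller must guarantee):

```python
def interpolate_time(cur_time, next_time, dist_done, dist_total):
    # Linear interpolation by distance: the fraction of the inter-timepoint
    # distance already traveled maps to the same fraction of the time gap.
    fraction = float(dist_done) / dist_total
    return int(round(cur_time + fraction * (next_time - cur_time)))
```

For example, an untimed stop a third of the way along a 300-second gap starting at 3600 s gets 3700 s. Note the original loop would raise ZeroDivisionError if two consecutive timepoints share a location, since `distance_between_timepoints` would be 0.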
def ClearStopTimes(self): | |
"""Remove all stop times from this trip. | |
StopTime objects previously returned by GetStopTimes are unchanged but are | |
no longer associated with this trip. | |
""" | |
cursor = self._schedule._connection.cursor() | |
cursor.execute('DELETE FROM stop_times WHERE trip_id=?', (self.trip_id,)) | |
def GetStopTimes(self, problems=None): | |
"""Return a sorted list of StopTime objects for this trip.""" | |
# In theory problems=None should be safe because data from database has been | |
# validated. See comment in _LoadStopTimes for why this isn't always true. | |
cursor = self._schedule._connection.cursor() | |
cursor.execute( | |
'SELECT arrival_secs,departure_secs,stop_headsign,pickup_type,' | |
'drop_off_type,shape_dist_traveled,stop_id,stop_sequence FROM ' | |
'stop_times WHERE ' | |
'trip_id=? ORDER BY stop_sequence', (self.trip_id,)) | |
stop_times = [] | |
stoptime_class = self.GetGtfsFactory().StopTime | |
for row in cursor.fetchall(): | |
stop = self._schedule.GetStop(row[6]) | |
stop_times.append(stoptime_class(problems=problems, | |
stop=stop, | |
arrival_secs=row[0], | |
departure_secs=row[1], | |
stop_headsign=row[2], | |
pickup_type=row[3], | |
drop_off_type=row[4], | |
shape_dist_traveled=row[5], | |
stop_sequence=row[7])) | |
return stop_times | |
def GetHeadwayStopTimes(self, problems=None): | |
"""Deprecated. Please use GetFrequencyStopTimes instead.""" | |
warnings.warn("No longer supported. The HeadwayPeriod class was renamed to " | |
"Frequency, and all related functions were renamed " | |
"accordingly.", DeprecationWarning) | |
return self.GetFrequencyStopTimes(problems) | |
def GetFrequencyStopTimes(self, problems=None): | |
"""Return a list of StopTime objects for each headway-based run. | |
Returns: | |
a list of list of StopTime objects. Each list of StopTime objects | |
represents one run. If this trip doesn't have headways returns an empty | |
list. | |
""" | |
stoptimes_list = [] # list of stoptime lists to be returned | |
stoptime_pattern = self.GetStopTimes() | |
first_secs = stoptime_pattern[0].arrival_secs # first time of the trip | |
stoptime_class = self.GetGtfsFactory().StopTime | |
# for each start time of a headway run | |
for run_secs in self.GetFrequencyStartTimes(): | |
# stop time list for a headway run | |
stoptimes = [] | |
# go through the pattern and generate stoptimes | |
for st in stoptime_pattern: | |
arrival_secs, departure_secs = None, None  # defaults if the stoptime is not a timepoint
if st.arrival_secs != None: | |
arrival_secs = st.arrival_secs - first_secs + run_secs | |
if st.departure_secs != None: | |
departure_secs = st.departure_secs - first_secs + run_secs | |
# append stoptime | |
stoptimes.append(stoptime_class(problems=problems, stop=st.stop, | |
arrival_secs=arrival_secs, | |
departure_secs=departure_secs, | |
stop_headsign=st.stop_headsign, | |
pickup_type=st.pickup_type, | |
drop_off_type=st.drop_off_type, | |
shape_dist_traveled= \ | |
st.shape_dist_traveled, | |
stop_sequence=st.stop_sequence)) | |
# add stoptimes to the stoptimes_list | |
stoptimes_list.append(stoptimes)
return stoptimes_list | |
def GetStartTime(self, problems=problems_module.default_problem_reporter): | |
"""Return the first time of the trip. TODO: For trips defined by frequency | |
return the first time of the first trip.""" | |
cursor = self._schedule._connection.cursor() | |
cursor.execute( | |
'SELECT arrival_secs,departure_secs FROM stop_times WHERE ' | |
'trip_id=? ORDER BY stop_sequence LIMIT 1', (self.trip_id,)) | |
(arrival_secs, departure_secs) = cursor.fetchone() | |
if arrival_secs != None: | |
return arrival_secs | |
elif departure_secs != None: | |
return departure_secs | |
else: | |
problems.InvalidValue('departure_time', '', | |
'The first stop_time in trip %s is missing ' | |
'times.' % self.trip_id) | |
def GetHeadwayStartTimes(self): | |
"""Deprecated. Please use GetFrequencyStartTimes instead.""" | |
warnings.warn("No longer supported. The HeadwayPeriod class was renamed to " | |
"Frequency, and all related functions were renamed " | |
"accordingly.", DeprecationWarning) | |
return self.GetFrequencyStartTimes() | |
def GetFrequencyStartTimes(self): | |
"""Return a list of start time for each headway-based run. | |
Returns: | |
a sorted list of seconds since midnight, the start time of each run. If | |
this trip doesn't have headways returns an empty list.""" | |
start_times = [] | |
# for each headway period of the trip | |
for start_secs, end_secs, headway_secs in self.GetFrequencyTuples(): | |
# reset run secs to the start of the timeframe | |
run_secs = start_secs | |
while run_secs < end_secs: | |
start_times.append(run_secs) | |
# increment current run secs by headway secs | |
run_secs += headway_secs | |
return start_times | |
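`GetFrequencyStartTimes` above expands each `(start_secs, end_secs, headway_secs)` tuple into the start times of the individual runs, stepping by the headway with the end time exclusive. A standalone sketch of that expansion (the function name is illustrative):

```python
def expand_headways(frequency_tuples):
    # Each (start, end, headway) tuple yields runs at start, start+headway,
    # ... up to but not including end, matching GetFrequencyStartTimes.
    start_times = []
    for start_secs, end_secs, headway_secs in frequency_tuples:
        run_secs = start_secs
        while run_secs < end_secs:
            start_times.append(run_secs)
            run_secs += headway_secs
    return start_times
```

A 30-minute window with a 10-minute headway therefore yields three runs, not four: a run starting exactly at `end_secs` is excluded.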
def GetEndTime(self, problems=problems_module.default_problem_reporter): | |
"""Return the last time of the trip. TODO: For trips defined by frequency | |
return the last time of the last trip.""" | |
cursor = self._schedule._connection.cursor() | |
cursor.execute( | |
'SELECT arrival_secs,departure_secs FROM stop_times WHERE ' | |
'trip_id=? ORDER BY stop_sequence DESC LIMIT 1', (self.trip_id,)) | |
(arrival_secs, departure_secs) = cursor.fetchone() | |
if departure_secs != None: | |
return departure_secs | |
elif arrival_secs != None: | |
return arrival_secs | |
else: | |
problems.InvalidValue('arrival_time', '', | |
'The last stop_time in trip %s is missing ' | |
'times.' % self.trip_id) | |
def _GenerateStopTimesTuples(self): | |
"""Generator for rows of the stop_times file""" | |
stoptimes = self.GetStopTimes() | |
for i, st in enumerate(stoptimes): | |
yield st.GetFieldValuesTuple(self.trip_id) | |
def GetStopTimesTuples(self): | |
results = [] | |
for time_tuple in self._GenerateStopTimesTuples(): | |
results.append(time_tuple) | |
return results | |
def GetPattern(self): | |
"""Return a tuple of Stop objects, in the order visited""" | |
stoptimes = self.GetStopTimes() | |
return tuple(st.stop for st in stoptimes) | |
def AddHeadwayPeriodObject(self, headway_period, problem_reporter): | |
"""Deprecated. Please use AddFrequencyObject instead.""" | |
warnings.warn("No longer supported. The HeadwayPeriod class was renamed to " | |
"Frequency, and all related functions were renamed " | |
"accordingly.", DeprecationWarning) | |
self.AddFrequencyObject(headway_period, problem_reporter)
def AddFrequencyObject(self, frequency, problem_reporter): | |
"""Add a Frequency object to this trip's list of Frequencies.""" | |
if frequency is not None: | |
self.AddFrequency(frequency.StartTime(), | |
frequency.EndTime(), | |
frequency.HeadwaySecs(), | |
problem_reporter) | |
def AddHeadwayPeriod(self, start_time, end_time, headway_secs, | |
problem_reporter=problems_module.default_problem_reporter): | |
"""Deprecated. Please use AddFrequency instead.""" | |
warnings.warn("No longer supported. The HeadwayPeriod class was renamed to " | |
"Frequency, and all related functions were renamed " | |
"accordingly.", DeprecationWarning) | |
self.AddFrequency(start_time, end_time, headway_secs, problem_reporter) | |
def AddFrequency(self, start_time, end_time, headway_secs, | |
problem_reporter=problems_module.default_problem_reporter): | |
"""Adds a period to this trip during which the vehicle travels | |
at regular intervals (rather than specifying exact times for each stop). | |
Args: | |
start_time: The time at which this headway period starts, either in | |
numerical seconds since midnight or as "HH:MM:SS" since midnight. | |
end_time: The time at which this headway period ends, either in | |
numerical seconds since midnight or as "HH:MM:SS" since midnight. | |
This value should be larger than start_time. | |
headway_secs: The amount of time, in seconds, between occurrences of
this trip. | |
problem_reporter: Optional parameter that can be used to select | |
how any errors in the other input parameters will be reported. | |
Returns: | |
None | |
""" | |
if start_time == None or start_time == '': # 0 is OK | |
problem_reporter.MissingValue('start_time') | |
return | |
if isinstance(start_time, basestring): | |
try: | |
start_time = util.TimeToSecondsSinceMidnight(start_time) | |
except problems_module.Error: | |
problem_reporter.InvalidValue('start_time', start_time) | |
return | |
elif start_time < 0: | |
problem_reporter.InvalidValue('start_time', start_time) | |
if end_time == None or end_time == '': | |
problem_reporter.MissingValue('end_time') | |
return | |
if isinstance(end_time, basestring): | |
try: | |
end_time = util.TimeToSecondsSinceMidnight(end_time) | |
except problems_module.Error: | |
problem_reporter.InvalidValue('end_time', end_time) | |
return | |
elif end_time < 0: | |
problem_reporter.InvalidValue('end_time', end_time) | |
return | |
if not headway_secs: | |
problem_reporter.MissingValue('headway_secs') | |
return | |
try: | |
headway_secs = int(headway_secs) | |
except ValueError: | |
problem_reporter.InvalidValue('headway_secs', headway_secs) | |
return | |
if headway_secs <= 0: | |
problem_reporter.InvalidValue('headway_secs', headway_secs) | |
return | |
if end_time <= start_time: | |
problem_reporter.InvalidValue('end_time', end_time, | |
'should be greater than start_time') | |
self._headways.append((start_time, end_time, headway_secs)) | |
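`AddFrequency` above accepts times either as "HH:MM:SS" strings or as integer seconds, then checks that they form a valid window with a positive headway. A condensed sketch of that normalization, raising instead of reporting problems (the function name is illustrative; the original tests `basestring` since it is Python 2, and routes failures through a ProblemReporter rather than exceptions):

```python
def normalize_frequency(start_time, end_time, headway_secs):
    # Accept "HH:MM:SS" strings or raw seconds, as AddFrequency does.
    def to_secs(value):
        if isinstance(value, str):
            h, m, s = value.split(":")
            return int(h) * 3600 + int(m) * 60 + int(s)
        return value
    start, end, headway = to_secs(start_time), to_secs(end_time), int(headway_secs)
    # start must be nonnegative, the window nonempty, the headway positive.
    if start < 0 or end <= start or headway <= 0:
        raise ValueError("invalid frequency window")
    return (start, end, headway)
```

This keeps the same acceptance rules; the library version instead calls `MissingValue`/`InvalidValue` on the reporter and returns early, and (notably) still appends the headway when `end_time <= start_time`, treating that case as a non-fatal problem.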
def ClearFrequencies(self): | |
self._headways = [] | |
def _HeadwayOutputTuple(self, headway): | |
return (self.trip_id, | |
util.FormatSecondsSinceMidnight(headway[0]), | |
util.FormatSecondsSinceMidnight(headway[1]), | |
unicode(headway[2])) | |
def GetFrequencyOutputTuples(self): | |
tuples = [] | |
for headway in self._headways: | |
tuples.append(self._HeadwayOutputTuple(headway)) | |
return tuples | |
def GetFrequencyTuples(self): | |
return self._headways | |
def __getattr__(self, name): | |
if name == 'service_period': | |
assert self._schedule, "Must be in a schedule to get service_period" | |
return self._schedule.GetServicePeriod(self.service_id) | |
elif name == 'pattern_id': | |
if '_pattern_id' not in self.__dict__: | |
self.__dict__['_pattern_id'] = hash(self.GetPattern()) | |
return self.__dict__['_pattern_id'] | |
else: | |
return GtfsObjectBase.__getattr__(self, name) | |
def ValidateRouteId(self, problems): | |
if util.IsEmpty(self.route_id): | |
problems.MissingValue('route_id') | |
def ValidateServicePeriod(self, problems): | |
if 'service_period' in self.__dict__: | |
# Some tests assign to the service_period attribute. Patch up self before | |
# proceeding with validation. See also comment in Trip.__init__. | |
self.service_id = self.__dict__['service_period'].service_id | |
del self.service_period | |
if util.IsEmpty(self.service_id): | |
problems.MissingValue('service_id') | |
def ValidateTripId(self, problems): | |
if util.IsEmpty(self.trip_id): | |
problems.MissingValue('trip_id') | |
def ValidateDirectionId(self, problems): | |
if hasattr(self, 'direction_id') and (not util.IsEmpty(self.direction_id)) \ | |
and (self.direction_id != '0') and (self.direction_id != '1'): | |
problems.InvalidValue('direction_id', self.direction_id, | |
'direction_id must be "0" or "1"') | |
def ValidateShapeIdsExistInShapeList(self, problems): | |
if self._schedule: | |
if self.shape_id and self.shape_id not in self._schedule._shapes: | |
problems.InvalidValue('shape_id', self.shape_id) | |
def ValidateRouteIdExistsInRouteList(self, problems): | |
if self._schedule: | |
if self.route_id and self.route_id not in self._schedule.routes: | |
problems.InvalidValue('route_id', self.route_id) | |
def ValidateServiceIdExistsInServiceList(self, problems): | |
if self._schedule: | |
if (self.service_id and | |
self.service_id not in self._schedule.service_periods): | |
problems.InvalidValue('service_id', self.service_id) | |
def Validate(self, problems, validate_children=True): | |
"""Validate attributes of this object. | |
Check that this object has all required values set to a valid value without | |
reference to the rest of the schedule. If the _schedule attribute is set | |
then check that references such as route_id and service_id are correct. | |
Args: | |
problems: A ProblemReporter object | |
validate_children: if True and the _schedule attribute is set then call
ValidateChildren | |
""" | |
self.ValidateRouteId(problems) | |
self.ValidateServicePeriod(problems) | |
self.ValidateDirectionId(problems) | |
self.ValidateTripId(problems) | |
self.ValidateShapeIdsExistInShapeList(problems) | |
self.ValidateRouteIdExistsInRouteList(problems) | |
self.ValidateServiceIdExistsInServiceList(problems) | |
if self._schedule and validate_children: | |
self.ValidateChildren(problems) | |
def ValidateNoDuplicateStopSequences(self, problems): | |
cursor = self._schedule._connection.cursor() | |
cursor.execute("SELECT COUNT(stop_sequence) AS a FROM stop_times " | |
"WHERE trip_id=? GROUP BY stop_sequence HAVING a > 1", | |
(self.trip_id,)) | |
for row in cursor: | |
problems.InvalidValue('stop_sequence', row[0], | |
'Duplicate stop_sequence in trip_id %s' % | |
self.trip_id) | |
def ValidateTripStartAndEndTimes(self, problems, stoptimes): | |
if stoptimes: | |
if stoptimes[0].arrival_time is None and stoptimes[0].departure_time is None: | |
problems.OtherProblem( | |
'No time for start of trip_id "%s"' % (self.trip_id)) | |
if stoptimes[-1].arrival_time is None and stoptimes[-1].departure_time is None: | |
problems.OtherProblem( | |
'No time for end of trip_id "%s"' % (self.trip_id)) | |
def ValidateStopTimesSequenceHasIncreasingTimeAndDistance(self, | |
problems, | |
stoptimes): | |
if stoptimes: | |
route_class = self.GetGtfsFactory().Route | |
# Checks that the arrival time for each time point is after the departure | |
# time of the previous one. Assumes stoptimes is sorted by stop_sequence. | |
prev_departure = 0 | |
prev_stop = None | |
prev_distance = None | |
try: | |
route_type = self._schedule.GetRoute(self.route_id).route_type | |
max_speed = route_class._ROUTE_TYPES[route_type]['max_speed'] | |
except KeyError: | |
# If route_type cannot be found, assume it is 0 (Tram) for checking | |
# speeds between stops. | |
max_speed = route_class._ROUTE_TYPES[0]['max_speed'] | |
for timepoint in stoptimes: | |
# Distance should be a nonnegative float, so in Python 2 any valid | |
# value compares larger than None. | |
distance = timepoint.shape_dist_traveled | |
if distance is not None: | |
if distance > prev_distance and distance >= 0: | |
prev_distance = distance | |
else: | |
if distance == prev_distance: | |
type = problems_module.TYPE_WARNING | |
else: | |
type = problems_module.TYPE_ERROR | |
problems.InvalidValue('stoptimes.shape_dist_traveled', distance, | |
'For the trip %s the stop %s has shape_dist_traveled=%s, ' | |
'which should be larger than the previous ones. In this ' | |
'case, the previous distance was %s.' % | |
(self.trip_id, timepoint.stop_id, distance, prev_distance), | |
type=type) | |
if timepoint.arrival_secs is not None: | |
self._CheckSpeed(prev_stop, timepoint.stop, prev_departure, | |
timepoint.arrival_secs, max_speed, problems) | |
if timepoint.arrival_secs >= prev_departure: | |
prev_departure = timepoint.departure_secs | |
prev_stop = timepoint.stop | |
else: | |
problems.OtherProblem('Timetravel detected! Arrival time ' | |
'is before previous departure ' | |
'at sequence number %s in trip %s' % | |
(timepoint.stop_sequence, self.trip_id)) | |
def ValidateShapeDistTraveledSmallerThanMaxShapeDistance(self, | |
problems, | |
stoptimes): | |
if stoptimes: | |
if self.shape_id and self.shape_id in self._schedule._shapes: | |
shape = self._schedule.GetShape(self.shape_id) | |
max_shape_dist = shape.max_distance | |
st = stoptimes[-1] | |
if (st.shape_dist_traveled and | |
st.shape_dist_traveled > max_shape_dist): | |
problems.OtherProblem( | |
'In stop_times.txt, the stop with trip_id=%s and ' | |
'stop_sequence=%d has shape_dist_traveled=%f, which is larger ' | |
'than the max shape_dist_traveled=%f of the corresponding ' | |
'shape (shape_id=%s)' % | |
(self.trip_id, st.stop_sequence, st.shape_dist_traveled, | |
max_shape_dist, self.shape_id), | |
type=problems_module.TYPE_WARNING) | |
def ValidateDistanceFromStopToShape(self, problems, stoptimes): | |
if stoptimes: | |
if self.shape_id and self.shape_id in self._schedule._shapes: | |
shape = self._schedule.GetShape(self.shape_id) | |
max_shape_dist = shape.max_distance | |
# shape_dist_traveled is valid in shape if max_shape_dist larger than | |
# 0. | |
if max_shape_dist > 0: | |
for st in stoptimes: | |
if st.shape_dist_traveled is None: | |
continue | |
pt = shape.GetPointWithDistanceTraveled(st.shape_dist_traveled) | |
if pt: | |
stop = self._schedule.GetStop(st.stop_id) | |
distance = util.ApproximateDistance(stop.stop_lat, stop.stop_lon, | |
pt[0], pt[1]) | |
if distance > problems_module.MAX_DISTANCE_FROM_STOP_TO_SHAPE: | |
problems.StopTooFarFromShapeWithDistTraveled( | |
self.trip_id, stop.stop_name, stop.stop_id, pt[2], | |
self.shape_id, distance, | |
problems_module.MAX_DISTANCE_FROM_STOP_TO_SHAPE) | |
def ValidateFrequencies(self, problems): | |
# O(n^2), but we don't anticipate many headway periods per trip | |
for headway_index, headway in enumerate(self._headways[0:-1]): | |
for other in self._headways[headway_index + 1:]: | |
if (other[0] < headway[1]) and (other[1] > headway[0]): | |
problems.OtherProblem('Trip contains overlapping headway periods ' | |
'%s and %s' % | |
(self._HeadwayOutputTuple(headway), | |
self._HeadwayOutputTuple(other))) | |
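The overlap test above is the standard interval-intersection predicate: two ranges overlap exactly when each starts before the other ends. A minimal standalone sketch of the same check (the tuple layout mirrors the first two fields of the headway tuples; the helper name is ours, not part of the module):

```python
def periods_overlap(a, b):
    """Return True if the time ranges a and b share any instant.

    Each range is a (start_secs, end_secs) tuple. Ranges that merely touch
    at an endpoint do not count as overlapping, matching the strict
    comparisons used in ValidateFrequencies.
    """
    return a[0] < b[1] and a[1] > b[0]
```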
def ValidateChildren(self, problems): | |
"""Validate StopTimes and headways of this trip.""" | |
assert self._schedule, "Trip must be in a schedule to ValidateChildren" | |
# TODO: validate distance values in stop times (if applicable) | |
self.ValidateNoDuplicateStopSequences(problems) | |
stoptimes = self.GetStopTimes(problems) | |
stoptimes.sort(key=lambda x: x.stop_sequence) | |
self.ValidateTripStartAndEndTimes(problems, stoptimes) | |
self.ValidateStopTimesSequenceHasIncreasingTimeAndDistance(problems, | |
stoptimes) | |
self.ValidateShapeDistTraveledSmallerThanMaxShapeDistance(problems, | |
stoptimes) | |
self.ValidateDistanceFromStopToShape(problems, stoptimes) | |
self.ValidateFrequencies(problems) | |
def ValidateBeforeAdd(self, problems): | |
return True | |
def ValidateAfterAdd(self, problems): | |
self.Validate(problems) | |
def _CheckSpeed(self, prev_stop, next_stop, depart_time, | |
arrive_time, max_speed, problems): | |
# Checks that the speed between two stops is not faster than max_speed | |
if prev_stop is not None: | |
try: | |
time_between_stops = arrive_time - depart_time | |
except TypeError: | |
return | |
try: | |
dist_between_stops = \ | |
util.ApproximateDistanceBetweenStops(next_stop, prev_stop) | |
except TypeError: | |
return | |
if time_between_stops == 0: | |
# HASTUS makes it hard to output GTFS with times to the nearest second; | |
# it rounds times to the nearest minute. Therefore stop_times at the | |
# same time ending in :00 are fairly common. Times off by no more than | |
# 30 seconds have not caused problems in practice. See | |
# http://code.google.com/p/googletransitdatafeed/issues/detail?id=193 | |
# Only warn if the times are not rounded to the nearest minute or the | |
# distance implies more than max_speed sustained for one minute. | |
if depart_time % 60 != 0 or dist_between_stops / 1000 * 60 > max_speed: | |
problems.TooFastTravel(self.trip_id, | |
prev_stop.stop_name, | |
next_stop.stop_name, | |
dist_between_stops, | |
time_between_stops, | |
speed=None, | |
type=problems_module.TYPE_WARNING) | |
return | |
# This needs floating point division for precision. | |
speed_between_stops = ((float(dist_between_stops) / 1000) / | |
(float(time_between_stops) / 3600)) | |
if speed_between_stops > max_speed: | |
problems.TooFastTravel(self.trip_id, | |
prev_stop.stop_name, | |
next_stop.stop_name, | |
dist_between_stops, | |
time_between_stops, | |
speed_between_stops, | |
type=problems_module.TYPE_WARNING) | |
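The speed computed at the end of _CheckSpeed converts meters and seconds into km/h via explicit float division. A self-contained sketch of the same arithmetic (the function name is ours for illustration):

```python
def speed_kmh(dist_meters, time_seconds):
    """Speed in km/h from a distance in meters over a duration in seconds,
    using the same float arithmetic as _CheckSpeed."""
    if time_seconds == 0:
        return None  # zero durations are special-cased by the caller
    return (float(dist_meters) / 1000) / (float(time_seconds) / 3600)
```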
def AddToSchedule(self, schedule, problems): | |
schedule.AddTripObject(self, problems) | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2009 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
import codecs | |
import csv | |
import datetime | |
import math | |
import optparse | |
import random | |
import re | |
import sys | |
import problems | |
from trip import Trip | |
class OptionParserLongError(optparse.OptionParser): | |
"""OptionParser subclass that includes list of options above error message.""" | |
def error(self, msg): | |
print >>sys.stderr, self.format_help() | |
print >>sys.stderr, '\n\n%s: error: %s\n\n' % (self.get_prog_name(), msg) | |
sys.exit(2) | |
def RunWithCrashHandler(f): | |
try: | |
exit_code = f() | |
sys.exit(exit_code) | |
except (SystemExit, KeyboardInterrupt): | |
raise | |
except: | |
import inspect | |
import traceback | |
# Save trace and exception now. These calls look at the most recently | |
# raised exception. The code that makes the report might trigger other | |
# exceptions. | |
original_trace = inspect.trace(3)[1:] | |
formatted_exception = traceback.format_exception_only(*(sys.exc_info()[:2])) | |
apology = """Yikes, the program threw an unexpected exception! | |
Hopefully a complete report has been saved to transitfeedcrash.txt, | |
though if you are seeing this message we've already disappointed you once | |
today. Please include the report in a new issue at | |
http://code.google.com/p/googletransitdatafeed/issues/entry | |
or an email to the public group googletransitdatafeed@googlegroups.com. Sorry! | |
""" | |
dashes = '%s\n' % ('-' * 60) | |
dump = [] | |
dump.append(apology) | |
dump.append(dashes) | |
try: | |
import transitfeed | |
dump.append("transitfeed version %s\n\n" % transitfeed.__version__) | |
except NameError: | |
# Oh well, guess we won't put the version in the report | |
pass | |
for (frame_obj, filename, line_num, fun_name, context_lines, | |
context_index) in original_trace: | |
dump.append('File "%s", line %d, in %s\n' % (filename, line_num, | |
fun_name)) | |
if context_lines: | |
for (i, line) in enumerate(context_lines): | |
if i == context_index: | |
dump.append(' --> %s' % line) | |
else: | |
dump.append(' %s' % line) | |
for local_name, local_val in frame_obj.f_locals.items(): | |
try: | |
truncated_val = str(local_val)[0:500] | |
except Exception, e: | |
dump.append(' Exception in str(%s): %s' % (local_name, e)) | |
else: | |
if len(truncated_val) >= 500: | |
truncated_val = '%s...' % truncated_val[0:499] | |
dump.append(' %s = %s\n' % (local_name, truncated_val)) | |
dump.append('\n') | |
dump.append(''.join(formatted_exception)) | |
open('transitfeedcrash.txt', 'w').write(''.join(dump)) | |
print ''.join(dump) | |
print dashes | |
print apology | |
try: | |
raw_input('Press enter to continue...') | |
except EOFError: | |
# Ignore stdin being closed. This happens during some tests. | |
pass | |
sys.exit(127) | |
# Pick one of two defaultdict implementations. A native version was added to | |
# the collections library in python 2.5. If that is not available use Jason's | |
# pure python recipe. He gave us permission to distribute it. | |
# On Mon, Nov 30, 2009 at 07:27, jason kirtland <jek at discorporate.us> wrote: | |
# > | |
# > Hi Tom, sure thing! It's not easy to find on the cookbook site, but the | |
# > recipe is under the Python license. | |
# > | |
# > Cheers, | |
# > Jason | |
# > | |
# > On Thu, Nov 26, 2009 at 3:03 PM, Tom Brown <tom.brown.code@gmail.com> wrote: | |
# > | |
# >> I would like to include http://code.activestate.com/recipes/523034/ in | |
# >> http://code.google.com/p/googletransitdatafeed/wiki/TransitFeedDistribution | |
# >> which is distributed under the Apache License, Version 2.0 with Copyright | |
# >> Google. May we include your code with a comment in the source pointing at | |
# >> the original URL? Thanks, Tom Brown | |
try: | |
# Try the native implementation first | |
from collections import defaultdict | |
except ImportError: | |
# Fallback for Python 2.4, which did not include collections.defaultdict | |
class defaultdict(dict): | |
def __init__(self, default_factory=None, *a, **kw): | |
if (default_factory is not None and | |
not hasattr(default_factory, '__call__')): | |
raise TypeError('first argument must be callable') | |
dict.__init__(self, *a, **kw) | |
self.default_factory = default_factory | |
def __getitem__(self, key): | |
try: | |
return dict.__getitem__(self, key) | |
except KeyError: | |
return self.__missing__(key) | |
def __missing__(self, key): | |
if self.default_factory is None: | |
raise KeyError(key) | |
self[key] = value = self.default_factory() | |
return value | |
def __reduce__(self): | |
if self.default_factory is None: | |
args = tuple() | |
else: | |
args = self.default_factory, | |
return type(self), args, None, None, self.items() | |
def copy(self): | |
return self.__copy__() | |
def __copy__(self): | |
return type(self)(self.default_factory, self) | |
def __deepcopy__(self, memo): | |
import copy | |
return type(self)(self.default_factory, | |
copy.deepcopy(self.items())) | |
def __repr__(self): | |
return 'defaultdict(%s, %s)' % (self.default_factory, | |
dict.__repr__(self)) | |
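Whichever implementation is selected, callers see the same API: indexing a missing key invokes the default factory and stores the result. A minimal usage sketch (the route and trip ids below are made up for illustration):

```python
from collections import defaultdict  # the native version, when available

# Group trip ids by route id; a missing key gets a fresh list automatically,
# so no "if key not in dict" boilerplate is needed.
trips_by_route = defaultdict(list)
for route_id, trip_id in [('R1', 'T1'), ('R2', 'T2'), ('R1', 'T3')]:
    trips_by_route[route_id].append(trip_id)
```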
OUTPUT_ENCODING = 'utf-8' | |
def EncodeUnicode(text): | |
""" | |
Optionally encode text and return it. The result should be safe to print. | |
""" | |
if isinstance(text, unicode): | |
return text.encode(OUTPUT_ENCODING) | |
else: | |
return text | |
def IsValidURL(url): | |
"""Checks the validity of a URL value.""" | |
# TODO: Add more thorough checking of URL | |
return url.startswith(u'http://') or url.startswith(u'https://') | |
def IsValidColor(color): | |
"""Checks the validity of a hex color value.""" | |
return re.match('^[0-9a-fA-F]{6}$', color) is not None | |
def ColorLuminance(color): | |
"""Compute the brightness of an sRGB color using the formula from | |
http://www.w3.org/TR/2000/WD-AERT-20000426#color-contrast. | |
Args: | |
color: a string of six hex digits in the format verified by IsValidColor(). | |
Returns: | |
A floating-point number between 0.0 (black) and 255.0 (white). """ | |
r = int(color[0:2], 16) | |
g = int(color[2:4], 16) | |
b = int(color[4:6], 16) | |
return (299*r + 587*g + 114*b) / 1000.0 | |
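The weights 299/587/114 come from the W3C AERT color-contrast formula linked above. A standalone sketch, useful for sanity-checking against the endpoints (the function name is ours):

```python
def color_luminance(color):
    """Brightness of a six-hex-digit sRGB color, from 0.0 (black) to
    255.0 (white), using the W3C AERT perceptual weights."""
    r = int(color[0:2], 16)
    g = int(color[2:4], 16)
    b = int(color[4:6], 16)
    return (299 * r + 587 * g + 114 * b) / 1000.0
```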
def IsEmpty(value): | |
return value is None or (isinstance(value, basestring) and not value.strip()) | |
def FindUniqueId(dic): | |
"""Return a string not used as a key in the dictionary dic""" | |
name = str(len(dic)) | |
while name in dic: | |
# Use bigger numbers so it is obvious when an id is picked randomly. | |
name = str(random.randint(1000000, 999999999)) | |
return name | |
def TimeToSecondsSinceMidnight(time_string): | |
"""Convert HHH:MM:SS into seconds since midnight. | |
For example "01:02:03" returns 3723. The leading zero of the hours may be | |
omitted. HH may be more than 23 if the time is on the following day.""" | |
m = re.match(r'(\d{1,3}):([0-5]\d):([0-5]\d)$', time_string) | |
# ignored: matching for leap seconds | |
if not m: | |
raise problems.Error('Bad HH:MM:SS "%s"' % time_string) | |
return int(m.group(1)) * 3600 + int(m.group(2)) * 60 + int(m.group(3)) | |
def FormatSecondsSinceMidnight(s): | |
"""Formats an int number of seconds past midnight into a string | |
as "HH:MM:SS".""" | |
return "%02d:%02d:%02d" % (s / 3600, (s / 60) % 60, s % 60) | |
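The two helpers above form a round trip, including GTFS times past midnight such as "25:00:00". A sketch of both directions; it uses floor division (`//`) so the arithmetic also holds under Python 3's true division, whereas the module itself relies on Python 2's integer `/` (function names are ours):

```python
import re

def time_to_secs(time_string):
    """Parse "H:MM:SS" (hours may exceed 23) into seconds since midnight."""
    m = re.match(r'(\d{1,3}):([0-5]\d):([0-5]\d)$', time_string)
    if not m:
        raise ValueError('Bad HH:MM:SS "%s"' % time_string)
    return int(m.group(1)) * 3600 + int(m.group(2)) * 60 + int(m.group(3))

def secs_to_time(s):
    """Format seconds since midnight as "HH:MM:SS"."""
    return '%02d:%02d:%02d' % (s // 3600, (s // 60) % 60, s % 60)
```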
def DateStringToDateObject(date_string): | |
"""Return a date object for a string "YYYYMMDD".""" | |
# If this becomes a bottleneck date objects could be cached | |
return datetime.date(int(date_string[0:4]), int(date_string[4:6]), | |
int(date_string[6:8])) | |
def FloatStringToFloat(float_string, problems=None): | |
"""Convert a float as a string to a float or raise an exception""" | |
# Will raise TypeError unless a string | |
match = re.match(r"^[+-]?\d+(\.\d+)?$", float_string) | |
# Will raise ValueError if the string can't be parsed | |
parsed_value = float(float_string) | |
if "x" in float_string: | |
# This is needed because Python 2.4 does not complain about float("0x20"). | |
# But it does complain about float("0b10"), so this should be enough. | |
raise ValueError() | |
if not match and problems is not None: | |
# Does not match the regex, but it's a float according to Python | |
problems.InvalidFloatValue(float_string) | |
return parsed_value | |
def NonNegIntStringToInt(int_string, problems=None): | |
"""Convert an non-negative integer string to an int or raise an exception""" | |
# Will raise TypeError unless a string | |
match = re.match(r"^(?:0|[1-9]\d*)$", int_string) | |
# Will raise ValueError if the string can't be parsed | |
parsed_value = int(int_string) | |
if parsed_value < 0: | |
raise ValueError() | |
elif not match and problems is not None: | |
# Does not match the regex, but it's an int according to Python | |
problems.InvalidNonNegativeIntegerValue(int_string) | |
return parsed_value | |
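Both converters above pair Python's permissive parsing with a strict regex: a value that Python parses but the regex rejects (exponents, leading whitespace, leading zeros, an explicit `+`) draws a warning rather than an error. A standalone sketch of just the strictness checks (names are ours):

```python
import re

# Canonical forms the validator accepts without a warning.
FLOAT_RE = re.compile(r'^[+-]?\d+(\.\d+)?$')
NONNEG_INT_RE = re.compile(r'^(?:0|[1-9]\d*)$')

def is_strict_float(s):
    """True if s is a plain decimal float literal."""
    return FLOAT_RE.match(s) is not None

def is_strict_nonneg_int(s):
    """True if s is a canonical non-negative decimal integer string."""
    return NONNEG_INT_RE.match(s) is not None
```

For example, `float('1e3')` and `int('042')` both succeed in Python, yet both strings fail the strict checks.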
EARTH_RADIUS = 6378135 # in meters | |
def ApproximateDistance(degree_lat1, degree_lng1, degree_lat2, degree_lng2): | |
"""Compute approximate distance between two points in meters. Assumes the | |
Earth is a sphere.""" | |
# TODO: change to ellipsoid approximation, such as | |
# http://www.codeguru.com/Cpp/Cpp/algorithms/article.php/c5115/ | |
lat1 = math.radians(degree_lat1) | |
lng1 = math.radians(degree_lng1) | |
lat2 = math.radians(degree_lat2) | |
lng2 = math.radians(degree_lng2) | |
dlat = math.sin(0.5 * (lat2 - lat1)) | |
dlng = math.sin(0.5 * (lng2 - lng1)) | |
x = dlat * dlat + dlng * dlng * math.cos(lat1) * math.cos(lat2) | |
return EARTH_RADIUS * (2 * math.atan2(math.sqrt(x), | |
math.sqrt(max(0.0, 1.0 - x)))) | |
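ApproximateDistance is the haversine formula on a sphere of the module's EARTH_RADIUS; the `max(0.0, ...)` clamp guards against tiny negative values from floating-point rounding near antipodal points. A self-contained sketch of the same computation (the function name is ours):

```python
import math

EARTH_RADIUS_M = 6378135  # meters; matches the module's EARTH_RADIUS

def approx_distance(lat1, lng1, lat2, lng2):
    """Great-circle distance in meters between two lat/lng points given in
    degrees, treating the Earth as a sphere (haversine formula)."""
    lat1, lng1, lat2, lng2 = map(math.radians, (lat1, lng1, lat2, lng2))
    dlat = math.sin(0.5 * (lat2 - lat1))
    dlng = math.sin(0.5 * (lng2 - lng1))
    x = dlat * dlat + dlng * dlng * math.cos(lat1) * math.cos(lat2)
    return EARTH_RADIUS_M * 2 * math.atan2(
        math.sqrt(x), math.sqrt(max(0.0, 1.0 - x)))
```

One degree of longitude on the equator comes out near 111 km, a handy sanity check.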
def ApproximateDistanceBetweenStops(stop1, stop2): | |
"""Compute approximate distance between two stops in meters. Assumes the | |
Earth is a sphere.""" | |
return ApproximateDistance(stop1.stop_lat, stop1.stop_lon, | |
stop2.stop_lat, stop2.stop_lon) | |
class CsvUnicodeWriter: | |
""" | |
Create a wrapper around a csv writer object which can safely write unicode | |
values. Passes all arguments to csv.writer. | |
""" | |
def __init__(self, *args, **kwargs): | |
self.writer = csv.writer(*args, **kwargs) | |
def writerow(self, row): | |
"""Write row to the csv file. Any unicode strings in row are encoded as | |
utf-8.""" | |
encoded_row = [] | |
for s in row: | |
if isinstance(s, unicode): | |
encoded_row.append(s.encode("utf-8")) | |
else: | |
encoded_row.append(s) | |
try: | |
self.writer.writerow(encoded_row) | |
except Exception, e: | |
print 'error writing %s as %s' % (row, encoded_row) | |
raise | |
def writerows(self, rows): | |
"""Write rows to the csv file. Any unicode strings in rows are encoded as | |
utf-8.""" | |
for row in rows: | |
self.writerow(row) | |
def __getattr__(self, name): | |
return getattr(self.writer, name) | |
# Map from literal string that should never be found in the csv data to a human | |
# readable description | |
INVALID_LINE_SEPARATOR_UTF8 = { | |
"\x0c": "ASCII Form Feed 0x0C", | |
# May be part of end of line, but not found elsewhere | |
"\x0d": "ASCII Carriage Return 0x0D, \\r", | |
"\xe2\x80\xa8": "Unicode LINE SEPARATOR U+2028", | |
"\xe2\x80\xa9": "Unicode PARAGRAPH SEPARATOR U+2029", | |
"\xc2\x85": "Unicode NEXT LINE SEPARATOR U+0085", | |
} | |
class EndOfLineChecker: | |
"""Wrapper for a file-like object that checks for consistent line ends. | |
The check for consistent end of lines (all CR LF or all LF) only happens if | |
next() is called until it raises StopIteration. | |
""" | |
def __init__(self, f, name, problems): | |
"""Create new object. | |
Args: | |
f: file-like object to wrap | |
name: name to use for f. StringIO objects don't have a name attribute. | |
problems: a ProblemReporterBase object | |
""" | |
self._f = f | |
self._name = name | |
self._crlf = 0 | |
self._crlf_examples = [] | |
self._lf = 0 | |
self._lf_examples = [] | |
self._line_number = 0 # first line will be number 1 | |
self._problems = problems | |
def __iter__(self): | |
return self | |
def next(self): | |
"""Return next line without end of line marker or raise StopIteration.""" | |
try: | |
next_line = self._f.next() | |
except StopIteration: | |
self._FinalCheck() | |
raise | |
self._line_number += 1 | |
m_eol = re.search(r"[\x0a\x0d]*$", next_line) | |
if m_eol.group() == "\x0d\x0a": | |
self._crlf += 1 | |
if self._crlf <= 5: | |
self._crlf_examples.append(self._line_number) | |
elif m_eol.group() == "\x0a": | |
self._lf += 1 | |
if self._lf <= 5: | |
self._lf_examples.append(self._line_number) | |
elif m_eol.group() == "": | |
# Should only happen at the end of the file | |
try: | |
self._f.next() | |
raise RuntimeError("Unexpected row without new line sequence") | |
except StopIteration: | |
# Will be raised again when EndOfLineChecker.next() is next called | |
pass | |
else: | |
self._problems.InvalidLineEnd( | |
codecs.getencoder('string_escape')(m_eol.group())[0], | |
(self._name, self._line_number)) | |
next_line_contents = next_line[0:m_eol.start()] | |
for seq, name in INVALID_LINE_SEPARATOR_UTF8.items(): | |
if next_line_contents.find(seq) != -1: | |
self._problems.OtherProblem( | |
"Line contains %s" % name, | |
context=(self._name, self._line_number)) | |
return next_line_contents | |
def _FinalCheck(self): | |
if self._crlf > 0 and self._lf > 0: | |
crlf_plural = self._crlf > 1 and "s" or "" | |
crlf_lines = ", ".join(["%s" % e for e in self._crlf_examples]) | |
if self._crlf > len(self._crlf_examples): | |
crlf_lines += ", ..." | |
lf_plural = self._lf > 1 and "s" or "" | |
lf_lines = ", ".join(["%s" % e for e in self._lf_examples]) | |
if self._lf > len(self._lf_examples): | |
lf_lines += ", ..." | |
self._problems.OtherProblem( | |
"Found %d CR LF \"\\r\\n\" line end%s (line%s %s) and " | |
"%d LF \"\\n\" line end%s (line%s %s). A file must use a " | |
"consistent line end." % (self._crlf, crlf_plural, crlf_plural, | |
crlf_lines, self._lf, lf_plural, | |
lf_plural, lf_lines), | |
(self._name,)) | |
# Prevent _FinalCheck() from reporting the problem twice, in the unlikely | |
# case that it is run twice | |
self._crlf = 0 | |
self._lf = 0 | |
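The per-line tallies EndOfLineChecker keeps reduce to a small counting pass over the text. A standalone sketch of the same bookkeeping, assuming str input and splitting with keepends (function name is ours):

```python
def count_line_ends(text):
    """Count (CR LF, bare LF) line ends in text, the same tallies
    EndOfLineChecker accumulates while iterating a file. A file is
    consistent only if at most one of the two counts is nonzero."""
    crlf = lf = 0
    for line in text.splitlines(True):  # True keeps the end-of-line chars
        if line.endswith('\r\n'):
            crlf += 1
        elif line.endswith('\n'):
            lf += 1
    return crlf, lf
```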
def SortListOfTripByTime(trips): | |
trips.sort(key=Trip.GetStartTime) | |
#!/usr/bin/python2.5 | |
# Copyright (C) 2007 Google Inc. | |
# | |
# Licensed under the Apache License, Version 2.0 (the "License"); | |
# you may not use this file except in compliance with the License. | |
# You may obtain a copy of the License at | |
# | |
# http://www.apache.org/licenses/LICENSE-2.0 | |
# | |
# Unless required by applicable law or agreed to in writing, software | |
# distributed under the License is distributed on an "AS IS" BASIS, | |
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
# See the License for the specific language governing permissions and | |
# limitations under the License. | |
""" | |
Filters out trips which are not on the default routes and | |
sets their trip_type attribute accordingly. | |
For usage information run unusual_trip_filter.py --help | |
""" | |
__author__ = 'Jiri Semecky <jiri.semecky@gmail.com>' | |
import codecs | |
import os | |
import os.path | |
import sys | |
import time | |
import transitfeed | |
from transitfeed import util | |
class UnusualTripFilter(object): | |
"""Class filtering trips going on unusual paths. | |
Those are usually trips going to/from depot or changing to another route | |
in the middle. Sets the 'trip_type' attribute of the trips.txt dataset | |
so that non-standard trips are marked as special (value 1) | |
instead of regular (default value 0). | |
""" | |
def __init__ (self, threshold=0.1, force=False, quiet=False, route_type=None): | |
self._threshold = threshold | |
self._quiet = quiet | |
self._force = force | |
if route_type in transitfeed.Route._ROUTE_TYPE_NAMES: | |
self._route_type = transitfeed.Route._ROUTE_TYPE_NAMES[route_type] | |
elif route_type is None: | |
self._route_type = None | |
else: | |
self._route_type = int(route_type) | |
def filter_line(self, route): | |
"""Mark unusual trips for the given route.""" | |
if self._route_type is not None and self._route_type != route.route_type: | |
self.info('Skipping route %s due to different route_type value (%s)' % | |
(route['route_id'], route['route_type'])) | |
return | |
self.info('Filtering infrequent trips for route %s.' % route.route_id) | |
trip_count = len(route.trips) | |
for pattern_id, pattern in route.GetPatternIdTripDict().items(): | |
ratio = float(len(pattern)) / trip_count | |
if not self._force: | |
if (ratio < self._threshold): | |
self.info("\t%d trips on route %s with headsign '%s' recognized " | |
"as unusual (ratio %f)" % | |
(len(pattern), | |
route['route_short_name'], | |
pattern[0]['trip_headsign'], | |
ratio)) | |
for trip in pattern: | |
trip.trip_type = 1 # special | |
self.info("\t\tsetting trip_type of trip %s as special" % | |
trip.trip_id) | |
else: | |
self.info("\t%d trips on route %s with headsign '%s' recognized " | |
"as %s (ratio %f)" % | |
(len(pattern), | |
route['route_short_name'], | |
pattern[0]['trip_headsign'], | |
('regular', 'unusual')[ratio < self._threshold], | |
ratio)) | |
for trip in pattern: | |
trip.trip_type = ('0','1')[ratio < self._threshold] | |
self.info("\t\tsetting trip_type of trip %s as %s" % | |
(trip.trip_id, | |
('regular', 'unusual')[ratio < self._threshold])) | |
def filter(self, dataset): | |
"""Mark unusual trips for all the routes in the dataset.""" | |
self.info('Going to filter infrequent routes in the dataset') | |
for route in dataset.routes.values(): | |
self.filter_line(route) | |
def info(self, text): | |
if not self._quiet: | |
print text.encode("utf-8") | |
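The heart of filter_line is a share-of-trips test: a stop pattern is unusual when its trips make up less than the threshold fraction of the route's trips. A standalone sketch of that decision (the pattern ids and counts below are hypothetical):

```python
def mark_unusual_patterns(pattern_sizes, threshold=0.1):
    """Given a dict of pattern_id -> trip count for one route, return the
    set of pattern ids whose share of the route's trips falls below
    threshold; these are the trips the filter marks with trip_type=1."""
    total = sum(pattern_sizes.values())
    return set(pid for pid, n in pattern_sizes.items()
               if float(n) / total < threshold)
```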
def main(): | |
usage = \ | |
'''%prog [options] <GTFS.zip> | |
Sets the trip_type for trips that have an unusual pattern for a route. | |
<GTFS.zip> is overwritten with the modified GTFS file unless the --output | |
option is used. | |
For more information see | |
http://code.google.com/p/googletransitdatafeed/wiki/UnusualTripFilter | |
''' | |
parser = util.OptionParserLongError( | |
usage=usage, version='%prog '+transitfeed.__version__) | |
parser.add_option('-o', '--output', dest='output', metavar='FILE', | |
help='Name of the output GTFS file (writing to input feed if omitted).') | |
parser.add_option('-m', '--memory_db', dest='memory_db', action='store_true', | |
help='Force use of in-memory sqlite db.') | |
parser.add_option('-t', '--threshold', default=0.1, | |
dest='threshold', type='float', | |
help='Frequency threshold for considering pattern as non-regular.') | |
parser.add_option('-r', '--route_type', default=None, | |
dest='route_type', type='string', | |
help='Filter only selected route type (specified by number ' | |
'or one of the following names: ' + \ | |
', '.join(transitfeed.Route._ROUTE_TYPE_NAMES) + ').') | |
parser.add_option('-f', '--override_trip_type', default=False, | |
dest='override_trip_type', action='store_true', | |
help='Forces overwrite of current trip_type values.') | |
parser.add_option('-q', '--quiet', dest='quiet', | |
default=False, action='store_true', | |
help='Suppress information output.') | |
(options, args) = parser.parse_args() | |
if len(args) != 1: | |
parser.error('You must provide the path of a single feed.') | |
filter = UnusualTripFilter(float(options.threshold), | |
force=options.override_trip_type, | |
quiet=options.quiet, | |
route_type=options.route_type) | |
feed_name = args[0] | |
feed_name = feed_name.strip() | |
filter.info('Loading %s' % feed_name) | |
loader = transitfeed.Loader(feed_name, extra_validation=True, | |
memory_db=options.memory_db) | |
data = loader.Load() | |
filter.filter(data) | |
print 'Saving data' | |
# Write the result | |
if options.output is None: | |
data.WriteGoogleTransitFeed(feed_name) | |
else: | |
data.WriteGoogleTransitFeed(options.output) | |
if __name__ == '__main__': | |
util.RunWithCrashHandler(main) | |