Upgrade origin-src to Google transitfeed 1.2.6
Binary files a/origin-src/transitfeed-1.2.5.tar.gz and /dev/null differ
--- a/origin-src/transitfeed-1.2.5/COPYING
+++ /dev/null
@@ -1,203 +1,1 @@
- Apache License
- Version 2.0, January 2004
- http://www.apache.org/licenses/
-
- TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
- 1. Definitions.
-
- "License" shall mean the terms and conditions for use, reproduction,
- and distribution as defined by Sections 1 through 9 of this document.
-
- "Licensor" shall mean the copyright owner or entity authorized by
- the copyright owner that is granting the License.
-
- "Legal Entity" shall mean the union of the acting entity and all
- other entities that control, are controlled by, or are under common
- control with that entity. For the purposes of this definition,
- "control" means (i) the power, direct or indirect, to cause the
- direction or management of such entity, whether by contract or
- otherwise, or (ii) ownership of fifty percent (50%) or more of the
- outstanding shares, or (iii) beneficial ownership of such entity.
-
- "You" (or "Your") shall mean an individual or Legal Entity
- exercising permissions granted by this License.
-
- "Source" form shall mean the preferred form for making modifications,
- including but not limited to software source code, documentation
- source, and configuration files.
-
- "Object" form shall mean any form resulting from mechanical
- transformation or translation of a Source form, including but
- not limited to compiled object code, generated documentation,
- and conversions to other media types.
-
- "Work" shall mean the work of authorship, whether in Source or
- Object form, made available under the License, as indicated by a
- copyright notice that is included in or attached to the work
- (an example is provided in the Appendix below).
-
- "Derivative Works" shall mean any work, whether in Source or Object
- form, that is based on (or derived from) the Work and for which the
- editorial revisions, annotations, elaborations, or other modifications
- represent, as a whole, an original work of authorship. For the purposes
- of this License, Derivative Works shall not include works that remain
- separable from, or merely link (or bind by name) to the interfaces of,
- the Work and Derivative Works thereof.
-
- "Contribution" shall mean any work of authorship, including
- the original version of the Work and any modifications or additions
- to that Work or Derivative Works thereof, that is intentionally
- submitted to Licensor for inclusion in the Work by the copyright owner
- or by an individual or Legal Entity authorized to submit on behalf of
- the copyright owner. For the purposes of this definition, "submitted"
- means any form of electronic, verbal, or written communication sent
- to the Licensor or its representatives, including but not limited to
- communication on electronic mailing lists, source code control systems,
- and issue tracking systems that are managed by, or on behalf of, the
- Licensor for the purpose of discussing and improving the Work, but
- excluding communication that is conspicuously marked or otherwise
- designated in writing by the copyright owner as "Not a Contribution."
-
- "Contributor" shall mean Licensor and any individual or Legal Entity
- on behalf of whom a Contribution has been received by Licensor and
- subsequently incorporated within the Work.
-
- 2. Grant of Copyright License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- copyright license to reproduce, prepare Derivative Works of,
- publicly display, publicly perform, sublicense, and distribute the
- Work and such Derivative Works in Source or Object form.
-
- 3. Grant of Patent License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- (except as stated in this section) patent license to make, have made,
- use, offer to sell, sell, import, and otherwise transfer the Work,
- where such license applies only to those patent claims licensable
- by such Contributor that are necessarily infringed by their
- Contribution(s) alone or by combination of their Contribution(s)
- with the Work to which such Contribution(s) was submitted. If You
- institute patent litigation against any entity (including a
- cross-claim or counterclaim in a lawsuit) alleging that the Work
- or a Contribution incorporated within the Work constitutes direct
- or contributory patent infringement, then any patent licenses
- granted to You under this License for that Work shall terminate
- as of the date such litigation is filed.
-
- 4. Redistribution. You may reproduce and distribute copies of the
- Work or Derivative Works thereof in any medium, with or without
- modifications, and in Source or Object form, provided that You
- meet the following conditions:
-
- (a) You must give any other recipients of the Work or
- Derivative Works a copy of this License; and
-
- (b) You must cause any modified files to carry prominent notices
- stating that You changed the files; and
-
- (c) You must retain, in the Source form of any Derivative Works
- that You distribute, all copyright, patent, trademark, and
- attribution notices from the Source form of the Work,
- excluding those notices that do not pertain to any part of
- the Derivative Works; and
-
- (d) If the Work includes a "NOTICE" text file as part of its
- distribution, then any Derivative Works that You distribute must
- include a readable copy of the attribution notices contained
- within such NOTICE file, excluding those notices that do not
- pertain to any part of the Derivative Works, in at least one
- of the following places: within a NOTICE text file distributed
- as part of the Derivative Works; within the Source form or
- documentation, if provided along with the Derivative Works; or,
- within a display generated by the Derivative Works, if and
- wherever such third-party notices normally appear. The contents
- of the NOTICE file are for informational purposes only and
- do not modify the License. You may add Your own attribution
- notices within Derivative Works that You distribute, alongside
- or as an addendum to the NOTICE text from the Work, provided
- that such additional attribution notices cannot be construed
- as modifying the License.
-
- You may add Your own copyright statement to Your modifications and
- may provide additional or different license terms and conditions
- for use, reproduction, or distribution of Your modifications, or
- for any such Derivative Works as a whole, provided Your use,
- reproduction, and distribution of the Work otherwise complies with
- the conditions stated in this License.
-
- 5. Submission of Contributions. Unless You explicitly state otherwise,
- any Contribution intentionally submitted for inclusion in the Work
- by You to the Licensor shall be under the terms and conditions of
- this License, without any additional terms or conditions.
- Notwithstanding the above, nothing herein shall supersede or modify
- the terms of any separate license agreement you may have executed
- with Licensor regarding such Contributions.
-
- 6. Trademarks. This License does not grant permission to use the trade
- names, trademarks, service marks, or product names of the Licensor,
- except as required for reasonable and customary use in describing the
- origin of the Work and reproducing the content of the NOTICE file.
-
- 7. Disclaimer of Warranty. Unless required by applicable law or
- agreed to in writing, Licensor provides the Work (and each
- Contributor provides its Contributions) on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- implied, including, without limitation, any warranties or conditions
- of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
- PARTICULAR PURPOSE. You are solely responsible for determining the
- appropriateness of using or redistributing the Work and assume any
- risks associated with Your exercise of permissions under this License.
-
- 8. Limitation of Liability. In no event and under no legal theory,
- whether in tort (including negligence), contract, or otherwise,
- unless required by applicable law (such as deliberate and grossly
- negligent acts) or agreed to in writing, shall any Contributor be
- liable to You for damages, including any direct, indirect, special,
- incidental, or consequential damages of any character arising as a
- result of this License or out of the use or inability to use the
- Work (including but not limited to damages for loss of goodwill,
- work stoppage, computer failure or malfunction, or any and all
- other commercial damages or losses), even if such Contributor
- has been advised of the possibility of such damages.
-
- 9. Accepting Warranty or Additional Liability. While redistributing
- the Work or Derivative Works thereof, You may choose to offer,
- and charge a fee for, acceptance of support, warranty, indemnity,
- or other liability obligations and/or rights consistent with this
- License. However, in accepting such obligations, You may act only
- on Your own behalf and on Your sole responsibility, not on behalf
- of any other Contributor, and only if You agree to indemnify,
- defend, and hold each Contributor harmless for any liability
- incurred by, or claims asserted against, such Contributor by reason
- of your accepting any such warranty or additional liability.
-
- END OF TERMS AND CONDITIONS
-
- APPENDIX: How to apply the Apache License to your work.
-
- To apply the Apache License to your work, attach the following
- boilerplate notice, with the fields enclosed by brackets "[]"
- replaced with your own identifying information. (Don't include
- the brackets!) The text should be enclosed in the appropriate
- comment syntax for the file format. We also recommend that a
- file or class name and description of purpose be included on the
- same "printed page" as the copyright notice for easier
- identification within third-party archives.
-
- Copyright [yyyy] [name of copyright owner]
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-
--- a/origin-src/transitfeed-1.2.5/INSTALL
+++ /dev/null
@@ -1,22 +1,1 @@
-INSTALL file for transitfeed distribution
-
-
-To download and install in one step, make sure you have easy_install installed and run
-easy_install transitfeed
-
-
-
-Since you got this far chances are you have downloaded a copy of the source
-code. Install with the command
-
-python setup.py install
-
-
-
-If you don't want to install you may be able to run the scripts from this
-directory. For example, try running
-
-./feedvalidator.py -n test/data/good_feed.zip
-
-
--- a/origin-src/transitfeed-1.2.5/PKG-INFO
+++ /dev/null
@@ -1,21 +1,1 @@
-Metadata-Version: 1.0
-Name: transitfeed
-Version: 1.2.5
-Summary: Google Transit Feed Specification library and tools
-Home-page: http://code.google.com/p/googletransitdatafeed/
-Author: Tom Brown
-Author-email: tom.brown.code@gmail.com
-License: Apache License, Version 2.0
-Download-URL: http://googletransitdatafeed.googlecode.com/files/transitfeed-1.2.5.tar.gz
-Description: This module provides a library for reading, writing and validating Google Transit Feed Specification files. It includes scripts that validate a feed, display it using the Google Maps API, and provide the start of a KML importer and exporter.
-Platform: OS Independent
-Classifier: Development Status :: 4 - Beta
-Classifier: Intended Audience :: Developers
-Classifier: Intended Audience :: Information Technology
-Classifier: Intended Audience :: Other Audience
-Classifier: License :: OSI Approved :: Apache Software License
-Classifier: Operating System :: OS Independent
-Classifier: Programming Language :: Python
-Classifier: Topic :: Scientific/Engineering :: GIS
-Classifier: Topic :: Software Development :: Libraries :: Python Modules
--- a/origin-src/transitfeed-1.2.5/README
+++ /dev/null
@@ -1,19 +1,1 @@
-README file for transitfeed distribution
-
-
-This distribution contains a library to help you parse and generate Google
-Transit Feed files. It also contains some sample tools that demonstrate the
-library and are useful in their own right when maintaining Google
-Transit Feed files. You may fetch the specification from
-http://code.google.com/transit/spec/transit_feed_specification.htm
-
-
-See INSTALL for installation instructions
-
-The most recent source can be downloaded from our subversion repository at
-http://googletransitdatafeed.googlecode.com/svn/trunk/python/
-
-See http://code.google.com/p/googletransitdatafeed/wiki/TransitFeedDistribution
-for more information.
-
--- a/origin-src/transitfeed-1.2.5/build/lib/gtfsscheduleviewer/__init__.py
+++ /dev/null
@@ -1,9 +1,1 @@
-__doc__ = """
-Package holding files for Google Transit Feed Specification Schedule Viewer.
-"""
-# This package contains the data files for schedule_viewer.py, a script that
-# comes with the transitfeed distribution. According to the thread
-# "[Distutils] distutils data_files and setuptools.pkg_resources are driving
-# me crazy" this is the easiest way to include data files. My experience
-# agrees. - Tom 2007-05-29
--- a/origin-src/transitfeed-1.2.5/build/lib/gtfsscheduleviewer/files/index.html
+++ /dev/null
@@ -1,706 +1,1 @@
-<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
- "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
-<html xmlns="http://www.w3.org/1999/xhtml" xmlns:v="urn:schemas-microsoft-com:vml">
- <head>
- <meta http-equiv="content-type" content="text/html; charset=utf-8"/>
- <title>[agency]</title>
- <link href="file/style.css" rel="stylesheet" type="text/css" />
- <style type="text/css">
- v\:* {
- behavior:url(#default#VML);
- }
- </style>
- <script src="http://[host]/maps?file=api&v=2&key=[key]" type="text/javascript"></script>
- <script src="/file/labeled_marker.js" type="text/javascript"></script>
- <script language="VBScript" src="/file/svgcheck.vbs"></script>
- <script type="text/javascript">
- //<![CDATA[
- var map;
- // Set to true when debugging for log statements about HTTP requests.
- var log = false;
- var twelveHourTime = false; // set to true to see AM/PM
- var selectedRoute = null;
- var forbid_editing = [forbid_editing];
- function load() {
- if (GBrowserIsCompatible()) {
- sizeRouteList();
- var map_dom = document.getElementById("map");
- map = new GMap2(map_dom);
- map.addControl(new GLargeMapControl());
- map.addControl(new GMapTypeControl());
- map.addControl(new GOverviewMapControl());
- map.enableScrollWheelZoom();
- var bb = new GLatLngBounds(new GLatLng([min_lat], [min_lon]),new GLatLng([max_lat], [max_lon]));
- map.setCenter(bb.getCenter(), map.getBoundsZoomLevel(bb));
- map.enableDoubleClickZoom();
- initIcons();
- GEvent.addListener(map, "moveend", callbackMoveEnd);
- GEvent.addListener(map, "zoomend", callbackZoomEnd);
- callbackMoveEnd(); // Pretend we just moved to current center
- fetchRoutes();
- }
- }
-
- function callbackZoomEnd() {
- }
-
- function callbackMoveEnd() {
- // Map moved, search for stops near the center
- fetchStopsInBounds(map.getBounds());
- }
-
- /**
- * Fetch a sample of stops in the bounding box.
- */
- function fetchStopsInBounds(bounds) {
- url = "/json/boundboxstops?n=" + bounds.getNorthEast().lat()
- + "&e=" + bounds.getNorthEast().lng()
- + "&s=" + bounds.getSouthWest().lat()
- + "&w=" + bounds.getSouthWest().lng()
- + "&limit=50";
- if (log)
- GLog.writeUrl(url);
- GDownloadUrl(url, callbackDisplayStopsBackground);
- }
-
- /**
- * Displays stops returned by the server on the map. Expected to be called
- * when GDownloadUrl finishes.
- *
- * @param {String} data JSON encoded list of list, each
- * containing a row of stops.txt
- * @param {Number} responseCode Response code from server
- */
- function callbackDisplayStops(data, responseCode) {
- if (responseCode != 200) {
- return;
- }
- clearMap();
- var stops = eval(data);
- if (stops.length == 1) {
- var marker = addStopMarkerFromList(stops[0], true);
- fetchStopInfoWindow(marker);
- } else {
- for (var i=0; i<stops.length; ++i) {
- addStopMarkerFromList(stops[i], true);
- }
- }
- }
-
- function stopTextSearchSubmit() {
- var text = document.getElementById("stopTextSearchInput").value;
- var url = "/json/stopsearch?q=" + text; // TODO URI escape
- if (log)
- GLog.writeUrl(url);
- GDownloadUrl(url, callbackDisplayStops);
- }
-
- function tripTextSearchSubmit() {
- var text = document.getElementById("tripTextSearchInput").value;
- selectTrip(text);
- }
-
- /**
- * Add stops markers to the map and remove stops no longer in the
- * background.
- */
- function callbackDisplayStopsBackground(data, responseCode) {
- if (responseCode != 200) {
- return;
- }
- var stops = eval(data);
- // Make a list of all background markers
- var oldStopMarkers = {};
- for (var stopId in stopMarkersBackground) {
- oldStopMarkers[stopId] = 1;
- }
- // Add new markers to the map and remove from oldStopMarkers
- for (var i=0; i<stops.length; ++i) {
- var marker = addStopMarkerFromList(stops[i], false);
- if (oldStopMarkers[marker.stopId]) {
- delete oldStopMarkers[marker.stopId];
- }
- }
- // Delete all markers that remain in oldStopMarkers
- for (var stopId in oldStopMarkers) {
- GEvent.removeListener(stopMarkersBackground[stopId].clickListener);
- map.removeOverlay(stopMarkersBackground[stopId]);
-        delete stopMarkersBackground[stopId];
- }
- }
-
- /**
- * Remove all overlays from the map
- */
- function clearMap() {
- boundsOfPolyLine = null;
- for (var stopId in stopMarkersSelected) {
- GEvent.removeListener(stopMarkersSelected[stopId].clickListener);
- }
- for (var stopId in stopMarkersBackground) {
- GEvent.removeListener(stopMarkersBackground[stopId].clickListener);
- }
- stopMarkersSelected = {};
- stopMarkersBackground = {};
- map.clearOverlays();
- }
-
- /**
- * Return a new GIcon used for stops
- */
- function makeStopIcon() {
- var icon = new GIcon();
- icon.iconSize = new GSize(12, 20);
- icon.shadowSize = new GSize(22, 20);
- icon.iconAnchor = new GPoint(6, 20);
- icon.infoWindowAnchor = new GPoint(5, 1);
- return icon;
- }
-
- /**
- * Initialize icons. Call once during load.
- */
- function initIcons() {
- iconSelected = makeStopIcon();
- iconSelected.image = "/file/mm_20_yellow.png";
- iconSelected.shadow = "/file/mm_20_shadow.png";
- iconBackground = makeStopIcon();
- iconBackground.image = "/file/mm_20_blue_trans.png";
- iconBackground.shadow = "/file/mm_20_shadow_trans.png";
- iconBackgroundStation = makeStopIcon();
- iconBackgroundStation.image = "/file/mm_20_red_trans.png";
- iconBackgroundStation.shadow = "/file/mm_20_shadow_trans.png";
- }
-
- var iconSelected;
- var iconBackground;
- var iconBackgroundStation;
- // Map from stopId to GMarker object for stops selected because they are
- // part of a trip, etc
- var stopMarkersSelected = {};
- // Map from stopId to GMarker object for stops found by the background
- // passive search
- var stopMarkersBackground = {};
- /**
- * Add a stop to the map, given a row from stops.txt.
- */
- function addStopMarkerFromList(list, selected, text) {
- return addStopMarker(list[0], list[1], list[2], list[3], list[4], selected, text);
- }
-
- /**
- * Add a stop to the map, returning the new marker
- */
- function addStopMarker(stopId, stopName, stopLat, stopLon, locationType, selected, text) {
- if (stopMarkersSelected[stopId]) {
- // stop was selected
- var marker = stopMarkersSelected[stopId];
- if (text) {
- oldText = marker.getText();
- if (oldText) {
- oldText = oldText + "<br>";
- }
- marker.setText(oldText + text);
- }
- return marker;
- }
- if (stopMarkersBackground[stopId]) {
- // Stop was in the background. Either delete it from the background or
- // leave it where it is.
- if (selected) {
- map.removeOverlay(stopMarkersBackground[stopId]);
- delete stopMarkersBackground[stopId];
- } else {
- return stopMarkersBackground[stopId];
- }
- }
-
- var icon;
- if (selected) {
- icon = iconSelected;
- } else if (locationType == 1) {
-        icon = iconBackgroundStation;
- } else {
- icon = iconBackground;
- }
- var ll = new GLatLng(stopLat,stopLon);
- var marker;
- if (selected || text) {
- if (!text) {
- text = ""; // Make sure every selected icon has a text box, even if empty
- }
- var markerOpts = new Object();
- markerOpts.icon = icon;
- markerOpts.labelText = text;
- markerOpts.labelClass = "tooltip";
- markerOpts.labelOffset = new GSize(6, -20);
- marker = new LabeledMarker(ll, markerOpts);
- } else {
- marker = new GMarker(ll, {icon: icon, draggable: !forbid_editing});
- }
- marker.stopName = stopName;
- marker.stopId = stopId;
- if (selected) {
- stopMarkersSelected[stopId] = marker;
- } else {
- stopMarkersBackground[stopId] = marker;
- }
- map.addOverlay(marker);
- marker.clickListener = GEvent.addListener(marker, "click", function() {fetchStopInfoWindow(marker);});
- GEvent.addListener(marker, "dragend", function() {
-
- document.getElementById("edit").style.visibility = "visible";
-        document.getElementById("edit_status").innerHTML = "updating...";
- changeStopLocation(marker);
- });
- return marker;
- }
-
- /**
- * Sends new location of a stop to server.
- */
- function changeStopLocation(marker) {
- var url = "/json/setstoplocation?id=" +
- encodeURIComponent(marker.stopId) +
- "&lat=" + encodeURIComponent(marker.getLatLng().lat()) +
- "&lng=" + encodeURIComponent(marker.getLatLng().lng());
- GDownloadUrl(url, function(data, responseCode) {
- document.getElementById("edit_status").innerHTML = unescape(data);
- } );
- if (log)
- GLog.writeUrl(url);
- }
-
- /**
- * Saves the current state of the data file opened at server side to file.
- */
- function saveData() {
- var url = "/json/savedata";
- GDownloadUrl(url, function(data, responseCode) {
- document.getElementById("edit_status").innerHTML = data;} );
- if (log)
- GLog.writeUrl(url);
- }
-
- /**
- * Fetch the next departing trips from the stop for display in an info
- * window.
- */
- function fetchStopInfoWindow(marker) {
- var url = "/json/stoptrips?stop=" + encodeURIComponent(marker.stopId) + "&time=" + parseTimeInput();
- GDownloadUrl(url, function(data, responseCode) {
- callbackDisplayStopInfoWindow(marker, data, responseCode); } );
- if (log)
- GLog.writeUrl(url);
- }
-
- function callbackDisplayStopInfoWindow(marker, data, responseCode) {
- if (responseCode != 200) {
- return;
- }
- var timeTrips = eval(data);
- var html = "<b>" + marker.stopName + "</b> (" + marker.stopId + ")<br>";
- var latLng = marker.getLatLng();
- html = html + "(" + latLng.lat() + ", " + latLng.lng() + ")<br>";
- html = html + "<table><tr><th>service_id<th>time<th>name</tr>";
- for (var i=0; i < timeTrips.length; ++i) {
- var time = timeTrips[i][0];
- var tripid = timeTrips[i][1][0];
- var tripname = timeTrips[i][1][1];
- var service_id = timeTrips[i][1][2];
- var timepoint = timeTrips[i][2];
- html = html + "<tr onClick='map.closeInfoWindow();selectTrip(\"" +
- tripid + "\")'>" +
- "<td>" + service_id +
- "<td align='right'>" + (timepoint ? "" : "~") +
- formatTime(time) + "<td>" + tripname + "</tr>";
- }
- html = html + "</table>";
- marker.openInfoWindowHtml(html);
- }
-
- function leadingZero(digit) {
- if (digit < 10)
- return "0" + digit;
- else
- return "" + digit;
- }
-
- function formatTime(secSinceMidnight) {
- var hours = Math.floor(secSinceMidnight / 3600);
- var suffix = "";
-
- if (twelveHourTime) {
- suffix = (hours >= 12) ? "p" : "a";
- suffix += (hours >= 24) ? " next day" : "";
- hours = hours % 12;
- if (hours == 0)
- hours = 12;
- }
- var minutes = Math.floor(secSinceMidnight / 60) % 60;
- var seconds = secSinceMidnight % 60;
- if (seconds == 0) {
- return hours + ":" + leadingZero(minutes) + suffix;
- } else {
- return hours + ":" + leadingZero(minutes) + ":" + leadingZero(seconds) + suffix;
- }
- }
-
- function parseTimeInput() {
- var text = document.getElementById("timeInput").value;
- var m = text.match(/([012]?\d):([012345]?\d)(:([012345]?\d))?/);
- if (m) {
- var seconds = parseInt(m[1], 10) * 3600;
- seconds += parseInt(m[2], 10) * 60;
- if (m[4]) {
-        seconds += parseInt(m[4], 10);
- }
- return seconds;
- } else {
- if (log)
- GLog.write("Couldn't match " + text);
- }
- }
-
- /**
- * Create a string of dots that gets longer with the log of count.
- */
- function countToRepeatedDots(count) {
- // Find ln_2(count) + 1
- var logCount = Math.ceil(Math.log(count) / 0.693148) + 1;
- return new Array(logCount + 1).join(".");
- }
-
- function fetchRoutes() {
- url = "/json/routes";
- if (log)
- GLog.writeUrl(url);
- GDownloadUrl(url, callbackDisplayRoutes);
- }
-
- function callbackDisplayRoutes(data, responseCode) {
- if (responseCode != 200) {
-      return;
- }
- var routes = eval(data);
- var routesList = document.getElementById("routeList");
- while (routesList.hasChildNodes()) {
- routesList.removeChild(routesList.firstChild);
- }
- for (i = 0; i < routes.length; ++i) {
- var routeId = routes[i][0];
- var shortName = document.createElement("span");
- shortName.className = "shortName";
- shortName.appendChild(document.createTextNode(routes[i][1] + " "));
- var routeName = routes[i][2];
- var elem = document.createElement("div");
- elem.appendChild(shortName);
- elem.appendChild(document.createTextNode(routeName));
- elem.id = "route_" + routeId;
- elem.className = "routeChoice";
- elem.title = routeName;
- GEvent.addDomListener(elem, "click", makeClosure(selectRoute, routeId));
-
- var routeContainer = document.createElement("div");
- routeContainer.id = "route_container_" + routeId;
- routeContainer.className = "routeContainer";
- routeContainer.appendChild(elem);
- routesList.appendChild(routeContainer);
- }
- }
-
- function selectRoute(routeId) {
- var routesList = document.getElementById("routeList");
- routeSpans = routesList.getElementsByTagName("div");
- for (var i = 0; i < routeSpans.length; ++i) {
- if (routeSpans[i].className == "routeChoiceSelected") {
- routeSpans[i].className = "routeChoice";
- }
- }
-
- // remove any previously-expanded route
- var tripInfo = document.getElementById("tripInfo");
- if (tripInfo)
- tripInfo.parentNode.removeChild(tripInfo);
-
- selectedRoute = routeId;
- var span = document.getElementById("route_" + routeId);
- span.className = "routeChoiceSelected";
- fetchPatterns(routeId);
- }
-
- function fetchPatterns(routeId) {
- url = "/json/routepatterns?route=" + encodeURIComponent(routeId) + "&time=" + parseTimeInput();
- if (log)
- GLog.writeUrl(url);
- GDownloadUrl(url, callbackDisplayPatterns);
- }
-
- function callbackDisplayPatterns(data, responseCode) {
- if (responseCode != 200) {
- return;
- }
- var div = document.createElement("div");
- div.className = "tripSection";
- div.id = "tripInfo";
- var firstTrip = null;
- var patterns = eval(data);
- clearMap();
- for (i = 0; i < patterns.length; ++i) {
-      var patternDiv = document.createElement("div");
-      patternDiv.className = 'patternSection';
-      div.appendChild(patternDiv);
-      var pat = patterns[i]; // [patName, patId, len(early trips), trips, len(later trips), has_non_zero_trip_type]
-      if (pat[5] == '1') {
-        patternDiv.className += " unusualPattern";
- }
- patternDiv.appendChild(document.createTextNode(pat[0]));
- patternDiv.appendChild(document.createTextNode(", " + (pat[2] + pat[3].length + pat[4]) + " trips: "));
- if (pat[2] > 0) {
- patternDiv.appendChild(document.createTextNode(countToRepeatedDots(pat[2]) + " "));
- }
- for (j = 0; j < pat[3].length; ++j) {
- var trip = pat[3][j];
- var tripId = trip[1];
- if ((i == 0) && (j == 0))
- firstTrip = tripId;
- patternDiv.appendChild(document.createTextNode(" "));
- var span = document.createElement("span");
- span.appendChild(document.createTextNode(formatTime(trip[0])));
- span.id = "trip_" + tripId;
- GEvent.addDomListener(span, "click", makeClosure(selectTrip, tripId));
-        patternDiv.appendChild(span);
- span.className = "tripChoice";
- }
- if (pat[4] > 0) {
- patternDiv.appendChild(document.createTextNode(" " + countToRepeatedDots(pat[4])));
- }
- patternDiv.appendChild(document.createElement("br"));
- }
- route = document.getElementById("route_container_" + selectedRoute);
- route.appendChild(div);
-    if (firstTrip != null)
- selectTrip(firstTrip);
- }
-
- // Needed to get around limitation in javascript scope rules.
- // See http://calculist.blogspot.com/2005/12/gotcha-gotcha.html
- function makeClosure(f, a, b, c) {
- return function() { f(a, b, c); };
- }
- function make1ArgClosure(f, a, b, c) {
- return function(x) { f(x, a, b, c); };
- }
- function make2ArgClosure(f, a, b, c) {
- return function(x, y) { f(x, y, a, b, c); };
- }
-
- function selectTrip(tripId) {
- var tripInfo = document.getElementById("tripInfo");
- if (tripInfo) {
- tripSpans = tripInfo.getElementsByTagName('span');
- for (var i = 0; i < tripSpans.length; ++i) {
- tripSpans[i].className = 'tripChoice';
- }
- }
- var span = document.getElementById("trip_" + tripId);
- // Won't find the span if a different route is selected
- if (span) {
- span.className = 'tripChoiceSelected';
- }
- clearMap();
- url = "/json/tripstoptimes?trip=" + encodeURIComponent(tripId);
- if (log)
- GLog.writeUrl(url);
- GDownloadUrl(url, callbackDisplayTripStopTimes);
- fetchTripPolyLine(tripId);
- fetchTripRows(tripId);
- }
-
- function callbackDisplayTripStopTimes(data, responseCode) {
- if (responseCode != 200) {
- return;
- }
- var stopsTimes = eval(data);
- if (!stopsTimes) return;
- displayTripStopTimes(stopsTimes[0], stopsTimes[1]);
- }
-
- function fetchTripPolyLine(tripId) {
- url = "/json/tripshape?trip=" + encodeURIComponent(tripId);
- if (log)
- GLog.writeUrl(url);
- GDownloadUrl(url, callbackDisplayTripPolyLine);
- }
-
- function callbackDisplayTripPolyLine(data, responseCode) {
- if (responseCode != 200) {
- return;
- }
- var points = eval(data);
- if (!points) return;
- displayPolyLine(points);
- }
-
- var boundsOfPolyLine = null;
- function expandBoundingBox(latLng) {
- if (boundsOfPolyLine == null) {
- boundsOfPolyLine = new GLatLngBounds(latLng, latLng);
- } else {
- boundsOfPolyLine.extend(latLng);
- }
- }
-
- /**
- * Display a line given a list of points
- *
- * @param {Array} List of lat,lng pairs
- */
- function displayPolyLine(points) {
- var linePoints = Array();
- for (i = 0; i < points.length; ++i) {
- var ll = new GLatLng(points[i][0], points[i][1]);
- expandBoundingBox(ll);
- linePoints[linePoints.length] = ll;
- }
- var polyline = new GPolyline(linePoints, "#FF0000", 4);
- map.addOverlay(polyline);
- map.setCenter(boundsOfPolyLine.getCenter(), map.getBoundsZoomLevel(boundsOfPolyLine));
- }
-
- function displayTripStopTimes(stops, times) {
- for (i = 0; i < stops.length; ++i) {
- var marker;
- if (times && times[i] != null) {
- marker = addStopMarkerFromList(stops[i], true, formatTime(times[i]));
- } else {
- marker = addStopMarkerFromList(stops[i], true);
- }
- expandBoundingBox(marker.getPoint());
- }
- map.setCenter(boundsOfPolyLine.getCenter(), map.getBoundsZoomLevel(boundsOfPolyLine));
- }
-
- function fetchTripRows(tripId) {
- url = "/json/triprows?trip=" + encodeURIComponent(tripId);
- if (log)
- GLog.writeUrl(url);
- GDownloadUrl(url, make2ArgClosure(callbackDisplayTripRows, tripId));
- }
-
- function callbackDisplayTripRows(data, responseCode, tripId) {
- if (responseCode != 200) {
- return;
- }
- var rows = eval(data);
- if (!rows) return;
- var html = "";
- for (var i = 0; i < rows.length; ++i) {
- var filename = rows[i][0];
- var row = rows[i][1];
- html += "<b>" + filename + "</b>: " + formatDictionary(row) + "<br>";
- }
- html += svgTag("/ttablegraph?height=100&trip=" + tripId, "height='115' width='100%'");
- var bottombarDiv = document.getElementById("bottombar");
- bottombarDiv.style.display = "block";
- bottombarDiv.style.height = "175px";
- bottombarDiv.innerHTML = html;
- sizeRouteList();
- }
-
- /**
- * Return HTML to embed a SVG object in this page. src is the location of
- * the SVG and attributes is inserted directly into the object or embed
- * tag.
- */
- function svgTag(src, attributes) {
- if (navigator.userAgent.toLowerCase().indexOf("msie") != -1) {
- if (isSVGControlInstalled()) {
- return "<embed pluginspage='http://www.adobe.com/svg/viewer/install/' src='" + src + "' " + attributes +"></embed>";
- } else {
- return "<p>Please install the <a href='http://www.adobe.com/svg/viewer/install/'>Adobe SVG Viewer</a> to get SVG support in IE</p>";
- }
- } else {
- return "<object data='" + src + "' type='image/svg+xml' " + attributes + "><p>No SVG support in your browser. Try Firefox 1.5 or newer or install the <a href='http://www.adobe.com/svg/viewer/install/'>Adobe SVG Viewer</a></p></object>";
- }
- }
-
- /**
- * Format an Array object containing key-value pairs into a human readable
- * string.
- */
- function formatDictionary(d) {
- var output = "";
- var first = 1;
- for (var k in d) {
- if (first) {
- first = 0;
- } else {
- output += " ";
- }
- output += "<b>" + k + "</b>=" + d[k];
- }
- return output;
- }
-
-
- function windowHeight() {
- // Standard browsers (Mozilla, Safari, etc.)
- if (self.innerHeight)
- return self.innerHeight;
- // IE 6
- if (document.documentElement && document.documentElement.clientHeight)
- return document.documentElement.clientHeight;
- // IE 5
- if (document.body)
- return document.body.clientHeight;
- // Just in case.
- return 0;
- }
-
- function sizeRouteList() {
- var bottombarHeight = 0;
- var bottombarDiv = document.getElementById('bottombar');
- if (bottombarDiv.style.display != 'none') {
- bottombarHeight = document.getElementById('bottombar').offsetHeight
- + document.getElementById('bottombar').style.marginTop;
- }
- var height = windowHeight() - document.getElementById('topbar').offsetHeight - 15 - bottombarHeight;
- document.getElementById('content').style.height = height + 'px';
- if (map) {
- // Without this displayPolyLine does not use the correct map size
- map.checkResize();
- }
- }
-
- //]]>
- </script>
- </head>
-
-<body class='sidebar-left' onload="load();" onunload="GUnload()" onresize="sizeRouteList()">
-<div id='topbar'>
-<div id="edit">
- <span id="edit_status">...</span>
- <form onSubmit="saveData(); return false;"><input value="Save" type="submit">
-</div>
-<div id="agencyHeader">[agency]</div>
-</div>
-<div id='content'>
- <div id='sidebar-wrapper'><div id='sidebar'>
- Time: <input type="text" value="8:00" width="9" id="timeInput"><br>
- <form onSubmit="stopTextSearchSubmit(); return false;">
- Find Station: <input type="text" id="stopTextSearchInput"><input value="Search" type="submit"></form><br>
- <form onSubmit="tripTextSearchSubmit(); return false;">
- Find Trip ID: <input type="text" id="tripTextSearchInput"><input value="Search" type="submit"></form><br>
- <div id="routeList">routelist</div>
- </div></div>
-
- <div id='map-wrapper'> <div id='map'></div> </div>
-</div>
-
-<div id='bottombar'>bottom bar</div>
-
-</body>
-</html>
-
--- a/origin-src/transitfeed-1.2.5/build/lib/gtfsscheduleviewer/files/labeled_marker.js
+++ /dev/null
@@ -1,186 +1,1 @@
-/*
-* LabeledMarker Class
-*
-* Copyright 2007 Mike Purvis (http://uwmike.com)
-*
-* Licensed under the Apache License, Version 2.0 (the "License");
-* you may not use this file except in compliance with the License.
-* You may obtain a copy of the License at
-*
-* http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing, software
-* distributed under the License is distributed on an "AS IS" BASIS,
-* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-* See the License for the specific language governing permissions and
-* limitations under the License.
-*
-* This class extends the Maps API's standard GMarker class with the ability
-* to support markers with textual labels. Please see articles here:
-*
-* http://googlemapsbook.com/2007/01/22/extending-gmarker/
-* http://googlemapsbook.com/2007/03/06/clickable-labeledmarker/
-*/
-/**
- * Constructor for LabeledMarker, which picks up on strings from the GMarker
- * options array, and then calls the GMarker constructor.
- *
- * @param {GLatLng} latlng
- * @param {GMarkerOptions} Named optional arguments:
- * opt_opts.labelText {String} text to place in the overlay div.
- * opt_opts.labelClass {String} class to use for the overlay div.
- * (default "markerLabel")
- * opt_opts.labelOffset {GSize} label offset, the x- and y-distance between
- * the marker's latlng and the upper-left corner of the text div.
- */
-function LabeledMarker(latlng, opt_opts){
- this.latlng_ = latlng;
- this.opts_ = opt_opts;
-
- this.initText_ = opt_opts.labelText || "";
- this.labelClass_ = opt_opts.labelClass || "markerLabel";
- this.labelOffset_ = opt_opts.labelOffset || new GSize(0, 0);
-
- this.clickable_ = opt_opts.clickable || true;
-
- if (opt_opts.draggable) {
- // This version of LabeledMarker doesn't support dragging.
- opt_opts.draggable = false;
- }
-
- GMarker.apply(this, arguments);
-}
-
-
-// It's a limitation of JavaScript inheritance that we can't conveniently
-// inherit from GMarker without having to run its constructor. In order for
-// the constructor to run, it requires some dummy GLatLng.
-LabeledMarker.prototype = new GMarker(new GLatLng(0, 0));
-
-/**
- * Is called by GMap2's addOverlay method. Creates the text div and adds it
- * to the relevant parent div.
- *
- * @param {GMap2} map the map that has had this labeledmarker added to it.
- */
-LabeledMarker.prototype.initialize = function(map) {
- // Do the GMarker constructor first.
- GMarker.prototype.initialize.apply(this, arguments);
-
- this.map_ = map;
- this.setText(this.initText_);
-}
-
-/**
- * Create a new div for this label.
- */
-LabeledMarker.prototype.makeDiv_ = function(map) {
- if (this.div_) {
- return;
- }
- this.div_ = document.createElement("div");
- this.div_.className = this.labelClass_;
- this.div_.style.position = "absolute";
- this.div_.style.cursor = "pointer";
- this.map_.getPane(G_MAP_MARKER_PANE).appendChild(this.div_);
-
- if (this.clickable_) {
- /**
- * Creates a closure for passing events through to the source marker
- * This is located in here to avoid cluttering the global namespace.
- * The downside is that the local variables from initialize() continue
- * to occupy space on the stack.
- *
- * @param {Object} object to receive event trigger.
- * @param {GEventListener} event to be triggered.
- */
- function newEventPassthru(obj, event) {
- return function() {
- GEvent.trigger(obj, event);
- };
- }
-
- // Pass through events fired on the text div to the marker.
- var eventPassthrus = ['click', 'dblclick', 'mousedown', 'mouseup', 'mouseover', 'mouseout'];
- for(var i = 0; i < eventPassthrus.length; i++) {
- var name = eventPassthrus[i];
- GEvent.addDomListener(this.div_, name, newEventPassthru(this, name));
- }
- }
-}
-
-/**
- * Return the html in the div of this label, or "" if none is set
- */
-LabeledMarker.prototype.getText = function(text) {
- if (this.div_) {
- return this.div_.innerHTML;
- } else {
- return "";
- }
-}
-
-/**
- * Set the html in the div of this label to text. If text is "" or null remove
- * the div.
- */
-LabeledMarker.prototype.setText = function(text) {
- if (this.div_) {
- if (text) {
- this.div_.innerHTML = text;
- } else {
- // remove div
- GEvent.clearInstanceListeners(this.div_);
- this.div_.parentNode.removeChild(this.div_);
- this.div_ = null;
- }
- } else {
- if (text) {
- this.makeDiv_();
- this.div_.innerHTML = text;
- this.redraw();
- }
- }
-}
-
-/**
- * Move the text div based on current projection and zoom level, call the redraw()
- * handler in GMarker.
- *
- * @param {Boolean} force will be true when pixel coordinates need to be recomputed.
- */
-LabeledMarker.prototype.redraw = function(force) {
- GMarker.prototype.redraw.apply(this, arguments);
-
- if (this.div_) {
- // Calculate the DIV coordinates of two opposite corners of our bounds to
- // get the size and position of our rectangle
- var p = this.map_.fromLatLngToDivPixel(this.latlng_);
- var z = GOverlay.getZIndex(this.latlng_.lat());
-
- // Now position our div based on the div coordinates of our bounds
- this.div_.style.left = (p.x + this.labelOffset_.width) + "px";
- this.div_.style.top = (p.y + this.labelOffset_.height) + "px";
- this.div_.style.zIndex = z; // in front of the marker
- }
-}
-
-/**
- * Remove the text div from the map pane, destroy event passthrus, and calls the
- * default remove() handler in GMarker.
- */
- LabeledMarker.prototype.remove = function() {
- this.setText(null);
- GMarker.prototype.remove.apply(this, arguments);
-}
-
-/**
- * Return a copy of this overlay, for the parent Map to duplicate itself in full. This
- * is part of the Overlay interface and is used, for example, to copy everything in the
- * main view into the mini-map.
- */
-LabeledMarker.prototype.copy = function() {
- return new LabeledMarker(this.latlng_, this.opt_opts_);
-}
-
Binary files a/origin-src/transitfeed-1.2.5/build/lib/gtfsscheduleviewer/files/mm_20_blue.png and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/build/lib/gtfsscheduleviewer/files/mm_20_blue_trans.png and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/build/lib/gtfsscheduleviewer/files/mm_20_red_trans.png and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/build/lib/gtfsscheduleviewer/files/mm_20_shadow.png and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/build/lib/gtfsscheduleviewer/files/mm_20_shadow_trans.png and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/build/lib/gtfsscheduleviewer/files/mm_20_yellow.png and /dev/null differ
--- a/origin-src/transitfeed-1.2.5/build/lib/gtfsscheduleviewer/files/style.css
+++ /dev/null
@@ -1,162 +1,1 @@
-html { overflow: hidden; }
-html, body {
- margin: 0;
- padding: 0;
- height: 100%;
-}
-
-body { margin: 5px; }
-
-#content {
- position: relative;
- margin-top: 5px;
-}
-
-#map-wrapper {
- position: relative;
- height: 100%;
- width: auto;
- left: 0;
- top: 0;
- z-index: 100;
-}
-
-#map {
- position: relative;
- height: 100%;
- width: auto;
- border: 1px solid #aaa;
-}
-
-#sidebar-wrapper {
- position: absolute;
- height: 100%;
- width: 220px;
- top: 0;
- border: 1px solid #aaa;
- overflow: auto;
- z-index: 300;
-}
-
-#sidebar {
- position: relative;
- width: auto;
- padding: 4px;
- overflow: hidden;
-}
-
-#topbar {
- position: relative;
- padding: 2px;
- border: 1px solid #aaa;
- margin: 0;
-}
-
-#topbar h1 {
- white-space: nowrap;
- overflow: hidden;
- font-size: 14pt;
- font-weight: bold;
- font-face:
- margin: 0;
-}
-
-
-body.sidebar-right #map-wrapper { margin-right: 229px; }
-body.sidebar-right #sidebar-wrapper { right: 0; }
-
-body.sidebar-left #map { margin-left: 229px; }
-body.sidebar-left #sidebar { left: 0; }
-
-body.nosidebar #map { margin: 0; }
-body.nosidebar #sidebar { display: none; }
-
-#bottombar {
- position: relative;
- padding: 2px;
- border: 1px solid #aaa;
- margin-top: 5px;
- display: none;
-}
-
-/* holly hack for IE to get position:bottom right
- see: http://www.positioniseverything.net/abs_relbugs.html
- \*/
-* html #topbar { height: 1px; }
-/* */
-
-body {
- font-family:helvetica,arial,sans, sans-serif;
-}
-h1 {
- margin-top: 0.5em;
- margin-bottom: 0.5em;
-}
-h2 {
- margin-top: 0.2em;
- margin-bottom: 0.2em;
-}
-h3 {
- margin-top: 0.2em;
- margin-bottom: 0.2em;
-}
-.tooltip {
- white-space: nowrap;
- padding: 2px;
- color: black;
- font-size: 12px;
- background-color: white;
- border: 1px solid black;
- cursor: pointer;
- filter:alpha(opacity=60);
- -moz-opacity: 0.6;
- opacity: 0.6;
-}
-#routeList {
- border: 1px solid black;
- overflow: auto;
-}
-.shortName {
- font-size: bigger;
- font-weight: bold;
-}
-.routeChoice,.tripChoice,.routeChoiceSelected,.tripChoiceSelected {
- white-space: nowrap;
- cursor: pointer;
- padding: 0px 2px;
- color: black;
- line-height: 1.4em;
- font-size: smaller;
- overflow: hidden;
-}
-.tripChoice {
- color: blue;
-}
-.routeChoiceSelected,.tripChoiceSelected {
- background-color: blue;
- color: white;
-}
-.tripSection {
- padding-left: 0px;
- font-size: 10pt;
- background-color: lightblue;
-}
-.patternSection {
- margin-left: 8px;
- padding-left: 2px;
- border-bottom: 1px solid grey;
-}
-.unusualPattern {
- background-color: #aaa;
- color: #444;
-}
-/* Following styles are used by location_editor.py */
-#edit {
- visibility: hidden;
- float: right;
- font-size: 80%;
-}
-#edit form {
- display: inline;
-}
--- a/origin-src/transitfeed-1.2.5/build/lib/gtfsscheduleviewer/files/svgcheck.vbs
+++ /dev/null
@@ -1,8 +1,1 @@
-' Copyright 1999-2000 Adobe Systems Inc. All rights reserved. Permission to redistribute
-' granted provided that this file is not modified in any way. This file is provided with
-' absolutely no warranties of any kind.
-Function isSVGControlInstalled()
- on error resume next
- isSVGControlInstalled = IsObject(CreateObject("Adobe.SVGCtl"))
-end Function
--- a/origin-src/transitfeed-1.2.5/build/lib/gtfsscheduleviewer/marey_graph.py
+++ /dev/null
@@ -1,470 +1,1 @@
-#!/usr/bin/python2.5
-#
-# Copyright (C) 2007 Google Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Output svg/xml data for a marey graph
-
-Marey graphs are a visualization form typically used for timetables. Time
-is on the x-axis and position on the y-axis. This module reads data from a
-transitfeed.Schedule and creates a marey graph in svg/xml format. The graph
-shows the speed between stops for each trip of a route.
-
-TODO: This module was taken from an internal Google tool. It works but is not
-well intergrated into transitfeed and schedule_viewer. Also, it has lots of
-ugly hacks to compensate set canvas size and so on which could be cleaned up.
-
-For a little more information see (I didn't make this URL ;-)
-http://transliteracies.english.ucsb.edu/post/research-project/research-clearinghouse-individual/research-reports/the-indexical-imagination-marey%e2%80%99s-graphic-method-and-the-technological-transformation-of-writing-in-the-nineteenth-century
-
- MareyGraph: Class, keeps cache of graph data and graph properties
- and draws marey graphs in svg/xml format on request.
-
-"""
-
-import itertools
-import transitfeed
-
-
-class MareyGraph:
- """Produces and caches marey graph from transit feed data."""
-
- _MAX_ZOOM = 5.0 # change docstring of ChangeScaleFactor if this changes
- _DUMMY_SEPARATOR = 10 #pixel
-
- def __init__(self):
- # Timetablerelated state
- self._cache = str()
- self._stoplist = []
- self._tlist = []
- self._stations = []
- self._decorators = []
-
- # TODO: Initialize default values via constructor parameters
- # or via a class constants
-
- # Graph properties
- self._tspan = 30 # number of hours to display
- self._offset = 0 # starting hour
- self._hour_grid = 60 # number of pixels for an hour
- self._min_grid = 5 # number of pixels between subhour lines
-
- # Canvas properties
- self._zoomfactor = 0.9 # svg Scaling factor
- self._xoffset = 0 # move graph horizontally
- self._yoffset = 0 # move graph veritcally
- self._bgcolor = "lightgrey"
-
- # height/width of graph canvas before transform
- self._gwidth = self._tspan * self._hour_grid
-
- def Draw(self, stoplist=None, triplist=None, height=520):
- """Main interface for drawing the marey graph.
-
- If called without arguments, the data generated in the previous call
- will be used. New decorators can be added between calls.
-
- Args:
- # Class Stop is defined in transitfeed.py
- stoplist: [Stop, Stop, ...]
- # Class Trip is defined in transitfeed.py
- triplist: [Trip, Trip, ...]
-
- Returns:
- # A string that contain a svg/xml web-page with a marey graph.
- " <svg width="1440" height="520" version="1.1" ... "
- """
- output = str()
- if not triplist:
- triplist = []
- if not stoplist:
- stoplist = []
-
- if not self._cache or triplist or stoplist:
- self._gheight = height
- self._tlist=triplist
- self._slist=stoplist
- self._decorators = []
- self._stations = self._BuildStations(stoplist)
- self._cache = "%s %s %s %s" % (self._DrawBox(),
- self._DrawHours(),
- self._DrawStations(),
- self._DrawTrips(triplist))
-
-
-
- output = "%s %s %s %s" % (self._DrawHeader(),
- self._cache,
- self._DrawDecorators(),
- self._DrawFooter())
- return output
-
- def _DrawHeader(self):
- svg_header = """
- <svg width="%s" height="%s" version="1.1"
- xmlns="http://www.w3.org/2000/svg">
- <script type="text/ecmascript"><![CDATA[
- function init(evt) {
- if ( window.svgDocument == null )
- svgDocument = evt.target.ownerDocument;
- }
- var oldLine = 0;
- var oldStroke = 0;
- var hoffset= %s; // Data from python
-
- function parseLinePoints(pointnode){
- var wordlist = pointnode.split(" ");
- var xlist = new Array();
- var h;
- var m;
- // TODO: add linebreaks as appropriate
- var xstr = " Stop Times :";
- for (i=0;i<wordlist.length;i=i+2){
- var coord = wordlist[i].split(",");
- h = Math.floor(parseInt((coord[0])-20)/60);
- m = parseInt((coord[0]-20))%%60;
- xstr = xstr +" "+ (hoffset+h) +":"+m;
- }
-
- return xstr;
- }
-
- function LineClick(tripid, x) {
- var line = document.getElementById(tripid);
- if (oldLine)
- oldLine.setAttribute("stroke",oldStroke);
- oldLine = line;
- oldStroke = line.getAttribute("stroke");
-
- line.setAttribute("stroke","#fff");
-
- var dynTxt = document.getElementById("dynamicText");
- var tripIdTxt = document.createTextNode(x);
- while (dynTxt.hasChildNodes()){
- dynTxt.removeChild(dynTxt.firstChild);
- }
- dynTxt.appendChild(tripIdTxt);
- }
- ]]> </script>
- <style type="text/css"><![CDATA[
- .T { fill:none; stroke-width:1.5 }
- .TB { fill:none; stroke:#e20; stroke-width:2 }
- .Station { fill:none; stroke-width:1 }
- .Dec { fill:none; stroke-width:1.5 }
- .FullHour { fill:none; stroke:#eee; stroke-width:1 }
- .SubHour { fill:none; stroke:#ddd; stroke-width:1 }
- .Label { fill:#aaa; font-family:Helvetica,Arial,sans;
- text-anchor:middle }
- .Info { fill:#111; font-family:Helvetica,Arial,sans;
- text-anchor:start; }
- ]]></style>
- <text class="Info" id="dynamicText" x="0" y="%d"></text>
- <g id="mcanvas" transform="translate(%s,%s)">
- <g id="zcanvas" transform="scale(%s)">
-
- """ % (self._gwidth + self._xoffset + 20, self._gheight + 15,
- self._offset, self._gheight + 10,
- self._xoffset, self._yoffset, self._zoomfactor)
-
- return svg_header
-
- def _DrawFooter(self):
- return "</g></g></svg>"
-
- def _DrawDecorators(self):
- """Used to draw fancy overlays on trip graphs."""
- return " ".join(self._decorators)
-
- def _DrawBox(self):
- tmpstr = """<rect x="%s" y="%s" width="%s" height="%s"
- fill="lightgrey" stroke="%s" stroke-width="2" />
- """ % (0, 0, self._gwidth + 20, self._gheight, self._bgcolor)
- return tmpstr
-
- def _BuildStations(self, stoplist):
- """Dispatches the best algorithm for calculating station line position.
-
- Args:
- # Class Stop is defined in transitfeed.py
- stoplist: [Stop, Stop, ...]
- # Class Trip is defined in transitfeed.py
- triplist: [Trip, Trip, ...]
-
- Returns:
- # One integer y-coordinate for each station normalized between
- # 0 and X, where X is the height of the graph in pixels
- [0, 33, 140, ... , X]
- """
- stations = []
- dists = self._EuclidianDistances(stoplist)
- stations = self._CalculateYLines(dists)
- return stations
-
- def _EuclidianDistances(self,slist):
- """Calculate euclidian distances between stops.
-
- Uses the stoplists long/lats to approximate distances
- between stations and build a list with y-coordinates for the
- horizontal lines in the graph.
-
- Args:
- # Class Stop is defined in transitfeed.py
- stoplist: [Stop, Stop, ...]
-
- Returns:
- # One integer for each pair of stations
- # indicating the approximate distance
- [0,33,140, ... ,X]
- """
- e_dists2 = [transitfeed.ApproximateDistanceBetweenStops(stop, tail) for
- (stop,tail) in itertools.izip(slist, slist[1:])]
-
- return e_dists2
-
- def _CalculateYLines(self, dists):
- """Builds a list with y-coordinates for the horizontal lines in the graph.
-
- Args:
- # One integer for each pair of stations
- # indicating the approximate distance
- dists: [0,33,140, ... ,X]
-
- Returns:
- # One integer y-coordinate for each station normalized between
- # 0 and X, where X is the height of the graph in pixels
- [0, 33, 140, ... , X]
- """
- tot_dist = sum(dists)
- if tot_dist > 0:
- pixel_dist = [float(d * (self._gheight-20))/tot_dist for d in dists]
- pixel_grid = [0]+[int(pd + sum(pixel_dist[0:i])) for i,pd in
- enumerate(pixel_dist)]
- else:
- pixel_grid = []
-
- return pixel_grid
-
- def _TravelTimes(self,triplist,index=0):
- """ Calculate distances and plot stops.
-
- Uses a timetable to approximate distances
- between stations
-
- Args:
- # Class Trip is defined in transitfeed.py
- triplist: [Trip, Trip, ...]
- # (Optional) Index of Triplist prefered for timetable Calculation
- index: 3
-
- Returns:
- # One integer for each pair of stations
- # indicating the approximate distance
- [0,33,140, ... ,X]
- """
-
- def DistanceInTravelTime(dep_secs, arr_secs):
- t_dist = arr_secs-dep_secs
- if t_dist<0:
- t_dist = self._DUMMY_SEPARATOR # min separation
- return t_dist
-
- if not triplist:
- return []
-
- if 0 < index < len(triplist):
- trip = triplist[index]
- else:
- trip = triplist[0]
-
- t_dists2 = [DistanceInTravelTime(stop[3],tail[2]) for (stop,tail)
- in itertools.izip(trip.GetTimeStops(),trip.GetTimeStops()[1:])]
- return t_dists2
-
- def _AddWarning(self, str):
- print str
-
- def _DrawTrips(self,triplist,colpar=""):
- """Generates svg polylines for each transit trip.
-
- Args:
- # Class Trip is defined in transitfeed.py
- [Trip, Trip, ...]
-
- Returns:
- # A string containing a polyline tag for each trip
- ' <polyline class="T" stroke="#336633" points="433,0 ...'
- """
-
- stations = []
- if not self._stations and triplist:
- self._stations = self._CalculateYLines(self._TravelTimes(triplist))
- if not self._stations:
- self._AddWarning("Failed to use traveltimes for graph")
- self._stations = self._CalculateYLines(self._Uniform(triplist))
- if not self._stations:
- self._AddWarning("Failed to calculate station distances")
- return
-
- stations = self._stations
- tmpstrs = []
- servlist = []
- for t in triplist:
- if not colpar:
- if t.service_id not in servlist:
- servlist.append(t.service_id)
- shade = int(servlist.index(t.service_id) * (200/len(servlist))+55)
- color = "#00%s00" % hex(shade)[2:4]
- else:
- color=colpar
-
- start_offsets = [0]
- first_stop = t.GetTimeStops()[0]
-
- for j,freq_offset in enumerate(start_offsets):
- if j>0 and not colpar:
- color="purple"
- scriptcall = 'onmouseover="LineClick(\'%s\',\'Trip %s starting %s\')"' % (t.trip_id,
- t.trip_id, transitfeed.FormatSecondsSinceMidnight(t.GetStartTime()))
- tmpstrhead = '<polyline class="T" id="%s" stroke="%s" %s points="' % \
- (str(t.trip_id),color, scriptcall)
- tmpstrs.append(tmpstrhead)
-
- for i, s in enumerate(t.GetTimeStops()):
- arr_t = s[0]
- dep_t = s[1]
- if arr_t is None or dep_t is None:
- continue
- arr_x = int(arr_t/3600.0 * self._hour_grid) - self._hour_grid * self._offset
- dep_x = int(dep_t/3600.0 * self._hour_grid) - self._hour_grid * self._offset
- tmpstrs.append("%s,%s " % (int(arr_x+20), int(stations[i]+20)))
- tmpstrs.append("%s,%s " % (int(dep_x+20), int(stations[i]+20)))
- tmpstrs.append('" />')
- return "".join(tmpstrs)
-
- def _Uniform(self, triplist):
- """Fallback to assuming uniform distance between stations"""
- # This should not be neseccary, but we are in fallback mode
- longest = max([len(t.GetTimeStops()) for t in triplist])
- return [100] * longest
-
- def _DrawStations(self, color="#aaa"):
- """Generates svg with a horizontal line for each station/stop.
-
- Args:
- # Class Stop is defined in transitfeed.py
- stations: [Stop, Stop, ...]
-
- Returns:
- # A string containing a polyline tag for each stop
- " <polyline class="Station" stroke="#336633" points="20,0 ..."
- """
- stations=self._stations
- tmpstrs = []
- for y in stations:
- tmpstrs.append(' <polyline class="Station" stroke="%s" \
- points="%s,%s, %s,%s" />' %(color,20,20+y+.5,self._gwidth+20,20+y+.5))
- return "".join(tmpstrs)
-
- def _DrawHours(self):
- """Generates svg to show a vertical hour and sub-hour grid
-
- Returns:
- # A string containing a polyline tag for each grid line
- " <polyline class="FullHour" points="20,0 ..."
- """
- tmpstrs = []
- for i in range(0, self._gwidth, self._min_grid):
- if i % self._hour_grid == 0:
- tmpstrs.append('<polyline class="FullHour" points="%d,%d, %d,%d" />' \
- % (i + .5 + 20, 20, i + .5 + 20, self._gheight))
- tmpstrs.append('<text class="Label" x="%d" y="%d">%d</text>'
- % (i + 20, 20,
- (i / self._hour_grid + self._offset) % 24))
- else:
- tmpstrs.append('<polyline class="SubHour" points="%d,%d,%d,%d" />' \
- % (i + .5 + 20, 20, i + .5 + 20, self._gheight))
- return "".join(tmpstrs)
-
- def AddStationDecoration(self, index, color="#f00"):
- """Flushes existing decorations and highlights the given station-line.
-
- Args:
- # Integer, index of stop to be highlighted.
- index: 4
- # An optional string with a html color code
- color: "#fff"
- """
- tmpstr = str()
- num_stations = len(self._stations)
- ind = int(index)
- if self._stations:
- if 0<ind<num_stations:
- y = self._stations[ind]
- tmpstr = '<polyline class="Dec" stroke="%s" points="%s,%s,%s,%s" />' \
- % (color, 20, 20+y+.5, self._gwidth+20, 20+y+.5)
- self._decorators.append(tmpstr)
-
- def AddTripDecoration(self, triplist, color="#f00"):
- """Flushes existing decorations and highlights the given trips.
-
- Args:
- # Class Trip is defined in transitfeed.py
- triplist: [Trip, Trip, ...]
- # An optional string with a html color code
- color: "#fff"
- """
- tmpstr = self._DrawTrips(triplist,color)
- self._decorators.append(tmpstr)
-
- def ChangeScaleFactor(self, newfactor):
- """Changes the zoom of the graph manually.
-
- 1.0 is the original canvas size.
-
- Args:
- # float value between 0.0 and 5.0
- newfactor: 0.7
- """
- if float(newfactor) > 0 and float(newfactor) < self._MAX_ZOOM:
- self._zoomfactor = newfactor
-
- def ScaleLarger(self):
- """Increases the zoom of the graph one step (0.1 units)."""
- newfactor = self._zoomfactor + 0.1
- if float(newfactor) > 0 and float(newfactor) < self._MAX_ZOOM:
- self._zoomfactor = newfactor
-
- def ScaleSmaller(self):
- """Decreases the zoom of the graph one step(0.1 units)."""
- newfactor = self._zoomfactor - 0.1
- if float(newfactor) > 0 and float(newfactor) < self._MAX_ZOOM:
- self._zoomfactor = newfactor
-
- def ClearDecorators(self):
- """Removes all the current decorators.
- """
- self._decorators = []
-
- def AddTextStripDecoration(self,txtstr):
- tmpstr = '<text class="Info" x="%d" y="%d">%s</text>' % (0,
- 20 + self._gheight, txtstr)
- self._decorators.append(tmpstr)
-
- def SetSpan(self, first_arr, last_arr, mint=5 ,maxt=30):
- s_hour = (first_arr / 3600) - 1
- e_hour = (last_arr / 3600) + 1
- self._offset = max(min(s_hour, 23), 0)
- self._tspan = max(min(e_hour - s_hour, maxt), mint)
- self._gwidth = self._tspan * self._hour_grid
-
--- a/origin-src/transitfeed-1.2.5/build/lib/transitfeed/__init__.py
+++ /dev/null
@@ -1,35 +1,1 @@
-#!/usr/bin/python2.5
-# Copyright (C) 2007 Google Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Expose some modules in this package.
-
-Before transitfeed version 1.2.4 all our library code was distributed in a
-one file module, transitfeed.py, and could be used as
-
-import transitfeed
-schedule = transitfeed.Schedule()
-
-At that time the module (one file, transitfeed.py) was converted into a
-package (a directory named transitfeed containing __init__.py and multiple .py
-files). Classes and attributes exposed by the old module may still be imported
-in the same way. Indeed, code that depends on the library <em>should</em>
-continue to use import commands such as the above and ignore _transitfeed.
-"""
-
-from _transitfeed import *
-
-__version__ = _transitfeed.__version__
-
--- a/origin-src/transitfeed-1.2.5/build/lib/transitfeed/_transitfeed.py
+++ /dev/null
@@ -1,4599 +1,1 @@
-#!/usr/bin/python2.5
-# Copyright (C) 2007 Google Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Easy interface for handling a Google Transit Feed file.
-
-Do not import this module directly. Thanks to __init__.py you should do
-something like:
-
- import transitfeed
- schedule = transitfeed.Schedule()
- ...
-
-This module is a library to help you create, read and write Google
-Transit Feed files. Refer to the feed specification, available at
-http://code.google.com/transit/spec/transit_feed_specification.htm, for a
-complete description how the transit feed represents a transit schedule. This
-library supports all required parts of the specification but does not yet
-support all optional parts. Patches welcome!
-
-The specification describes several tables such as stops, routes and trips.
-In a feed file these are stored as comma separeted value files. This library
-represents each row of these tables with a single Python object. This object has
-attributes for each value on the row. For example, schedule.AddStop returns a
-Stop object which has attributes such as stop_lat and stop_name.
-
- Schedule: Central object of the parser
- GenericGTFSObject: A base class for each of the objects below
- Route: Represents a single route
- Trip: Represents a single trip
- Stop: Represents a single stop
- ServicePeriod: Represents a single service, a set of dates
- Agency: Represents the agency in this feed
- Transfer: Represents a single transfer rule
- TimeToSecondsSinceMidnight(): Convert HH:MM:SS into seconds since midnight.
- FormatSecondsSinceMidnight(s): Formats number of seconds past midnight into a string
-"""
-
-# TODO: Preserve arbitrary columns?
-
-import bisect
-import cStringIO as StringIO
-import codecs
-from transitfeed.util import defaultdict
-import csv
-import datetime
-import logging
-import math
-import os
-import random
-try:
- import sqlite3 as sqlite
-except ImportError:
- from pysqlite2 import dbapi2 as sqlite
-import re
-import tempfile
-import time
-import warnings
-# Objects in a schedule (Route, Trip, etc) should not keep a strong reference
-# to the Schedule object to avoid a reference cycle. Schedule needs to use
-# __del__ to cleanup its temporary file. The garbage collector can't handle
-# reference cycles containing objects with custom cleanup code.
-import weakref
-import zipfile
-
-OUTPUT_ENCODING = 'utf-8'
-MAX_DISTANCE_FROM_STOP_TO_SHAPE = 1000
-MAX_DISTANCE_BETWEEN_STOP_AND_PARENT_STATION_WARNING = 100.0
-MAX_DISTANCE_BETWEEN_STOP_AND_PARENT_STATION_ERROR = 1000.0
-
-__version__ = '1.2.5'
-
-
-def EncodeUnicode(text):
- """
- Optionally encode text and return it. The result should be safe to print.
- """
- if type(text) == type(u''):
- return text.encode(OUTPUT_ENCODING)
- else:
- return text
-
-
-# These are used to distinguish between errors (not allowed by the spec)
-# and warnings (not recommended) when reporting issues.
-TYPE_ERROR = 0
-TYPE_WARNING = 1
-
-
-class ProblemReporterBase:
- """Base class for problem reporters. Tracks the current context and creates
- an exception object for each problem. Subclasses must implement
- _Report(self, e)"""
-
- def __init__(self):
- self.ClearContext()
-
- def ClearContext(self):
- """Clear any previous context."""
- self._context = None
-
- def SetFileContext(self, file_name, row_num, row, headers):
- """Save the current context to be output with any errors.
-
- Args:
- file_name: string
- row_num: int
- row: list of strings
- headers: list of column headers, its order corresponding to row's
- """
- self._context = (file_name, row_num, row, headers)
-
- def FeedNotFound(self, feed_name, context=None):
- e = FeedNotFound(feed_name=feed_name, context=context,
- context2=self._context)
- self._Report(e)
-
- def UnknownFormat(self, feed_name, context=None):
- e = UnknownFormat(feed_name=feed_name, context=context,
- context2=self._context)
- self._Report(e)
-
- def FileFormat(self, problem, context=None):
- e = FileFormat(problem=problem, context=context,
- context2=self._context)
- self._Report(e)
-
- def MissingFile(self, file_name, context=None):
- e = MissingFile(file_name=file_name, context=context,
- context2=self._context)
- self._Report(e)
-
- def UnknownFile(self, file_name, context=None):
- e = UnknownFile(file_name=file_name, context=context,
- context2=self._context, type=TYPE_WARNING)
- self._Report(e)
-
- def EmptyFile(self, file_name, context=None):
- e = EmptyFile(file_name=file_name, context=context,
- context2=self._context)
- self._Report(e)
-
- def MissingColumn(self, file_name, column_name, context=None):
- e = MissingColumn(file_name=file_name, column_name=column_name,
- context=context, context2=self._context)
- self._Report(e)
-
- def UnrecognizedColumn(self, file_name, column_name, context=None):
- e = UnrecognizedColumn(file_name=file_name, column_name=column_name,
- context=context, context2=self._context,
- type=TYPE_WARNING)
- self._Report(e)
-
- def CsvSyntax(self, description=None, context=None, type=TYPE_ERROR):
- e = CsvSyntax(description=description, context=context,
- context2=self._context, type=type)
- self._Report(e)
-
- def DuplicateColumn(self, file_name, header, count, type=TYPE_ERROR,
- context=None):
- e = DuplicateColumn(file_name=file_name,
- header=header,
- count=count,
- type=type,
- context=context,
- context2=self._context)
- self._Report(e)
-
- def MissingValue(self, column_name, reason=None, context=None):
- e = MissingValue(column_name=column_name, reason=reason, context=context,
- context2=self._context)
- self._Report(e)
-
- def InvalidValue(self, column_name, value, reason=None, context=None,
- type=TYPE_ERROR):
- e = InvalidValue(column_name=column_name, value=value, reason=reason,
- context=context, context2=self._context, type=type)
- self._Report(e)
-
- def DuplicateID(self, column_names, values, context=None, type=TYPE_ERROR):
- if isinstance(column_names, tuple):
- column_names = '(' + ', '.join(column_names) + ')'
- if isinstance(values, tuple):
- values = '(' + ', '.join(values) + ')'
- e = DuplicateID(column_name=column_names, value=values,
- context=context, context2=self._context, type=type)
- self._Report(e)
-
- def UnusedStop(self, stop_id, stop_name, context=None):
- e = UnusedStop(stop_id=stop_id, stop_name=stop_name,
- context=context, context2=self._context, type=TYPE_WARNING)
- self._Report(e)
-
- def UsedStation(self, stop_id, stop_name, context=None):
- e = UsedStation(stop_id=stop_id, stop_name=stop_name,
- context=context, context2=self._context, type=TYPE_ERROR)
- self._Report(e)
-
- def StopTooFarFromParentStation(self, stop_id, stop_name, parent_stop_id,
- parent_stop_name, distance,
- type=TYPE_WARNING, context=None):
- e = StopTooFarFromParentStation(
- stop_id=stop_id, stop_name=stop_name,
- parent_stop_id=parent_stop_id,
- parent_stop_name=parent_stop_name, distance=distance,
- context=context, context2=self._context, type=type)
- self._Report(e)
-
- def StopsTooClose(self, stop_name_a, stop_id_a, stop_name_b, stop_id_b,
- distance, type=TYPE_WARNING, context=None):
- e = StopsTooClose(
- stop_name_a=stop_name_a, stop_id_a=stop_id_a, stop_name_b=stop_name_b,
- stop_id_b=stop_id_b, distance=distance, context=context,
- context2=self._context, type=type)
- self._Report(e)
-
- def StationsTooClose(self, stop_name_a, stop_id_a, stop_name_b, stop_id_b,
- distance, type=TYPE_WARNING, context=None):
- e = StationsTooClose(
- stop_name_a=stop_name_a, stop_id_a=stop_id_a, stop_name_b=stop_name_b,
- stop_id_b=stop_id_b, distance=distance, context=context,
- context2=self._context, type=type)
- self._Report(e)
-
- def DifferentStationTooClose(self, stop_name, stop_id,
- station_stop_name, station_stop_id,
- distance, type=TYPE_WARNING, context=None):
- e = DifferentStationTooClose(
- stop_name=stop_name, stop_id=stop_id,
- station_stop_name=station_stop_name, station_stop_id=station_stop_id,
- distance=distance, context=context, context2=self._context, type=type)
- self._Report(e)
-
- def StopTooFarFromShapeWithDistTraveled(self, trip_id, stop_name, stop_id,
- shape_dist_traveled, shape_id,
- distance, max_distance,
- type=TYPE_WARNING):
- e = StopTooFarFromShapeWithDistTraveled(
- trip_id=trip_id, stop_name=stop_name, stop_id=stop_id,
- shape_dist_traveled=shape_dist_traveled, shape_id=shape_id,
- distance=distance, max_distance=max_distance, type=type)
- self._Report(e)
-
- def ExpirationDate(self, expiration, context=None):
- e = ExpirationDate(expiration=expiration, context=context,
- context2=self._context, type=TYPE_WARNING)
- self._Report(e)
-
- def FutureService(self, start_date, context=None):
- e = FutureService(start_date=start_date, context=context,
- context2=self._context, type=TYPE_WARNING)
- self._Report(e)
-
- def InvalidLineEnd(self, bad_line_end, context=None):
- """bad_line_end is a human readable string."""
- e = InvalidLineEnd(bad_line_end=bad_line_end, context=context,
- context2=self._context, type=TYPE_WARNING)
- self._Report(e)
-
- def TooFastTravel(self, trip_id, prev_stop, next_stop, dist, time, speed,
- type=TYPE_ERROR):
- e = TooFastTravel(trip_id=trip_id, prev_stop=prev_stop,
- next_stop=next_stop, time=time, dist=dist, speed=speed,
- context=None, context2=self._context, type=type)
- self._Report(e)
-
- def StopWithMultipleRouteTypes(self, stop_name, stop_id, route_id1, route_id2,
- context=None):
- e = StopWithMultipleRouteTypes(stop_name=stop_name, stop_id=stop_id,
- route_id1=route_id1, route_id2=route_id2,
- context=context, context2=self._context,
- type=TYPE_WARNING)
- self._Report(e)
-
- def DuplicateTrip(self, trip_id1, route_id1, trip_id2, route_id2,
- context=None):
- e = DuplicateTrip(trip_id1=trip_id1, route_id1=route_id1, trip_id2=trip_id2,
- route_id2=route_id2, context=context,
- context2=self._context, type=TYPE_WARNING)
- self._Report(e)
-
- def OtherProblem(self, description, context=None, type=TYPE_ERROR):
- e = OtherProblem(description=description,
- context=context, context2=self._context, type=type)
- self._Report(e)
-
- def TooManyDaysWithoutService(self,
- first_day_without_service,
- last_day_without_service,
- consecutive_days_without_service,
- context=None,
- type=TYPE_WARNING):
- e = TooManyDaysWithoutService(
- first_day_without_service=first_day_without_service,
- last_day_without_service=last_day_without_service,
- consecutive_days_without_service=consecutive_days_without_service,
- context=context,
- context2=self._context,
- type=type)
- self._Report(e)
-
-class ProblemReporter(ProblemReporterBase):
- """This is a basic problem reporter that just prints to console."""
- def _Report(self, e):
- context = e.FormatContext()
- if context:
- print context
- print EncodeUnicode(self._LineWrap(e.FormatProblem(), 78))
-
- @staticmethod
- def _LineWrap(text, width):
- """
- A word-wrap function that preserves existing line breaks
-    and most spaces in the text. Expects that existing line
-    breaks are POSIX newlines (\n).
-
- Taken from:
- http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/148061
- """
- return reduce(lambda line, word, width=width: '%s%s%s' %
- (line,
- ' \n'[(len(line) - line.rfind('\n') - 1 +
- len(word.split('\n', 1)[0]) >= width)],
- word),
- text.split(' ')
- )
-
-
-class ExceptionWithContext(Exception):
- def __init__(self, context=None, context2=None, **kwargs):
- """Initialize an exception object, saving all keyword arguments in self.
- context and context2, if present, must be a tuple of (file_name, row_num,
- row, headers). context2 comes from ProblemReporter.SetFileContext. context
- was passed in with the keyword arguments. context2 is ignored if context
- is present."""
- Exception.__init__(self)
-
- if context:
- self.__dict__.update(self.ContextTupleToDict(context))
- elif context2:
- self.__dict__.update(self.ContextTupleToDict(context2))
- self.__dict__.update(kwargs)
-
- if ('type' in kwargs) and (kwargs['type'] == TYPE_WARNING):
- self._type = TYPE_WARNING
- else:
- self._type = TYPE_ERROR
-
- def GetType(self):
- return self._type
-
- def IsError(self):
- return self._type == TYPE_ERROR
-
- def IsWarning(self):
- return self._type == TYPE_WARNING
-
- CONTEXT_PARTS = ['file_name', 'row_num', 'row', 'headers']
- @staticmethod
- def ContextTupleToDict(context):
- """Convert a tuple representing a context into a dict of (key, value) pairs"""
- d = {}
- if not context:
- return d
- for k, v in zip(ExceptionWithContext.CONTEXT_PARTS, context):
- if v != '' and v != None: # Don't ignore int(0), a valid row_num
- d[k] = v
- return d
-
- def __str__(self):
- return self.FormatProblem()
-
- def GetDictToFormat(self):
- """Return a copy of self as a dict, suitable for passing to FormatProblem"""
- d = {}
- for k, v in self.__dict__.items():
- # TODO: Better handling of unicode/utf-8 within Schedule objects.
-    # Concatenating a unicode and utf-8 str object causes an exception such
- # as "UnicodeDecodeError: 'ascii' codec can't decode byte ..." as python
- # tries to convert the str to a unicode. To avoid that happening within
- # the problem reporter convert all unicode attributes to utf-8.
- # Currently valid utf-8 fields are converted to unicode in _ReadCsvDict.
- # Perhaps all fields should be left as utf-8.
- d[k] = EncodeUnicode(v)
- return d
-
- def FormatProblem(self, d=None):
- """Return a text string describing the problem.
-
- Args:
-      d: map returned by GetDictToFormat with formatting added
- """
- if not d:
- d = self.GetDictToFormat()
-
- output_error_text = self.__class__.ERROR_TEXT % d
- if ('reason' in d) and d['reason']:
- return '%s\n%s' % (output_error_text, d['reason'])
- else:
- return output_error_text
-
- def FormatContext(self):
- """Return a text string describing the context"""
- text = ''
- if hasattr(self, 'feed_name'):
- text += "In feed '%s': " % self.feed_name
- if hasattr(self, 'file_name'):
- text += self.file_name
- if hasattr(self, 'row_num'):
- text += ":%i" % self.row_num
- if hasattr(self, 'column_name'):
- text += " column %s" % self.column_name
- return text
-
- def __cmp__(self, y):
- """Return an int <0/0/>0 when self is more/same/less significant than y.
-
- Subclasses should define this if exceptions should be listed in something
- other than the order they are reported.
-
- Args:
- y: object to compare to self
-
- Returns:
-      An int which is negative if self is more significant than y, 0 if they
-      are of similar significance, and positive if self is less significant
-      than y. Returning a float won't work.
-
- Raises:
- TypeError by default, meaning objects of the type can not be compared.
- """
- raise TypeError("__cmp__ not defined")
-
-
-class MissingFile(ExceptionWithContext):
- ERROR_TEXT = "File %(file_name)s is not found"
-
-class EmptyFile(ExceptionWithContext):
- ERROR_TEXT = "File %(file_name)s is empty"
-
-class UnknownFile(ExceptionWithContext):
- ERROR_TEXT = 'The file named %(file_name)s was not expected.\n' \
- 'This may be a misspelled file name or the file may be ' \
- 'included in a subdirectory. Please check spellings and ' \
- 'make sure that there are no subdirectories within the feed'
-
-class FeedNotFound(ExceptionWithContext):
- ERROR_TEXT = 'Couldn\'t find a feed named %(feed_name)s'
-
-class UnknownFormat(ExceptionWithContext):
- ERROR_TEXT = 'The feed named %(feed_name)s had an unknown format:\n' \
- 'feeds should be either .zip files or directories.'
-
-class FileFormat(ExceptionWithContext):
- ERROR_TEXT = 'Files must be encoded in utf-8 and may not contain ' \
- 'any null bytes (0x00). %(file_name)s %(problem)s.'
-
-class MissingColumn(ExceptionWithContext):
- ERROR_TEXT = 'Missing column %(column_name)s in file %(file_name)s'
-
-class UnrecognizedColumn(ExceptionWithContext):
- ERROR_TEXT = 'Unrecognized column %(column_name)s in file %(file_name)s. ' \
- 'This might be a misspelled column name (capitalization ' \
- 'matters!). Or it could be extra information (such as a ' \
- 'proposed feed extension) that the validator doesn\'t know ' \
- 'about yet. Extra information is fine; this warning is here ' \
- 'to catch misspelled optional column names.'
-
-class CsvSyntax(ExceptionWithContext):
- ERROR_TEXT = '%(description)s'
-
-class DuplicateColumn(ExceptionWithContext):
- ERROR_TEXT = 'Column %(header)s appears %(count)i times in file %(file_name)s'
-
-class MissingValue(ExceptionWithContext):
- ERROR_TEXT = 'Missing value for column %(column_name)s'
-
-class InvalidValue(ExceptionWithContext):
- ERROR_TEXT = 'Invalid value %(value)s in field %(column_name)s'
-
-class DuplicateID(ExceptionWithContext):
- ERROR_TEXT = 'Duplicate ID %(value)s in column %(column_name)s'
-
-class UnusedStop(ExceptionWithContext):
- ERROR_TEXT = "%(stop_name)s (ID %(stop_id)s) isn't used in any trips"
-
-class UsedStation(ExceptionWithContext):
- ERROR_TEXT = "%(stop_name)s (ID %(stop_id)s) has location_type=1 " \
- "(station) so it should not appear in stop_times"
-
-class StopTooFarFromParentStation(ExceptionWithContext):
- ERROR_TEXT = (
- "%(stop_name)s (ID %(stop_id)s) is too far from its parent station "
-    "%(parent_stop_name)s (ID %(parent_stop_id)s): %(distance).2f meters.")
- def __cmp__(self, y):
- # Sort in decreasing order because more distance is more significant.
- return cmp(y.distance, self.distance)
-
-
-class StopsTooClose(ExceptionWithContext):
- ERROR_TEXT = (
- "The stops \"%(stop_name_a)s\" (ID %(stop_id_a)s) and \"%(stop_name_b)s\""
- " (ID %(stop_id_b)s) are %(distance)0.2fm apart and probably represent "
- "the same location.")
- def __cmp__(self, y):
- # Sort in increasing order because less distance is more significant.
- return cmp(self.distance, y.distance)
-
-class StationsTooClose(ExceptionWithContext):
- ERROR_TEXT = (
- "The stations \"%(stop_name_a)s\" (ID %(stop_id_a)s) and "
- "\"%(stop_name_b)s\" (ID %(stop_id_b)s) are %(distance)0.2fm apart and "
- "probably represent the same location.")
- def __cmp__(self, y):
- # Sort in increasing order because less distance is more significant.
- return cmp(self.distance, y.distance)
-
-class DifferentStationTooClose(ExceptionWithContext):
- ERROR_TEXT = (
- "The parent_station of stop \"%(stop_name)s\" (ID %(stop_id)s) is not "
- "station \"%(station_stop_name)s\" (ID %(station_stop_id)s) but they are "
- "only %(distance)0.2fm apart.")
- def __cmp__(self, y):
- # Sort in increasing order because less distance is more significant.
- return cmp(self.distance, y.distance)
-
-class StopTooFarFromShapeWithDistTraveled(ExceptionWithContext):
- ERROR_TEXT = (
- "For trip %(trip_id)s the stop \"%(stop_name)s\" (ID %(stop_id)s) is "
- "%(distance).0f meters away from the corresponding point "
- "(shape_dist_traveled: %(shape_dist_traveled)f) on shape %(shape_id)s. "
- "It should be closer than %(max_distance).0f meters.")
- def __cmp__(self, y):
- # Sort in decreasing order because more distance is more significant.
- return cmp(y.distance, self.distance)
-
-
-class TooManyDaysWithoutService(ExceptionWithContext):
- ERROR_TEXT = "There are %(consecutive_days_without_service)i consecutive"\
- " days, from %(first_day_without_service)s to" \
- " %(last_day_without_service)s, without any scheduled service." \
- " Please ensure this is intentional."
-
-
-class ExpirationDate(ExceptionWithContext):
- def FormatProblem(self, d=None):
- if not d:
- d = self.GetDictToFormat()
- expiration = d['expiration']
- formatted_date = time.strftime("%B %d, %Y",
- time.localtime(expiration))
- if (expiration < time.mktime(time.localtime())):
- return "This feed expired on %s" % formatted_date
- else:
- return "This feed will soon expire, on %s" % formatted_date
-
-class FutureService(ExceptionWithContext):
- def FormatProblem(self, d=None):
- if not d:
- d = self.GetDictToFormat()
- formatted_date = time.strftime("%B %d, %Y", time.localtime(d['start_date']))
- return ("The earliest service date in this feed is in the future, on %s. "
- "Published feeds must always include the current date." %
- formatted_date)
-
-
-class InvalidLineEnd(ExceptionWithContext):
- ERROR_TEXT = "Each line must end with CR LF or LF except for the last line " \
- "of the file. This line ends with \"%(bad_line_end)s\"."
-
-class StopWithMultipleRouteTypes(ExceptionWithContext):
- ERROR_TEXT = "Stop %(stop_name)s (ID=%(stop_id)s) belongs to both " \
- "subway (ID=%(route_id1)s) and bus line (ID=%(route_id2)s)."
-
-class TooFastTravel(ExceptionWithContext):
- def FormatProblem(self, d=None):
- if not d:
- d = self.GetDictToFormat()
- if not d['speed']:
- return "High speed travel detected in trip %(trip_id)s: %(prev_stop)s" \
- " to %(next_stop)s. %(dist).0f meters in %(time)d seconds." % d
- else:
- return "High speed travel detected in trip %(trip_id)s: %(prev_stop)s" \
- " to %(next_stop)s. %(dist).0f meters in %(time)d seconds." \
- " (%(speed).0f km/h)." % d
- def __cmp__(self, y):
- # Sort in decreasing order because more distance is more significant. We
- # can't sort by speed because not all TooFastTravel objects have a speed.
- return cmp(y.dist, self.dist)
-
-class DuplicateTrip(ExceptionWithContext):
- ERROR_TEXT = "Trip %(trip_id1)s of route %(route_id1)s might be duplicated " \
- "with trip %(trip_id2)s of route %(route_id2)s. They go " \
- "through the same stops with same service."
-
-class OtherProblem(ExceptionWithContext):
- ERROR_TEXT = '%(description)s'
-
-
-class ExceptionProblemReporter(ProblemReporter):
- def __init__(self, raise_warnings=False):
- ProblemReporterBase.__init__(self)
- self.raise_warnings = raise_warnings
-
- def _Report(self, e):
- if self.raise_warnings or e.IsError():
- raise e
- else:
- ProblemReporter._Report(self, e)
-
-
-default_problem_reporter = ExceptionProblemReporter()
-
-# Add a default handler to send log messages to console
-console = logging.StreamHandler()
-console.setLevel(logging.WARNING)
-log = logging.getLogger("schedule_builder")
-log.addHandler(console)
-
-
-class Error(Exception):
- pass
-
-
-def IsValidURL(url):
- """Checks the validity of a URL value."""
- # TODO: Add more thorough checking of URL
- return url.startswith(u'http://') or url.startswith(u'https://')
-
-
-def IsValidColor(color):
- """Checks the validity of a hex color value."""
- return not re.match('^[0-9a-fA-F]{6}$', color) == None
-
-
-def ColorLuminance(color):
- """Compute the brightness of an sRGB color using the formula from
- http://www.w3.org/TR/2000/WD-AERT-20000426#color-contrast.
-
- Args:
- color: a string of six hex digits in the format verified by IsValidColor().
-
- Returns:
- A floating-point number between 0.0 (black) and 255.0 (white). """
- r = int(color[0:2], 16)
- g = int(color[2:4], 16)
- b = int(color[4:6], 16)
- return (299*r + 587*g + 114*b) / 1000.0
-
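The W3C brightness formula in `ColorLuminance` above can be sketched standalone; the snake_case name below is hypothetical and not part of the library's API:

```python
def color_luminance(color):
    # Weighted sRGB brightness per the W3C color-contrast note:
    # returns 0.0 for black ("000000") up to 255.0 for white ("FFFFFF").
    r = int(color[0:2], 16)
    g = int(color[2:4], 16)
    b = int(color[4:6], 16)
    return (299 * r + 587 * g + 114 * b) / 1000.0

print(color_luminance("FFFFFF"))  # 255.0
print(color_luminance("000000"))  # 0.0
```

The weights sum to 1000, so the result stays in the 0.0-255.0 range of a single channel.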
-
-def IsEmpty(value):
- return value is None or (isinstance(value, basestring) and not value.strip())
-
-
-def FindUniqueId(dic):
- """Return a string not used as a key in the dictionary dic"""
- name = str(len(dic))
- while name in dic:
- name = str(random.randint(1, 999999999))
- return name
-
-
-def TimeToSecondsSinceMidnight(time_string):
- """Convert HHH:MM:SS into seconds since midnight.
-
- For example "01:02:03" returns 3723. The leading zero of the hours may be
- omitted. HH may be more than 23 if the time is on the following day."""
- m = re.match(r'(\d{1,3}):([0-5]\d):([0-5]\d)$', time_string)
- # ignored: matching for leap seconds
- if not m:
- raise Error, 'Bad HH:MM:SS "%s"' % time_string
- return int(m.group(1)) * 3600 + int(m.group(2)) * 60 + int(m.group(3))
-
-
-def FormatSecondsSinceMidnight(s):
- """Formats an int number of seconds past midnight into a string
- as "HH:MM:SS"."""
- return "%02d:%02d:%02d" % (s / 3600, (s / 60) % 60, s % 60)
-
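A standalone round-trip sketch of the two time helpers above (reimplemented here with hypothetical snake_case names); note that GTFS allows hours greater than 23 for trips running past midnight:

```python
import re

def time_to_seconds(time_string):
    # Accepts 1-3 hour digits, so "26:03:04" (next-day service) is valid.
    m = re.match(r'(\d{1,3}):([0-5]\d):([0-5]\d)$', time_string)
    if not m:
        raise ValueError('Bad HH:MM:SS "%s"' % time_string)
    return int(m.group(1)) * 3600 + int(m.group(2)) * 60 + int(m.group(3))

def format_seconds(s):
    # Inverse operation: seconds since midnight back to "HH:MM:SS".
    return "%02d:%02d:%02d" % (s // 3600, (s // 60) % 60, s % 60)

print(time_to_seconds("01:02:03"))  # 3723
print(format_seconds(93784))        # 26:03:04
```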
-
-def DateStringToDateObject(date_string):
- """Return a date object for a string "YYYYMMDD"."""
- # If this becomes a bottleneck date objects could be cached
- return datetime.date(int(date_string[0:4]), int(date_string[4:6]),
- int(date_string[6:8]))
-
-
-def FloatStringToFloat(float_string):
- """Convert a float as a string to a float or raise an exception"""
- # Will raise TypeError unless a string
- if not re.match(r"^[+-]?\d+(\.\d+)?$", float_string):
- raise ValueError()
- return float(float_string)
-
-
-def NonNegIntStringToInt(int_string):
-  """Convert a non-negative integer string to an int or raise an exception"""
- # Will raise TypeError unless a string
- if not re.match(r"^(?:0|[1-9]\d*)$", int_string):
- raise ValueError()
- return int(int_string)
-
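The two strict parsers above are deliberately narrower than Python's built-in `float()`/`int()`: they reject exponent notation, surrounding whitespace, and (for the integer parser) leading zeros. A standalone sketch with hypothetical names:

```python
import re

def float_string_to_float(s):
    # Plain decimal only: rejects "1e3", " 1.0", "nan", etc.
    if not re.match(r"^[+-]?\d+(\.\d+)?$", s):
        raise ValueError(s)
    return float(s)

def non_neg_int_string_to_int(s):
    # "0" or digits without a leading zero; rejects "007" and "-1".
    if not re.match(r"^(?:0|[1-9]\d*)$", s):
        raise ValueError(s)
    return int(s)

print(float_string_to_float("-3.5"))    # -3.5
print(non_neg_int_string_to_int("42"))  # 42
```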
-
-EARTH_RADIUS = 6378135 # in meters
-def ApproximateDistance(degree_lat1, degree_lng1, degree_lat2, degree_lng2):
- """Compute approximate distance between two points in meters. Assumes the
- Earth is a sphere."""
- # TODO: change to ellipsoid approximation, such as
- # http://www.codeguru.com/Cpp/Cpp/algorithms/article.php/c5115/
- lat1 = math.radians(degree_lat1)
- lng1 = math.radians(degree_lng1)
- lat2 = math.radians(degree_lat2)
- lng2 = math.radians(degree_lng2)
- dlat = math.sin(0.5 * (lat2 - lat1))
- dlng = math.sin(0.5 * (lng2 - lng1))
- x = dlat * dlat + dlng * dlng * math.cos(lat1) * math.cos(lat2)
- return EARTH_RADIUS * (2 * math.atan2(math.sqrt(x),
- math.sqrt(max(0.0, 1.0 - x))))
-
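`ApproximateDistance` above is the haversine formula on a spherical Earth; a standalone sketch (hypothetical snake_case name, same constant):

```python
import math

EARTH_RADIUS = 6378135  # meters, spherical approximation

def approximate_distance(lat1_deg, lng1_deg, lat2_deg, lng2_deg):
    lat1, lng1, lat2, lng2 = map(
        math.radians, (lat1_deg, lng1_deg, lat2_deg, lng2_deg))
    # Haversine: half-angle sines of the deltas, clamped for stability.
    dlat = math.sin(0.5 * (lat2 - lat1))
    dlng = math.sin(0.5 * (lng2 - lng1))
    x = dlat * dlat + dlng * dlng * math.cos(lat1) * math.cos(lat2)
    return EARTH_RADIUS * 2 * math.atan2(math.sqrt(x),
                                         math.sqrt(max(0.0, 1.0 - x)))

# One degree of latitude is roughly 111 km.
print(approximate_distance(0.0, 0.0, 1.0, 0.0))
```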
-
-def ApproximateDistanceBetweenStops(stop1, stop2):
- """Compute approximate distance between two stops in meters. Assumes the
- Earth is a sphere."""
- return ApproximateDistance(stop1.stop_lat, stop1.stop_lon,
- stop2.stop_lat, stop2.stop_lon)
-
-
-class GenericGTFSObject(object):
- """Object with arbitrary attributes which may be added to a schedule.
-
- This class should be used as the base class for GTFS objects which may
- be stored in a Schedule. It defines some methods for reading and writing
-  attributes. If self._schedule is None then the object is not in a Schedule.
-
- Subclasses must:
- * define an __init__ method which sets the _schedule member to None or a
- weakref to a Schedule
- * Set the _TABLE_NAME class variable to a name such as 'stops', 'agency', ...
- * define methods to validate objects of that type
- """
- def __getitem__(self, name):
- """Return a unicode or str representation of name or "" if not set."""
- if name in self.__dict__ and self.__dict__[name] is not None:
- return "%s" % self.__dict__[name]
- else:
- return ""
-
- def __getattr__(self, name):
- """Return None or the default value if name is a known attribute.
-
- This method is only called when name is not found in __dict__.
- """
- if name in self.__class__._FIELD_NAMES:
- return None
- else:
- raise AttributeError(name)
-
- def iteritems(self):
-    """Return an iterable for (name, value) pairs of public attributes."""
- for name, value in self.__dict__.iteritems():
- if (not name) or name[0] == "_":
- continue
- yield name, value
-
- def __setattr__(self, name, value):
- """Set an attribute, adding name to the list of columns as needed."""
- object.__setattr__(self, name, value)
- if name[0] != '_' and self._schedule:
- self._schedule.AddTableColumn(self.__class__._TABLE_NAME, name)
-
- def __eq__(self, other):
- """Return true iff self and other are equivalent"""
- if not other:
- return False
-
- if id(self) == id(other):
- return True
-
- for k in self.keys().union(other.keys()):
-      # use __getitem__ which returns "" for missing column values
- if self[k] != other[k]:
- return False
- return True
-
- def __ne__(self, other):
- return not self.__eq__(other)
-
- def __repr__(self):
- return "<%s %s>" % (self.__class__.__name__, sorted(self.iteritems()))
-
- def keys(self):
- """Return iterable of columns used by this object."""
- columns = set()
- for name in vars(self):
- if (not name) or name[0] == "_":
- continue
- columns.add(name)
- return columns
-
- def _ColumnNames(self):
- return self.keys()
-
-
-class Stop(GenericGTFSObject):
- """Represents a single stop. A stop must have a latitude, longitude and name.
-
- Callers may assign arbitrary values to instance attributes.
- Stop.ParseAttributes validates attributes according to GTFS and converts some
- into native types. ParseAttributes may delete invalid attributes.
- Accessing an attribute that is a column in GTFS will return None if this
- object does not have a value or it is ''.
- A Stop object acts like a dict with string values.
-
- Attributes:
- stop_lat: a float representing the latitude of the stop
- stop_lon: a float representing the longitude of the stop
- All other attributes are strings.
- """
- _REQUIRED_FIELD_NAMES = ['stop_id', 'stop_name', 'stop_lat', 'stop_lon']
- _FIELD_NAMES = _REQUIRED_FIELD_NAMES + \
- ['stop_desc', 'zone_id', 'stop_url', 'stop_code',
- 'location_type', 'parent_station']
- _TABLE_NAME = 'stops'
-
- def __init__(self, lat=None, lng=None, name=None, stop_id=None,
- field_dict=None, stop_code=None):
- """Initialize a new Stop object.
-
- Args:
- field_dict: A dictionary mapping attribute name to unicode string
- lat: a float, ignored when field_dict is present
- lng: a float, ignored when field_dict is present
- name: a string, ignored when field_dict is present
- stop_id: a string, ignored when field_dict is present
- stop_code: a string, ignored when field_dict is present
- """
- self._schedule = None
- if field_dict:
- if isinstance(field_dict, Stop):
-        # Special case so that we don't need to re-parse the attributes to
-        # native types. iteritems returns all attributes that don't start
-        # with an underscore.
- for k, v in field_dict.iteritems():
- self.__dict__[k] = v
- else:
- self.__dict__.update(field_dict)
- else:
- if lat is not None:
- self.stop_lat = lat
- if lng is not None:
- self.stop_lon = lng
- if name is not None:
- self.stop_name = name
- if stop_id is not None:
- self.stop_id = stop_id
- if stop_code is not None:
- self.stop_code = stop_code
-
- def GetTrips(self, schedule=None):
-    """Return an iterable containing trips that visit this stop."""
- return [trip for trip, ss in self._GetTripSequence(schedule)]
-
- def _GetTripSequence(self, schedule=None):
- """Return a list of (trip, stop_sequence) for all trips visiting this stop.
-
-    A trip may be in the list multiple times with different indices.
- stop_sequence is an integer.
-
- Args:
- schedule: Deprecated, do not use.
- """
- if schedule is None:
- schedule = getattr(self, "_schedule", None)
- if schedule is None:
- warnings.warn("No longer supported. _schedule attribute is used to get "
- "stop_times table", DeprecationWarning)
- cursor = schedule._connection.cursor()
- cursor.execute("SELECT trip_id,stop_sequence FROM stop_times "
- "WHERE stop_id=?",
- (self.stop_id, ))
- return [(schedule.GetTrip(row[0]), row[1]) for row in cursor]
-
- def _GetTripIndex(self, schedule=None):
- """Return a list of (trip, index).
-
- trip: a Trip object
- index: an offset in trip.GetStopTimes()
- """
- trip_index = []
- for trip, sequence in self._GetTripSequence(schedule):
- for index, st in enumerate(trip.GetStopTimes()):
- if st.stop_sequence == sequence:
- trip_index.append((trip, index))
- break
- else:
-        raise RuntimeError("stop_sequence %d not found in trip_id %s" %
-                           (sequence, trip.trip_id))
- return trip_index
-
- def GetStopTimeTrips(self, schedule=None):
- """Return a list of (time, (trip, index), is_timepoint).
-
- time: an integer. It might be interpolated.
- trip: a Trip object.
- index: the offset of this stop in trip.GetStopTimes(), which may be
- different from the stop_sequence.
- is_timepoint: a bool
- """
- time_trips = []
- for trip, index in self._GetTripIndex(schedule):
- secs, stoptime, is_timepoint = trip.GetTimeInterpolatedStops()[index]
- time_trips.append((secs, (trip, index), is_timepoint))
- return time_trips
-
- def ParseAttributes(self, problems):
- """Parse all attributes, calling problems as needed."""
- # Need to use items() instead of iteritems() because _CheckAndSetAttr may
- # modify self.__dict__
- for name, value in vars(self).items():
- if name[0] == "_":
- continue
- self._CheckAndSetAttr(name, value, problems)
-
- def _CheckAndSetAttr(self, name, value, problems):
- """If value is valid for attribute name store it.
-
- If value is not valid call problems. Return a new value of the correct type
- or None if value couldn't be converted.
- """
- if name == 'stop_lat':
- try:
- if isinstance(value, (float, int)):
- self.stop_lat = value
- else:
- self.stop_lat = FloatStringToFloat(value)
- except (ValueError, TypeError):
- problems.InvalidValue('stop_lat', value)
- del self.stop_lat
- else:
- if self.stop_lat > 90 or self.stop_lat < -90:
- problems.InvalidValue('stop_lat', value)
- elif name == 'stop_lon':
- try:
- if isinstance(value, (float, int)):
- self.stop_lon = value
- else:
- self.stop_lon = FloatStringToFloat(value)
- except (ValueError, TypeError):
- problems.InvalidValue('stop_lon', value)
- del self.stop_lon
- else:
- if self.stop_lon > 180 or self.stop_lon < -180:
- problems.InvalidValue('stop_lon', value)
- elif name == 'stop_url':
- if value and not IsValidURL(value):
- problems.InvalidValue('stop_url', value)
- del self.stop_url
- elif name == 'location_type':
- if value == '':
- self.location_type = 0
- else:
- try:
- self.location_type = int(value)
- except (ValueError, TypeError):
- problems.InvalidValue('location_type', value)
- del self.location_type
- else:
- if self.location_type not in (0, 1):
- problems.InvalidValue('location_type', value, type=TYPE_WARNING)
-
- def __getattr__(self, name):
- """Return None or the default value if name is a known attribute.
-
- This method is only called when name is not found in __dict__.
- """
- if name == "location_type":
- return 0
- elif name == "trip_index":
- return self._GetTripIndex()
- elif name in Stop._FIELD_NAMES:
- return None
- else:
- raise AttributeError(name)
-
- def Validate(self, problems=default_problem_reporter):
- # First check that all required fields are present because ParseAttributes
- # may remove invalid attributes.
- for required in Stop._REQUIRED_FIELD_NAMES:
- if IsEmpty(getattr(self, required, None)):
- # TODO: For now I'm keeping the API stable but it would be cleaner to
- # treat whitespace stop_id as invalid, instead of missing
- problems.MissingValue(required)
-
- # Check individual values and convert to native types
- self.ParseAttributes(problems)
-
- # Check that this object is consistent with itself
- if (self.stop_lat is not None and self.stop_lon is not None and
- abs(self.stop_lat) < 1.0 and abs(self.stop_lon) < 1.0):
- problems.InvalidValue('stop_lat', self.stop_lat,
- 'Stop location too close to 0, 0',
- type=TYPE_WARNING)
- if (self.stop_desc and self.stop_name and not IsEmpty(self.stop_desc) and
- self.stop_name.strip().lower() == self.stop_desc.strip().lower()):
- problems.InvalidValue('stop_desc', self.stop_desc,
- 'stop_desc should not be the same as stop_name')
-
- if self.parent_station and self.location_type == 1:
- problems.InvalidValue('parent_station', self.parent_station,
- 'Stop row with location_type=1 (a station) must '
- 'not have a parent_station')
-
-
-class Route(GenericGTFSObject):
- """Represents a single route."""
-
- _REQUIRED_FIELD_NAMES = [
- 'route_id', 'route_short_name', 'route_long_name', 'route_type'
- ]
- _FIELD_NAMES = _REQUIRED_FIELD_NAMES + [
- 'agency_id', 'route_desc', 'route_url', 'route_color', 'route_text_color'
- ]
- _ROUTE_TYPES = {
- 0: {'name':'Tram', 'max_speed':100},
- 1: {'name':'Subway', 'max_speed':150},
- 2: {'name':'Rail', 'max_speed':300},
- 3: {'name':'Bus', 'max_speed':100},
- 4: {'name':'Ferry', 'max_speed':80},
- 5: {'name':'Cable Car', 'max_speed':50},
- 6: {'name':'Gondola', 'max_speed':50},
- 7: {'name':'Funicular', 'max_speed':50},
- }
- _ROUTE_TYPE_IDS = set(_ROUTE_TYPES.keys())
- # Create a reverse lookup dict of route type names to route types.
- _ROUTE_TYPE_NAMES = dict((v['name'], k) for k, v in _ROUTE_TYPES.items())
- _TABLE_NAME = 'routes'
-
- def __init__(self, short_name=None, long_name=None, route_type=None,
- route_id=None, agency_id=None, field_dict=None):
- self._schedule = None
- self._trips = []
-
- if not field_dict:
- field_dict = {}
- if short_name is not None:
- field_dict['route_short_name'] = short_name
- if long_name is not None:
- field_dict['route_long_name'] = long_name
- if route_type is not None:
- if route_type in Route._ROUTE_TYPE_NAMES:
- self.route_type = Route._ROUTE_TYPE_NAMES[route_type]
- else:
- field_dict['route_type'] = route_type
- if route_id is not None:
- field_dict['route_id'] = route_id
- if agency_id is not None:
- field_dict['agency_id'] = agency_id
- self.__dict__.update(field_dict)
-
- def AddTrip(self, schedule, headsign, service_period=None, trip_id=None):
- """Adds a trip to this route.
-
- Args:
- schedule: a Schedule object to which the new trip is added
- headsign: headsign of the trip as a string
- service_period: a ServicePeriod object; defaults to the schedule's
- default service period
- trip_id: optional id for the new trip; defaults to a generated id
-
- Returns:
- a new Trip object
- """
- if trip_id is None:
- trip_id = unicode(len(schedule.trips))
- if service_period is None:
- service_period = schedule.GetDefaultServicePeriod()
- trip = Trip(route=self, headsign=headsign, service_period=service_period,
- trip_id=trip_id)
- schedule.AddTripObject(trip)
- return trip
-
- def _AddTripObject(self, trip):
- # Only class Schedule may call this. Users of the API should call
- # Route.AddTrip or schedule.AddTripObject.
- self._trips.append(trip)
-
- def __getattr__(self, name):
- """Return None or the default value if name is a known attribute.
-
- This method overrides GenericGTFSObject.__getattr__ to provide backwards
- compatible access to trips.
- """
- if name == 'trips':
- return self._trips
- else:
- return GenericGTFSObject.__getattr__(self, name)
-
- def GetPatternIdTripDict(self):
- """Return a dictionary that maps pattern_id to a list of Trip objects."""
- d = {}
- for t in self._trips:
- d.setdefault(t.pattern_id, []).append(t)
- return d
-
- def Validate(self, problems=default_problem_reporter):
- if IsEmpty(self.route_id):
- problems.MissingValue('route_id')
- if IsEmpty(self.route_type):
- problems.MissingValue('route_type')
-
- if IsEmpty(self.route_short_name) and IsEmpty(self.route_long_name):
- problems.InvalidValue('route_short_name',
- self.route_short_name,
- 'Both route_short_name and '
- 'route_long_name are blank.')
-
- if self.route_short_name and len(self.route_short_name) > 6:
- problems.InvalidValue('route_short_name',
- self.route_short_name,
- 'This route_short_name is relatively long, which '
- 'probably means that it contains a place name. '
- 'You should only use this field to hold a short '
- 'code that riders use to identify a route. '
- 'If this route doesn\'t have such a code, it\'s '
- 'OK to leave this field empty.', type=TYPE_WARNING)
-
- if self.route_short_name and self.route_long_name:
- short_name = self.route_short_name.strip().lower()
- long_name = self.route_long_name.strip().lower()
- if (long_name.startswith(short_name + ' ') or
- long_name.startswith(short_name + '(') or
- long_name.startswith(short_name + '-')):
- problems.InvalidValue('route_long_name',
- self.route_long_name,
- 'route_long_name shouldn\'t contain '
- 'the route_short_name value, as both '
- 'fields are often displayed '
- 'side-by-side.', type=TYPE_WARNING)
- if long_name == short_name:
- problems.InvalidValue('route_long_name',
- self.route_long_name,
- 'route_long_name shouldn\'t be the same '
- 'as the route_short_name value, as both '
- 'fields are often displayed '
- 'side-by-side. It\'s OK to omit either the '
- 'short or long name (but not both).',
- type=TYPE_WARNING)
- if (self.route_desc and
- ((self.route_desc == self.route_short_name) or
- (self.route_desc == self.route_long_name))):
- problems.InvalidValue('route_desc',
- self.route_desc,
- 'route_desc shouldn\'t be the same as '
- 'route_short_name or route_long_name')
-
- if self.route_type is not None:
- try:
- if not isinstance(self.route_type, int):
- self.route_type = NonNegIntStringToInt(self.route_type)
- except (TypeError, ValueError):
- problems.InvalidValue('route_type', self.route_type)
- else:
- if self.route_type not in Route._ROUTE_TYPE_IDS:
- problems.InvalidValue('route_type',
- self.route_type,
- type=TYPE_WARNING)
-
- if self.route_url and not IsValidURL(self.route_url):
- problems.InvalidValue('route_url', self.route_url)
-
- txt_lum = ColorLuminance('000000') # black (default)
- bg_lum = ColorLuminance('ffffff') # white (default)
- if self.route_color:
- if IsValidColor(self.route_color):
- bg_lum = ColorLuminance(self.route_color)
- else:
- problems.InvalidValue('route_color', self.route_color,
- 'route_color should be a valid color description '
- 'which consists of 6 hexadecimal characters '
- 'representing the RGB values. Example: 44AA06')
- if self.route_text_color:
- if IsValidColor(self.route_text_color):
- txt_lum = ColorLuminance(self.route_text_color)
- else:
- problems.InvalidValue('route_text_color', self.route_text_color,
- 'route_text_color should be a valid color '
- 'description, which consists of 6 hexadecimal '
- 'characters representing the RGB values. '
- 'Example: 44AA06')
- if abs(txt_lum - bg_lum) < 510/7.:
- # http://www.w3.org/TR/2000/WD-AERT-20000426#color-contrast recommends
- # a threshold of 125, but that is for normal text and too harsh for
- # big colored logos like line names, so we keep the original threshold
- # from r541 (but note that weight has shifted between RGB components).
- problems.InvalidValue('route_color', self.route_color,
- 'The route_text_color and route_color should '
- 'be set to contrasting colors, as they are used '
- 'as the text and background color (respectively) '
- 'for displaying route names. When left blank, '
- 'route_text_color defaults to 000000 (black) and '
- 'route_color defaults to FFFFFF (white). A common '
- 'source of issues here is setting route_color to '
- 'a dark color, while leaving route_text_color set '
- 'to black. In this case, route_text_color should '
- 'be set to a lighter color like FFFFFF to ensure '
- 'a legible contrast between the two.',
- type=TYPE_WARNING)
-
-
-def SortListOfTripByTime(trips):
- trips.sort(key=Trip.GetStartTime)
-
-
-class StopTime(object):
- """
- Represents a single stop of a trip. StopTime contains most of the columns
- from the stop_times.txt file. It does not contain trip_id, which is implied
- by the Trip used to access it.
-
- See the Google Transit Feed Specification for the semantic details.
-
- stop: A Stop object
- arrival_time: str in the form HH:MM:SS; readonly after __init__
- departure_time: str in the form HH:MM:SS; readonly after __init__
- arrival_secs: int number of seconds since midnight
- departure_secs: int number of seconds since midnight
- stop_headsign: str
- pickup_type: int
- drop_off_type: int
- shape_dist_traveled: float
- stop_id: str; readonly
- stop_time: The only time given for this stop. If present, it is used
- for both arrival and departure time.
- stop_sequence: int
- """
- _REQUIRED_FIELD_NAMES = ['trip_id', 'arrival_time', 'departure_time',
- 'stop_id', 'stop_sequence']
- _OPTIONAL_FIELD_NAMES = ['stop_headsign', 'pickup_type',
- 'drop_off_type', 'shape_dist_traveled']
- _FIELD_NAMES = _REQUIRED_FIELD_NAMES + _OPTIONAL_FIELD_NAMES
- _SQL_FIELD_NAMES = ['trip_id', 'arrival_secs', 'departure_secs',
- 'stop_id', 'stop_sequence', 'stop_headsign',
- 'pickup_type', 'drop_off_type', 'shape_dist_traveled']
-
- __slots__ = ('arrival_secs', 'departure_secs', 'stop_headsign', 'stop',
- 'pickup_type', 'drop_off_type',
- 'shape_dist_traveled', 'stop_sequence')
- def __init__(self, problems, stop,
- arrival_time=None, departure_time=None,
- stop_headsign=None, pickup_type=None, drop_off_type=None,
- shape_dist_traveled=None, arrival_secs=None,
- departure_secs=None, stop_time=None, stop_sequence=None):
- if stop_time != None:
- arrival_time = departure_time = stop_time
-
- if arrival_secs != None:
- self.arrival_secs = arrival_secs
- elif arrival_time in (None, ""):
- self.arrival_secs = None # Untimed
- arrival_time = None
- else:
- try:
- self.arrival_secs = TimeToSecondsSinceMidnight(arrival_time)
- except Error:
- problems.InvalidValue('arrival_time', arrival_time)
- self.arrival_secs = None
-
- if departure_secs != None:
- self.departure_secs = departure_secs
- elif departure_time in (None, ""):
- self.departure_secs = None
- departure_time = None
- else:
- try:
- self.departure_secs = TimeToSecondsSinceMidnight(departure_time)
- except Error:
- problems.InvalidValue('departure_time', departure_time)
- self.departure_secs = None
-
- if not isinstance(stop, Stop):
- # Not quite correct, but better than letting the problem propagate
- problems.InvalidValue('stop', stop)
- self.stop = stop
- self.stop_headsign = stop_headsign
-
- if pickup_type in (None, ""):
- self.pickup_type = None
- else:
- try:
- pickup_type = int(pickup_type)
- except ValueError:
- problems.InvalidValue('pickup_type', pickup_type)
- else:
- if pickup_type < 0 or pickup_type > 3:
- problems.InvalidValue('pickup_type', pickup_type)
- self.pickup_type = pickup_type
-
- if drop_off_type in (None, ""):
- self.drop_off_type = None
- else:
- try:
- drop_off_type = int(drop_off_type)
- except ValueError:
- problems.InvalidValue('drop_off_type', drop_off_type)
- else:
- if drop_off_type < 0 or drop_off_type > 3:
- problems.InvalidValue('drop_off_type', drop_off_type)
- self.drop_off_type = drop_off_type
-
- if (self.pickup_type == 1 and self.drop_off_type == 1 and
- self.arrival_secs == None and self.departure_secs == None):
- problems.OtherProblem('This stop time has a pickup_type and '
- 'drop_off_type of 1, indicating that riders '
- 'can\'t get on or off here. Since it doesn\'t '
- 'define a timepoint either, this entry serves no '
- 'purpose and should be excluded from the trip.',
- type=TYPE_WARNING)
-
- if ((self.arrival_secs != None) and (self.departure_secs != None) and
- (self.departure_secs < self.arrival_secs)):
- problems.InvalidValue('departure_time', departure_time,
- 'The departure time at this stop (%s) is before '
- 'the arrival time (%s). This is often caused by '
- 'problems in the feed exporter\'s time conversion.' %
- (departure_time, arrival_time))
-
- # If the caller passed a valid arrival time but didn't attempt to pass a
- # departure time complain
- if (self.arrival_secs != None and
- self.departure_secs == None and departure_time == None):
- # self.departure_secs might be None because departure_time was invalid,
- # so we need to check both
- problems.MissingValue('departure_time',
- 'arrival_time and departure_time should either '
- 'both be provided or both be left blank. '
- 'It\'s OK to set them both to the same value.')
- # If the caller passed a valid departure time but didn't attempt to pass a
- # arrival time complain
- if (self.departure_secs != None and
- self.arrival_secs == None and arrival_time == None):
- problems.MissingValue('arrival_time',
- 'arrival_time and departure_time should either '
- 'both be provided or both be left blank. '
- 'It\'s OK to set them both to the same value.')
-
- if shape_dist_traveled in (None, ""):
- self.shape_dist_traveled = None
- else:
- try:
- self.shape_dist_traveled = float(shape_dist_traveled)
- except ValueError:
- problems.InvalidValue('shape_dist_traveled', shape_dist_traveled)
- self.shape_dist_traveled = None
-
- if stop_sequence is not None:
- self.stop_sequence = stop_sequence
-
- def GetFieldValuesTuple(self, trip_id):
- """Return a tuple that outputs a row of _FIELD_NAMES.
-
- trip must be provided because it is not stored in StopTime.
- """
- result = []
- for fn in StopTime._FIELD_NAMES:
- if fn == 'trip_id':
- result.append(trip_id)
- else:
- result.append(getattr(self, fn) or '')
- return tuple(result)
-
- def GetSqlValuesTuple(self, trip_id):
- result = []
- for fn in StopTime._SQL_FIELD_NAMES:
- if fn == 'trip_id':
- result.append(trip_id)
- else:
- # This might append None, which will be inserted into SQLite as NULL
- result.append(getattr(self, fn))
- return tuple(result)
-
- def GetTimeSecs(self):
- """Return the first of arrival_secs and departure_secs that is not None.
- If both are None return None."""
- if self.arrival_secs != None:
- return self.arrival_secs
- elif self.departure_secs != None:
- return self.departure_secs
- else:
- return None
-
- def __getattr__(self, name):
- if name == 'stop_id':
- return self.stop.stop_id
- elif name == 'arrival_time':
- return (self.arrival_secs != None and
- FormatSecondsSinceMidnight(self.arrival_secs) or '')
- elif name == 'departure_time':
- return (self.departure_secs != None and
- FormatSecondsSinceMidnight(self.departure_secs) or '')
- elif name == 'shape_dist_traveled':
- return ''
- raise AttributeError(name)
-
-
-class Trip(GenericGTFSObject):
- _REQUIRED_FIELD_NAMES = ['route_id', 'service_id', 'trip_id']
- _FIELD_NAMES = _REQUIRED_FIELD_NAMES + [
- 'trip_headsign', 'direction_id', 'block_id', 'shape_id'
- ]
- _FIELD_NAMES_HEADWAY = ['trip_id', 'start_time', 'end_time', 'headway_secs']
- _TABLE_NAME = "trips"
-
- def __init__(self, headsign=None, service_period=None,
- route=None, trip_id=None, field_dict=None):
- self._schedule = None
- self._headways = [] # [(start_time, end_time, headway_secs)]
- if not field_dict:
- field_dict = {}
- if headsign is not None:
- field_dict['trip_headsign'] = headsign
- if route:
- field_dict['route_id'] = route.route_id
- if trip_id is not None:
- field_dict['trip_id'] = trip_id
- if service_period is not None:
- field_dict['service_id'] = service_period.service_id
- # Earlier versions of transitfeed.py assigned self.service_period here
- # and allowed the caller to set self.service_id. Schedule.Validate
- # checked the service_id attribute if it was assigned and changed it to a
- # service_period attribute. Now only the service_id attribute is used and
- # it is validated by Trip.Validate.
- if service_period is not None:
- # For backwards compatibility
- self.service_id = service_period.service_id
- self.__dict__.update(field_dict)
-
- def GetFieldValuesTuple(self):
- return [getattr(self, fn) or '' for fn in Trip._FIELD_NAMES]
-
- def AddStopTime(self, stop, problems=None, schedule=None, **kwargs):
- """Add a stop to this trip. Stops must be added in the order visited.
-
- Args:
- stop: A Stop object
- kwargs: remaining keyword args passed to StopTime.__init__
-
- Returns:
- None
- """
- if problems is None:
- # TODO: delete this branch when StopTime.__init__ doesn't need a
- # ProblemReporter
- problems = default_problem_reporter
- stoptime = StopTime(problems=problems, stop=stop, **kwargs)
- self.AddStopTimeObject(stoptime, schedule)
-
- def _AddStopTimeObjectUnordered(self, stoptime, schedule):
- """Add StopTime object to this trip.
-
- The trip isn't checked for duplicate sequence numbers so it must be
- validated later."""
- cursor = schedule._connection.cursor()
- insert_query = "INSERT INTO stop_times (%s) VALUES (%s);" % (
- ','.join(StopTime._SQL_FIELD_NAMES),
- ','.join(['?'] * len(StopTime._SQL_FIELD_NAMES)))
- cursor.execute(
- insert_query, stoptime.GetSqlValuesTuple(self.trip_id))
-
- def ReplaceStopTimeObject(self, stoptime, schedule=None):
- """Replace a StopTime object from this trip with the given one.
-
- The existing StopTime is matched by trip_id, stop_sequence and stop_id,
- then replaced with the given 'stoptime'.
- """
-
- if schedule is None:
- schedule = self._schedule
-
- cursor = schedule._connection.cursor()
- cursor.execute("DELETE FROM stop_times WHERE trip_id=? and "
- "stop_sequence=? and stop_id=?",
- (self.trip_id, stoptime.stop_sequence, stoptime.stop_id))
- if cursor.rowcount == 0:
- raise Error, 'Attempted replacement of StopTime object which does not exist'
- self._AddStopTimeObjectUnordered(stoptime, schedule)
-
- def AddStopTimeObject(self, stoptime, schedule=None, problems=None):
- """Add a StopTime object to the end of this trip.
-
- Args:
- stoptime: A StopTime object. Should not be reused in multiple trips.
- schedule: Schedule object containing this trip which must be
- passed to Trip.__init__ or here
- problems: ProblemReporter object for validating the StopTime in its new
- home
-
- Returns:
- None
- """
- if schedule is None:
- schedule = self._schedule
- if schedule is None:
- warnings.warn("Trip must be in a schedule; the _schedule attribute "
- "is used to access the stop_times table", DeprecationWarning)
- if problems is None:
- problems = schedule.problem_reporter
-
- new_secs = stoptime.GetTimeSecs()
- cursor = schedule._connection.cursor()
- cursor.execute("SELECT max(stop_sequence), max(arrival_secs), "
- "max(departure_secs) FROM stop_times WHERE trip_id=?",
- (self.trip_id,))
- row = cursor.fetchone()
- if row[0] is None:
- # This is the first stop_time of the trip
- stoptime.stop_sequence = 1
- if new_secs == None:
- problems.OtherProblem(
- 'No time for first StopTime of trip_id "%s"' % (self.trip_id,))
- else:
- stoptime.stop_sequence = row[0] + 1
- prev_secs = max(row[1], row[2])
- if new_secs != None and new_secs < prev_secs:
- problems.OtherProblem(
- 'out of order stop time for stop_id=%s trip_id=%s %s < %s' %
- (EncodeUnicode(stoptime.stop_id), EncodeUnicode(self.trip_id),
- FormatSecondsSinceMidnight(new_secs),
- FormatSecondsSinceMidnight(prev_secs)))
- self._AddStopTimeObjectUnordered(stoptime, schedule)
-
- def GetTimeStops(self):
- """Return a list of (arrival_secs, departure_secs, stop) tuples.
-
- Caution: arrival_secs and departure_secs may be 0 (a false value meaning a
- stop at midnight) or None (a false value meaning the stop is untimed)."""
- return [(st.arrival_secs, st.departure_secs, st.stop) for st in
- self.GetStopTimes()]
-
- def GetCountStopTimes(self):
- """Return the number of stops made by this trip."""
- cursor = self._schedule._connection.cursor()
- cursor.execute(
- 'SELECT count(*) FROM stop_times WHERE trip_id=?', (self.trip_id,))
- return cursor.fetchone()[0]
-
- def GetTimeInterpolatedStops(self):
- """Return a list of (secs, stoptime, is_timepoint) tuples.
-
- secs will always be an int. If the StopTime object does not have explicit
- times this method guesses using distance. stoptime is a StopTime object and
- is_timepoint is a bool.
-
- Raises:
- ValueError if this trip does not have the times needed to interpolate
- """
- rv = []
-
- stoptimes = self.GetStopTimes()
- # If there are no stoptimes [] is the correct return value but if the start
- # or end are missing times there is no correct return value.
- if not stoptimes:
- return []
- if (stoptimes[0].GetTimeSecs() is None or
- stoptimes[-1].GetTimeSecs() is None):
- raise ValueError("%s must have time at first and last stop" % (self))
-
- cur_timepoint = None
- next_timepoint = None
- distance_between_timepoints = 0
- distance_traveled_between_timepoints = 0
-
- for i, st in enumerate(stoptimes):
- if st.GetTimeSecs() != None:
- cur_timepoint = st
- distance_between_timepoints = 0
- distance_traveled_between_timepoints = 0
- if i + 1 < len(stoptimes):
- k = i + 1
- distance_between_timepoints += ApproximateDistanceBetweenStops(stoptimes[k-1].stop, stoptimes[k].stop)
- while stoptimes[k].GetTimeSecs() == None:
- k += 1
- distance_between_timepoints += ApproximateDistanceBetweenStops(stoptimes[k-1].stop, stoptimes[k].stop)
- next_timepoint = stoptimes[k]
- rv.append((st.GetTimeSecs(), st, True))
- else:
- distance_traveled_between_timepoints += ApproximateDistanceBetweenStops(stoptimes[i-1].stop, st.stop)
- distance_percent = distance_traveled_between_timepoints / distance_between_timepoints
- total_time = next_timepoint.GetTimeSecs() - cur_timepoint.GetTimeSecs()
- time_estimate = distance_percent * total_time + cur_timepoint.GetTimeSecs()
- rv.append((int(round(time_estimate)), st, False))
-
- return rv
-
- def ClearStopTimes(self):
- """Remove all stop times from this trip.
-
- StopTime objects previously returned by GetStopTimes are unchanged but are
- no longer associated with this trip.
- """
- cursor = self._schedule._connection.cursor()
- cursor.execute('DELETE FROM stop_times WHERE trip_id=?', (self.trip_id,))
-
- def GetStopTimes(self, problems=None):
- """Return a sorted list of StopTime objects for this trip."""
- # In theory problems=None should be safe because data from database has been
- # validated. See comment in _LoadStopTimes for why this isn't always true.
- cursor = self._schedule._connection.cursor()
- cursor.execute(
- 'SELECT arrival_secs,departure_secs,stop_headsign,pickup_type,'
- 'drop_off_type,shape_dist_traveled,stop_id,stop_sequence FROM '
- 'stop_times WHERE '
- 'trip_id=? ORDER BY stop_sequence', (self.trip_id,))
- stop_times = []
- for row in cursor.fetchall():
- stop = self._schedule.GetStop(row[6])
- stop_times.append(StopTime(problems=problems, stop=stop, arrival_secs=row[0],
- departure_secs=row[1],
- stop_headsign=row[2],
- pickup_type=row[3],
- drop_off_type=row[4],
- shape_dist_traveled=row[5],
- stop_sequence=row[7]))
- return stop_times
-
- def GetHeadwayStopTimes(self, problems=None):
- """Return a list of StopTime objects for each headway-based run.
-
- Returns:
- a list of list of StopTime objects. Each list of StopTime objects
- represents one run. If this trip doesn't have headways returns an empty
- list.
- """
- stoptimes_list = [] # list of stoptime lists to be returned
- stoptime_pattern = self.GetStopTimes()
- first_secs = stoptime_pattern[0].arrival_secs # first time of the trip
- # for each start time of a headway run
- for run_secs in self.GetHeadwayStartTimes():
- # stop time list for a headway run
- stoptimes = []
- # go through the pattern and generate stoptimes
- for st in stoptime_pattern:
- arrival_secs, departure_secs = None, None # defaults when this stoptime is not a timepoint
- if st.arrival_secs != None:
- arrival_secs = st.arrival_secs - first_secs + run_secs
- if st.departure_secs != None:
- departure_secs = st.departure_secs - first_secs + run_secs
- # append stoptime
- stoptimes.append(StopTime(problems=problems, stop=st.stop,
- arrival_secs=arrival_secs,
- departure_secs=departure_secs,
- stop_headsign=st.stop_headsign,
- pickup_type=st.pickup_type,
- drop_off_type=st.drop_off_type,
- shape_dist_traveled=st.shape_dist_traveled,
- stop_sequence=st.stop_sequence))
- # add stoptimes to the stoptimes_list
- stoptimes_list.append(stoptimes)
- return stoptimes_list
-
- def GetStartTime(self, problems=default_problem_reporter):
- """Return the first time of the trip. TODO: For trips defined by frequency
- return the first time of the first trip."""
- cursor = self._schedule._connection.cursor()
- cursor.execute(
- 'SELECT arrival_secs,departure_secs FROM stop_times WHERE '
- 'trip_id=? ORDER BY stop_sequence LIMIT 1', (self.trip_id,))
- (arrival_secs, departure_secs) = cursor.fetchone()
- if arrival_secs != None:
- return arrival_secs
- elif departure_secs != None:
- return departure_secs
- else:
- problems.InvalidValue('departure_time', '',
- 'The first stop_time in trip %s is missing '
- 'times.' % self.trip_id)
-
- def GetHeadwayStartTimes(self):
- """Return a list of start time for each headway-based run.
-
- Returns:
- a sorted list of seconds since midnight, the start time of each run. If
- this trip doesn't have headways returns an empty list."""
- start_times = []
- # for each headway period of the trip
- for start_secs, end_secs, headway_secs in self.GetHeadwayPeriodTuples():
- # reset run secs to the start of the timeframe
- run_secs = start_secs
- while run_secs < end_secs:
- start_times.append(run_secs)
- # increment current run secs by headway secs
- run_secs += headway_secs
- return start_times
-
- def GetEndTime(self, problems=default_problem_reporter):
- """Return the last time of the trip. TODO: For trips defined by frequency
- return the last time of the last trip."""
- cursor = self._schedule._connection.cursor()
- cursor.execute(
- 'SELECT arrival_secs,departure_secs FROM stop_times WHERE '
- 'trip_id=? ORDER BY stop_sequence DESC LIMIT 1', (self.trip_id,))
- (arrival_secs, departure_secs) = cursor.fetchone()
- if departure_secs != None:
- return departure_secs
- elif arrival_secs != None:
- return arrival_secs
- else:
- problems.InvalidValue('arrival_time', '',
- 'The last stop_time in trip %s is missing '
- 'times.' % self.trip_id)
-
- def _GenerateStopTimesTuples(self):
- """Generator for rows of the stop_times file"""
- stoptimes = self.GetStopTimes()
- for i, st in enumerate(stoptimes):
- yield st.GetFieldValuesTuple(self.trip_id)
-
- def GetStopTimesTuples(self):
- results = []
- for time_tuple in self._GenerateStopTimesTuples():
- results.append(time_tuple)
- return results
-
- def GetPattern(self):
- """Return a tuple of Stop objects, in the order visited"""
- stoptimes = self.GetStopTimes()
- return tuple(st.stop for st in stoptimes)
-
- def AddHeadwayPeriod(self, start_time, end_time, headway_secs,
- problem_reporter=default_problem_reporter):
- """Adds a period to this trip during which the vehicle travels
- at regular intervals (rather than specifying exact times for each stop).
-
- Args:
- start_time: The time at which this headway period starts, either in
- numerical seconds since midnight or as "HH:MM:SS" since midnight.
- end_time: The time at which this headway period ends, either in
- numerical seconds since midnight or as "HH:MM:SS" since midnight.
- This value should be larger than start_time.
- headway_secs: The amount of time, in seconds, between occurrences of
- this trip.
- problem_reporter: Optional parameter that can be used to select
- how any errors in the other input parameters will be reported.
- Returns:
- None
- """
- if start_time == None or start_time == '': # 0 is OK
- problem_reporter.MissingValue('start_time')
- return
- if isinstance(start_time, basestring):
- try:
- start_time = TimeToSecondsSinceMidnight(start_time)
- except Error:
- problem_reporter.InvalidValue('start_time', start_time)
- return
- elif start_time < 0:
- problem_reporter.InvalidValue('start_time', start_time)
- return
-
- if end_time == None or end_time == '':
- problem_reporter.MissingValue('end_time')
- return
- if isinstance(end_time, basestring):
- try:
- end_time = TimeToSecondsSinceMidnight(end_time)
- except Error:
- problem_reporter.InvalidValue('end_time', end_time)
- return
- elif end_time < 0:
- problem_reporter.InvalidValue('end_time', end_time)
- return
-
- if not headway_secs:
- problem_reporter.MissingValue('headway_secs')
- return
- try:
- headway_secs = int(headway_secs)
- except ValueError:
- problem_reporter.InvalidValue('headway_secs', headway_secs)
- return
-
- if headway_secs <= 0:
- problem_reporter.InvalidValue('headway_secs', headway_secs)
- return
-
- if end_time <= start_time:
- problem_reporter.InvalidValue('end_time', end_time,
- 'should be greater than start_time')
-
- self._headways.append((start_time, end_time, headway_secs))
-
- def ClearHeadwayPeriods(self):
- self._headways = []
-
- def _HeadwayOutputTuple(self, headway):
- return (self.trip_id,
- FormatSecondsSinceMidnight(headway[0]),
- FormatSecondsSinceMidnight(headway[1]),
- unicode(headway[2]))
-
- def GetHeadwayPeriodOutputTuples(self):
- tuples = []
- for headway in self._headways:
- tuples.append(self._HeadwayOutputTuple(headway))
- return tuples
-
- def GetHeadwayPeriodTuples(self):
- return self._headways
-
- def __getattr__(self, name):
- if name == 'service_period':
- assert self._schedule, "Must be in a schedule to get service_period"
- return self._schedule.GetServicePeriod(self.service_id)
- elif name == 'pattern_id':
- if '_pattern_id' not in self.__dict__:
- self.__dict__['_pattern_id'] = hash(self.GetPattern())
- return self.__dict__['_pattern_id']
- else:
- return GenericGTFSObject.__getattr__(self, name)
-
- def Validate(self, problems, validate_children=True):
- """Validate attributes of this object.
-
- Check that this object has all required values set to a valid value without
- reference to the rest of the schedule. If the _schedule attribute is set
- then check that references such as route_id and service_id are correct.
-
- Args:
- problems: A ProblemReporter object
- validate_children: if True and the _schedule attribute is set then call
- ValidateChildren
- """
- if IsEmpty(self.route_id):
- problems.MissingValue('route_id')
- if 'service_period' in self.__dict__:
- # Some tests assign to the service_period attribute. Patch up self before
- # proceeding with validation. See also comment in Trip.__init__.
- self.service_id = self.__dict__['service_period'].service_id
- del self.service_period
- if IsEmpty(self.service_id):
- problems.MissingValue('service_id')
- if IsEmpty(self.trip_id):
- problems.MissingValue('trip_id')
- if hasattr(self, 'direction_id') and (not IsEmpty(self.direction_id)) and \
- (self.direction_id != '0') and (self.direction_id != '1'):
- problems.InvalidValue('direction_id', self.direction_id,
- 'direction_id must be "0" or "1"')
- if self._schedule:
- if self.shape_id and self.shape_id not in self._schedule._shapes:
- problems.InvalidValue('shape_id', self.shape_id)
- if self.route_id and self.route_id not in self._schedule.routes:
- problems.InvalidValue('route_id', self.route_id)
- if (self.service_id and
- self.service_id not in self._schedule.service_periods):
- problems.InvalidValue('service_id', self.service_id)
-
- if validate_children:
- self.ValidateChildren(problems)
-
- def ValidateChildren(self, problems):
- """Validate StopTimes and headways of this trip."""
- assert self._schedule, "Trip must be in a schedule to ValidateChildren"
- # TODO: validate distance values in stop times (if applicable)
- cursor = self._schedule._connection.cursor()
- cursor.execute("SELECT COUNT(stop_sequence) AS a FROM stop_times "
- "WHERE trip_id=? GROUP BY stop_sequence HAVING a > 1",
- (self.trip_id,))
- for row in cursor:
- problems.InvalidValue('stop_sequence', row[0],
- 'Duplicate stop_sequence in trip_id %s' %
- self.trip_id)
-
- stoptimes = self.GetStopTimes(problems)
- if stoptimes:
- if stoptimes[0].arrival_secs is None and stoptimes[0].departure_secs is None:
- problems.OtherProblem(
- 'No time for start of trip_id "%s"' % (self.trip_id))
- if stoptimes[-1].arrival_secs is None and stoptimes[-1].departure_secs is None:
- problems.OtherProblem(
- 'No time for end of trip_id "%s"' % (self.trip_id))
-
- # Sorts the stoptimes by sequence and then checks that the arrival time
- # for each time point is after the departure time of the previous.
- stoptimes.sort(key=lambda x: x.stop_sequence)
- prev_departure = 0
- prev_stop = None
- prev_distance = None
- try:
- route_type = self._schedule.GetRoute(self.route_id).route_type
- max_speed = Route._ROUTE_TYPES[route_type]['max_speed']
- except KeyError, e:
- # If route_type cannot be found, assume it is 0 (Tram) for checking
- # speeds between stops.
- max_speed = Route._ROUTE_TYPES[0]['max_speed']
- for timepoint in stoptimes:
- # Distance should be a nonnegative float, so in Python 2 it always
- # compares greater than None.
- distance = timepoint.shape_dist_traveled
- if distance is not None:
- if distance > prev_distance and distance >= 0:
- prev_distance = distance
- else:
- if distance == prev_distance:
- type = TYPE_WARNING
- else:
- type = TYPE_ERROR
- problems.InvalidValue('stoptimes.shape_dist_traveled', distance,
- 'For the trip %s the stop %s has shape_dist_traveled=%s, '
- 'which should be larger than the previous ones. In this '
- 'case, the previous distance was %s.' %
- (self.trip_id, timepoint.stop_id, distance, prev_distance),
- type=type)
-
- if timepoint.arrival_secs is not None:
- self._CheckSpeed(prev_stop, timepoint.stop, prev_departure,
- timepoint.arrival_secs, max_speed, problems)
-
- if timepoint.arrival_secs >= prev_departure:
- prev_departure = timepoint.departure_secs
- prev_stop = timepoint.stop
- else:
- problems.OtherProblem('Timetravel detected! Arrival time '
- 'is before previous departure '
- 'at sequence number %s in trip %s' %
- (timepoint.stop_sequence, self.trip_id))
-
- if self.shape_id and self.shape_id in self._schedule._shapes:
- shape = self._schedule.GetShape(self.shape_id)
- max_shape_dist = shape.max_distance
- st = stoptimes[-1]
- if (st.shape_dist_traveled and
- st.shape_dist_traveled > max_shape_dist):
- problems.OtherProblem(
- 'In stop_times.txt, the stop with trip_id=%s and '
- 'stop_sequence=%d has shape_dist_traveled=%f, which is larger '
- 'than the max shape_dist_traveled=%f of the corresponding '
- 'shape (shape_id=%s)' %
- (self.trip_id, st.stop_sequence, st.shape_dist_traveled,
- max_shape_dist, self.shape_id), type=TYPE_WARNING)
-
- # shape_dist_traveled is valid in shape if max_shape_dist larger than
- # 0.
- if max_shape_dist > 0:
- for st in stoptimes:
- if st.shape_dist_traveled is None:
- continue
- pt = shape.GetPointWithDistanceTraveled(st.shape_dist_traveled)
- if pt:
- stop = self._schedule.GetStop(st.stop_id)
- distance = ApproximateDistance(stop.stop_lat, stop.stop_lon,
- pt[0], pt[1])
- if distance > MAX_DISTANCE_FROM_STOP_TO_SHAPE:
- problems.StopTooFarFromShapeWithDistTraveled(
- self.trip_id, stop.stop_name, stop.stop_id, pt[2],
- self.shape_id, distance, MAX_DISTANCE_FROM_STOP_TO_SHAPE)
-
- # O(n^2), but we don't anticipate many headway periods per trip
- for headway_index, headway in enumerate(self._headways[0:-1]):
- for other in self._headways[headway_index + 1:]:
- if (other[0] < headway[1]) and (other[1] > headway[0]):
- problems.OtherProblem('Trip contains overlapping headway periods '
- '%s and %s' %
- (self._HeadwayOutputTuple(headway),
- self._HeadwayOutputTuple(other)))
-
- def _CheckSpeed(self, prev_stop, next_stop, depart_time,
- arrive_time, max_speed, problems):
- # Checks that the speed between two stops is not faster than max_speed
- if prev_stop != None:
- try:
- time_between_stops = arrive_time - depart_time
- except TypeError:
- return
-
- try:
- dist_between_stops = \
- ApproximateDistanceBetweenStops(next_stop, prev_stop)
- except TypeError, e:
- return
-
- if time_between_stops == 0:
- # HASTUS makes it hard to output GTFS with times to the nearest second;
- # it rounds times to the nearest minute. Therefore stop_times at the
- same time ending in :00 are fairly common. Times off by no more than
- 30 seconds have not caused problems. See
- # http://code.google.com/p/googletransitdatafeed/issues/detail?id=193
- # Show a warning if times are not rounded to the nearest minute or
- # distance is more than max_speed for one minute.
- if depart_time % 60 != 0 or dist_between_stops / 1000 * 60 > max_speed:
- problems.TooFastTravel(self.trip_id,
- prev_stop.stop_name,
- next_stop.stop_name,
- dist_between_stops,
- time_between_stops,
- speed=None,
- type=TYPE_WARNING)
- return
- # This needs floating point division for precision.
- speed_between_stops = ((float(dist_between_stops) / 1000) /
- (float(time_between_stops) / 3600))
- if speed_between_stops > max_speed:
- problems.TooFastTravel(self.trip_id,
- prev_stop.stop_name,
- next_stop.stop_name,
- dist_between_stops,
- time_between_stops,
- speed_between_stops,
- type=TYPE_WARNING)
-
-# TODO: move these into a separate file
-class ISO4217(object):
- """Represents the set of currencies recognized by the ISO-4217 spec."""
- codes = { # map of alpha code to numerical code
- 'AED': 784, 'AFN': 971, 'ALL': 8, 'AMD': 51, 'ANG': 532, 'AOA': 973,
- 'ARS': 32, 'AUD': 36, 'AWG': 533, 'AZN': 944, 'BAM': 977, 'BBD': 52,
- 'BDT': 50, 'BGN': 975, 'BHD': 48, 'BIF': 108, 'BMD': 60, 'BND': 96,
- 'BOB': 68, 'BOV': 984, 'BRL': 986, 'BSD': 44, 'BTN': 64, 'BWP': 72,
- 'BYR': 974, 'BZD': 84, 'CAD': 124, 'CDF': 976, 'CHE': 947, 'CHF': 756,
- 'CHW': 948, 'CLF': 990, 'CLP': 152, 'CNY': 156, 'COP': 170, 'COU': 970,
- 'CRC': 188, 'CUP': 192, 'CVE': 132, 'CYP': 196, 'CZK': 203, 'DJF': 262,
- 'DKK': 208, 'DOP': 214, 'DZD': 12, 'EEK': 233, 'EGP': 818, 'ERN': 232,
- 'ETB': 230, 'EUR': 978, 'FJD': 242, 'FKP': 238, 'GBP': 826, 'GEL': 981,
- 'GHC': 288, 'GIP': 292, 'GMD': 270, 'GNF': 324, 'GTQ': 320, 'GYD': 328,
- 'HKD': 344, 'HNL': 340, 'HRK': 191, 'HTG': 332, 'HUF': 348, 'IDR': 360,
- 'ILS': 376, 'INR': 356, 'IQD': 368, 'IRR': 364, 'ISK': 352, 'JMD': 388,
- 'JOD': 400, 'JPY': 392, 'KES': 404, 'KGS': 417, 'KHR': 116, 'KMF': 174,
- 'KPW': 408, 'KRW': 410, 'KWD': 414, 'KYD': 136, 'KZT': 398, 'LAK': 418,
- 'LBP': 422, 'LKR': 144, 'LRD': 430, 'LSL': 426, 'LTL': 440, 'LVL': 428,
- 'LYD': 434, 'MAD': 504, 'MDL': 498, 'MGA': 969, 'MKD': 807, 'MMK': 104,
- 'MNT': 496, 'MOP': 446, 'MRO': 478, 'MTL': 470, 'MUR': 480, 'MVR': 462,
- 'MWK': 454, 'MXN': 484, 'MXV': 979, 'MYR': 458, 'MZN': 943, 'NAD': 516,
- 'NGN': 566, 'NIO': 558, 'NOK': 578, 'NPR': 524, 'NZD': 554, 'OMR': 512,
- 'PAB': 590, 'PEN': 604, 'PGK': 598, 'PHP': 608, 'PKR': 586, 'PLN': 985,
- 'PYG': 600, 'QAR': 634, 'ROL': 642, 'RON': 946, 'RSD': 941, 'RUB': 643,
- 'RWF': 646, 'SAR': 682, 'SBD': 90, 'SCR': 690, 'SDD': 736, 'SDG': 938,
- 'SEK': 752, 'SGD': 702, 'SHP': 654, 'SKK': 703, 'SLL': 694, 'SOS': 706,
- 'SRD': 968, 'STD': 678, 'SYP': 760, 'SZL': 748, 'THB': 764, 'TJS': 972,
- 'TMM': 795, 'TND': 788, 'TOP': 776, 'TRY': 949, 'TTD': 780, 'TWD': 901,
- 'TZS': 834, 'UAH': 980, 'UGX': 800, 'USD': 840, 'USN': 997, 'USS': 998,
- 'UYU': 858, 'UZS': 860, 'VEB': 862, 'VND': 704, 'VUV': 548, 'WST': 882,
- 'XAF': 950, 'XAG': 961, 'XAU': 959, 'XBA': 955, 'XBB': 956, 'XBC': 957,
- 'XBD': 958, 'XCD': 951, 'XDR': 960, 'XFO': None, 'XFU': None, 'XOF': 952,
- 'XPD': 964, 'XPF': 953, 'XPT': 962, 'XTS': 963, 'XXX': 999, 'YER': 886,
- 'ZAR': 710, 'ZMK': 894, 'ZWD': 716,
- }
-
-
-class Fare(object):
- """Represents a fare type."""
- _REQUIRED_FIELD_NAMES = ['fare_id', 'price', 'currency_type',
- 'payment_method', 'transfers']
- _FIELD_NAMES = _REQUIRED_FIELD_NAMES + ['transfer_duration']
-
- def __init__(self,
- fare_id=None, price=None, currency_type=None,
- payment_method=None, transfers=None, transfer_duration=None,
- field_list=None):
- self.rules = []
- (self.fare_id, self.price, self.currency_type, self.payment_method,
- self.transfers, self.transfer_duration) = \
- (fare_id, price, currency_type, payment_method,
- transfers, transfer_duration)
- if field_list:
- (self.fare_id, self.price, self.currency_type, self.payment_method,
- self.transfers, self.transfer_duration) = field_list
-
- try:
- self.price = float(self.price)
- except (TypeError, ValueError):
- pass
- try:
- self.payment_method = int(self.payment_method)
- except (TypeError, ValueError):
- pass
- if self.transfers == None or self.transfers == "":
- self.transfers = None
- else:
- try:
- self.transfers = int(self.transfers)
- except (TypeError, ValueError):
- pass
- if self.transfer_duration == None or self.transfer_duration == "":
- self.transfer_duration = None
- else:
- try:
- self.transfer_duration = int(self.transfer_duration)
- except (TypeError, ValueError):
- pass
-
- def GetFareRuleList(self):
- return self.rules
-
- def ClearFareRules(self):
- self.rules = []
-
- def GetFieldValuesTuple(self):
- return [getattr(self, fn) for fn in Fare._FIELD_NAMES]
-
- def __getitem__(self, name):
- return getattr(self, name)
-
- def __eq__(self, other):
- if not other:
- return False
-
- if id(self) == id(other):
- return True
-
- if self.GetFieldValuesTuple() != other.GetFieldValuesTuple():
- return False
-
- self_rules = [r.GetFieldValuesTuple() for r in self.GetFareRuleList()]
- self_rules.sort()
- other_rules = [r.GetFieldValuesTuple() for r in other.GetFareRuleList()]
- other_rules.sort()
- return self_rules == other_rules
-
- def __ne__(self, other):
- return not self.__eq__(other)
-
- def Validate(self, problems=default_problem_reporter):
- if IsEmpty(self.fare_id):
- problems.MissingValue("fare_id")
-
- if self.price == None:
- problems.MissingValue("price")
- elif not isinstance(self.price, float) and not isinstance(self.price, int):
- problems.InvalidValue("price", self.price)
- elif self.price < 0:
- problems.InvalidValue("price", self.price)
-
- if IsEmpty(self.currency_type):
- problems.MissingValue("currency_type")
- elif self.currency_type not in ISO4217.codes:
- problems.InvalidValue("currency_type", self.currency_type)
-
- if self.payment_method == "" or self.payment_method == None:
- problems.MissingValue("payment_method")
- elif (not isinstance(self.payment_method, int) or
- self.payment_method not in range(0, 2)):
- problems.InvalidValue("payment_method", self.payment_method)
-
- if not ((self.transfers == None) or
- (isinstance(self.transfers, int) and
- self.transfers in range(0, 3))):
- problems.InvalidValue("transfers", self.transfers)
-
- if ((self.transfer_duration != None) and
- not isinstance(self.transfer_duration, int)):
- problems.InvalidValue("transfer_duration", self.transfer_duration)
- if self.transfer_duration and (self.transfer_duration < 0):
- problems.InvalidValue("transfer_duration", self.transfer_duration)
- if (self.transfer_duration and (self.transfer_duration > 0) and
- self.transfers == 0):
- problems.InvalidValue("transfer_duration", self.transfer_duration,
- "can't have a nonzero transfer_duration for "
- "a fare that doesn't allow transfers!")
-
-
-class FareRule(object):
- """This class represents a rule that determines which itineraries a
- fare rule applies to."""
- _REQUIRED_FIELD_NAMES = ['fare_id']
- _FIELD_NAMES = _REQUIRED_FIELD_NAMES + ['route_id',
- 'origin_id', 'destination_id',
- 'contains_id']
-
- def __init__(self, fare_id=None, route_id=None,
- origin_id=None, destination_id=None, contains_id=None,
- field_list=None):
- (self.fare_id, self.route_id, self.origin_id, self.destination_id,
- self.contains_id) = \
- (fare_id, route_id, origin_id, destination_id, contains_id)
- if field_list:
- (self.fare_id, self.route_id, self.origin_id, self.destination_id,
- self.contains_id) = field_list
-
- # canonicalize non-content values as None
- if not self.route_id:
- self.route_id = None
- if not self.origin_id:
- self.origin_id = None
- if not self.destination_id:
- self.destination_id = None
- if not self.contains_id:
- self.contains_id = None
-
- def GetFieldValuesTuple(self):
- return [getattr(self, fn) for fn in FareRule._FIELD_NAMES]
-
- def __getitem__(self, name):
- return getattr(self, name)
-
- def __eq__(self, other):
- if not other:
- return False
-
- if id(self) == id(other):
- return True
-
- return self.GetFieldValuesTuple() == other.GetFieldValuesTuple()
-
- def __ne__(self, other):
- return not self.__eq__(other)
-
-
-class Shape(object):
- """This class represents a geographic shape that corresponds to the route
- taken by one or more Trips."""
- _REQUIRED_FIELD_NAMES = ['shape_id', 'shape_pt_lat', 'shape_pt_lon',
- 'shape_pt_sequence']
- _FIELD_NAMES = _REQUIRED_FIELD_NAMES + ['shape_dist_traveled']
- def __init__(self, shape_id):
- # List of shape point tuples (lat, lon, shape_dist_traveled), where lat and
- # lon give the location of the shape point, and shape_dist_traveled is an
- # increasing metric representing the distance traveled along the shape.
- self.points = []
- # An ID that uniquely identifies a shape in the dataset.
- self.shape_id = shape_id
- # The max shape_dist_traveled of shape points in this shape.
- self.max_distance = 0
- # List of shape_dist_traveled of each shape point.
- self.distance = []
-
- def AddPoint(self, lat, lon, distance=None,
- problems=default_problem_reporter):
-
- try:
- lat = float(lat)
- if abs(lat) > 90.0:
- problems.InvalidValue('shape_pt_lat', lat)
- return
- except (TypeError, ValueError):
- problems.InvalidValue('shape_pt_lat', lat)
- return
-
- try:
- lon = float(lon)
- if abs(lon) > 180.0:
- problems.InvalidValue('shape_pt_lon', lon)
- return
- except (TypeError, ValueError):
- problems.InvalidValue('shape_pt_lon', lon)
- return
-
- if (abs(lat) < 1.0) and (abs(lon) < 1.0):
- problems.InvalidValue('shape_pt_lat', lat,
- 'Point location too close to 0, 0, which means '
- 'that it\'s probably an incorrect location.',
- type=TYPE_WARNING)
- return
-
- if distance == '': # canonicalizing empty string to None for comparison
- distance = None
-
- if distance != None:
- try:
- distance = float(distance)
- if (distance < self.max_distance and not
- (len(self.points) == 0 and distance == 0)): # first one can be 0
- problems.InvalidValue('shape_dist_traveled', distance,
- 'Each subsequent point in a shape should '
- 'have a distance value that\'s at least as '
- 'large as the previous ones. In this case, '
- 'the previous distance was %f.' %
- self.max_distance)
- return
- else:
- self.max_distance = distance
- self.distance.append(distance)
- except (TypeError, ValueError):
- problems.InvalidValue('shape_dist_traveled', distance,
- 'This value should be a positive number.')
- return
-
- self.points.append((lat, lon, distance))
-
- def ClearPoints(self):
- self.points = []
-
- def __eq__(self, other):
- if not other:
- return False
-
- if id(self) == id(other):
- return True
-
- return self.points == other.points
-
- def __ne__(self, other):
- return not self.__eq__(other)
-
- def __repr__(self):
- return "<Shape %s>" % self.__dict__
-
- def Validate(self, problems=default_problem_reporter):
- if IsEmpty(self.shape_id):
- problems.MissingValue('shape_id')
-
- if not self.points:
- problems.OtherProblem('The shape with shape_id "%s" contains no points.' %
- self.shape_id, type=TYPE_WARNING)
-
- def GetPointWithDistanceTraveled(self, shape_dist_traveled):
- """Returns a point on the shape polyline with the input shape_dist_traveled.
-
- Args:
- shape_dist_traveled: The input shape_dist_traveled.
-
- Returns:
- The shape point as a tuple (lat, lng, shape_dist_traveled), where lat and
- lng give the location of the shape point, and shape_dist_traveled is an
- increasing metric representing the distance traveled along the shape.
- Returns None if the shape data contain an error.
- """
- if not self.distance:
- return None
- if shape_dist_traveled <= self.distance[0]:
- return self.points[0]
- if shape_dist_traveled >= self.distance[-1]:
- return self.points[-1]
-
- index = bisect.bisect(self.distance, shape_dist_traveled)
- (lat0, lng0, dist0) = self.points[index - 1]
- (lat1, lng1, dist1) = self.points[index]
-
- # Interpolate if shape_dist_traveled does not exactly match any point
- # in the shape segment.
- # (lat0, lng0) (lat, lng) (lat1, lng1)
- # -----|--------------------|---------------------|------
- # dist0 shape_dist_traveled dist1
- # \------- ca --------/ \-------- bc -------/
- # \----------------- ba ------------------/
- ca = shape_dist_traveled - dist0
- bc = dist1 - shape_dist_traveled
- ba = bc + ca
- if ba == 0:
- # This only happens when there is a data error in the shape, which should
- # have been caught earlier. Check here to avoid a crash.
- return None
- # This won't work crossing longitude 180 and is only an approximation which
- # works well for short distance.
- lat = (lat1 * ca + lat0 * bc) / ba
- lng = (lng1 * ca + lng0 * bc) / ba
- return (lat, lng, shape_dist_traveled)
-
-
-class ISO639(object):
- # Set of all the 2-letter ISO 639-1 language codes.
- codes_2letter = set([
- 'aa', 'ab', 'ae', 'af', 'ak', 'am', 'an', 'ar', 'as', 'av', 'ay', 'az',
- 'ba', 'be', 'bg', 'bh', 'bi', 'bm', 'bn', 'bo', 'br', 'bs', 'ca', 'ce',
- 'ch', 'co', 'cr', 'cs', 'cu', 'cv', 'cy', 'da', 'de', 'dv', 'dz', 'ee',
- 'el', 'en', 'eo', 'es', 'et', 'eu', 'fa', 'ff', 'fi', 'fj', 'fo', 'fr',
- 'fy', 'ga', 'gd', 'gl', 'gn', 'gu', 'gv', 'ha', 'he', 'hi', 'ho', 'hr',
- 'ht', 'hu', 'hy', 'hz', 'ia', 'id', 'ie', 'ig', 'ii', 'ik', 'io', 'is',
- 'it', 'iu', 'ja', 'jv', 'ka', 'kg', 'ki', 'kj', 'kk', 'kl', 'km', 'kn',
- 'ko', 'kr', 'ks', 'ku', 'kv', 'kw', 'ky', 'la', 'lb', 'lg', 'li', 'ln',
- 'lo', 'lt', 'lu', 'lv', 'mg', 'mh', 'mi', 'mk', 'ml', 'mn', 'mo', 'mr',
- 'ms', 'mt', 'my', 'na', 'nb', 'nd', 'ne', 'ng', 'nl', 'nn', 'no', 'nr',
- 'nv', 'ny', 'oc', 'oj', 'om', 'or', 'os', 'pa', 'pi', 'pl', 'ps', 'pt',
- 'qu', 'rm', 'rn', 'ro', 'ru', 'rw', 'sa', 'sc', 'sd', 'se', 'sg', 'si',
- 'sk', 'sl', 'sm', 'sn', 'so', 'sq', 'sr', 'ss', 'st', 'su', 'sv', 'sw',
- 'ta', 'te', 'tg', 'th', 'ti', 'tk', 'tl', 'tn', 'to', 'tr', 'ts', 'tt',
- 'tw', 'ty', 'ug', 'uk', 'ur', 'uz', 've', 'vi', 'vo', 'wa', 'wo', 'xh',
- 'yi', 'yo', 'za', 'zh', 'zu',
- ])
-
-
-class Agency(GenericGTFSObject):
- """Represents an agency in a schedule.
-
- Callers may assign arbitrary values to instance attributes. __init__ makes no
- attempt at validating the attributes. Call Validate() to check that
- attributes are valid and the agency object is consistent with itself.
-
- Attributes:
- All attributes are strings.
- """
- _REQUIRED_FIELD_NAMES = ['agency_name', 'agency_url', 'agency_timezone']
- _FIELD_NAMES = _REQUIRED_FIELD_NAMES + ['agency_id', 'agency_lang',
- 'agency_phone']
- _TABLE_NAME = 'agency'
-
- def __init__(self, name=None, url=None, timezone=None, id=None,
- field_dict=None, lang=None, **kwargs):
- """Initialize a new Agency object.
-
- Args:
- field_dict: A dictionary mapping attribute name to unicode string
- name: a string, ignored when field_dict is present
- url: a string, ignored when field_dict is present
- timezone: a string, ignored when field_dict is present
- id: a string, ignored when field_dict is present
- kwargs: arbitrary keyword arguments may be used to add attributes to the
- new object, ignored when field_dict is present
- """
- self._schedule = None
-
- if not field_dict:
- if name:
- kwargs['agency_name'] = name
- if url:
- kwargs['agency_url'] = url
- if timezone:
- kwargs['agency_timezone'] = timezone
- if id:
- kwargs['agency_id'] = id
- if lang:
- kwargs['agency_lang'] = lang
- field_dict = kwargs
-
- self.__dict__.update(field_dict)
-
- def Validate(self, problems=default_problem_reporter):
- """Validate attribute values and this object's internal consistency.
-
- Returns:
- True iff all validation checks passed.
- """
- found_problem = False
- for required in Agency._REQUIRED_FIELD_NAMES:
- if IsEmpty(getattr(self, required, None)):
- problems.MissingValue(required)
- found_problem = True
-
- if self.agency_url and not IsValidURL(self.agency_url):
- problems.InvalidValue('agency_url', self.agency_url)
- found_problem = True
-
- if (not IsEmpty(self.agency_lang) and
- self.agency_lang.lower() not in ISO639.codes_2letter):
- problems.InvalidValue('agency_lang', self.agency_lang)
- found_problem = True
-
- try:
- import pytz
- if self.agency_timezone not in pytz.common_timezones:
- problems.InvalidValue(
- 'agency_timezone',
- self.agency_timezone,
- '"%s" is not a common timezone name according to pytz version %s' %
- (self.agency_timezone, pytz.VERSION))
- found_problem = True
- except ImportError: # no pytz
- print ("Timezone not checked "
- "(install pytz package for timezone validation)")
- return not found_problem
-
-
-class Transfer(object):
- """Represents a transfer in a schedule"""
- _REQUIRED_FIELD_NAMES = ['from_stop_id', 'to_stop_id', 'transfer_type']
- _FIELD_NAMES = _REQUIRED_FIELD_NAMES + ['min_transfer_time']
-
- def __init__(self, schedule=None, from_stop_id=None, to_stop_id=None, transfer_type=None,
- min_transfer_time=None, field_dict=None):
- if schedule is not None:
- self._schedule = weakref.proxy(schedule) # See weakref comment at top
- else:
- self._schedule = None
- if field_dict:
- self.__dict__.update(field_dict)
- else:
- self.from_stop_id = from_stop_id
- self.to_stop_id = to_stop_id
- self.transfer_type = transfer_type
- self.min_transfer_time = min_transfer_time
-
- if getattr(self, 'transfer_type', None) in ("", None):
- # Use the default (recommended transfer) if the attribute is not set or blank.
- self.transfer_type = 0
- else:
- try:
- self.transfer_type = NonNegIntStringToInt(self.transfer_type)
- except (TypeError, ValueError):
- pass
-
- if hasattr(self, 'min_transfer_time'):
- try:
- self.min_transfer_time = NonNegIntStringToInt(self.min_transfer_time)
- except (TypeError, ValueError):
- pass
- else:
- self.min_transfer_time = None
-
- def GetFieldValuesTuple(self):
- return [getattr(self, fn) for fn in Transfer._FIELD_NAMES]
-
- def __getitem__(self, name):
- return getattr(self, name)
-
- def __eq__(self, other):
- if not other:
- return False
-
- if id(self) == id(other):
- return True
-
- return self.GetFieldValuesTuple() == other.GetFieldValuesTuple()
-
- def __ne__(self, other):
- return not self.__eq__(other)
-
- def __repr__(self):
- return "<Transfer %s>" % self.__dict__
-
- def Validate(self, problems=default_problem_reporter):
- if IsEmpty(self.from_stop_id):
- problems.MissingValue('from_stop_id')
- elif self._schedule:
- if self.from_stop_id not in self._schedule.stops.keys():
- problems.InvalidValue('from_stop_id', self.from_stop_id)
-
- if IsEmpty(self.to_stop_id):
- problems.MissingValue('to_stop_id')
- elif self._schedule:
- if self.to_stop_id not in self._schedule.stops.keys():
- problems.InvalidValue('to_stop_id', self.to_stop_id)
-
- if not IsEmpty(self.transfer_type):
- if (not isinstance(self.transfer_type, int)) or \
- (self.transfer_type not in range(0, 4)):
- problems.InvalidValue('transfer_type', self.transfer_type)
-
- if not IsEmpty(self.min_transfer_time):
- if (not isinstance(self.min_transfer_time, int)) or \
- self.min_transfer_time < 0:
- problems.InvalidValue('min_transfer_time', self.min_transfer_time)
-
-
-class ServicePeriod(object):
- """Represents a service, which identifies a set of dates when one or more
- trips operate."""
- _DAYS_OF_WEEK = [
- 'monday', 'tuesday', 'wednesday', 'thursday', 'friday',
- 'saturday', 'sunday'
- ]
- _FIELD_NAMES_REQUIRED = [
- 'service_id', 'start_date', 'end_date'
- ] + _DAYS_OF_WEEK
- _FIELD_NAMES = _FIELD_NAMES_REQUIRED # no optional fields in this one
- _FIELD_NAMES_CALENDAR_DATES = ['service_id', 'date', 'exception_type']
-
- def __init__(self, id=None, field_list=None):
- self.original_day_values = []
- if field_list:
- self.service_id = field_list[self._FIELD_NAMES.index('service_id')]
- self.day_of_week = [False] * len(self._DAYS_OF_WEEK)
-
- for day in self._DAYS_OF_WEEK:
- value = field_list[self._FIELD_NAMES.index(day)] or '' # can be None
- self.original_day_values += [value.strip()]
- self.day_of_week[self._DAYS_OF_WEEK.index(day)] = (value == u'1')
-
- self.start_date = field_list[self._FIELD_NAMES.index('start_date')]
- self.end_date = field_list[self._FIELD_NAMES.index('end_date')]
- else:
- self.service_id = id
- self.day_of_week = [False] * 7
- self.start_date = None
- self.end_date = None
- self.date_exceptions = {} # Map from 'YYYYMMDD' to 1 (add) or 2 (remove)
-
- def _IsValidDate(self, date):
- if re.match('^\d{8}$', date) == None:
- return False
-
- try:
- time.strptime(date, "%Y%m%d")
- return True
- except ValueError:
- return False
-
- def GetDateRange(self):
- """Return the range over which this ServicePeriod is valid.
-
- The range includes exception dates that add service outside of
- (start_date, end_date), but doesn't shrink the range if exception
- dates take away service at the edges of the range.
-
- Returns:
- A tuple of "YYYYMMDD" strings, (start date, end date) or (None, None) if
- no dates have been given.
- """
- start = self.start_date
- end = self.end_date
-
- for date in self.date_exceptions:
- if self.date_exceptions[date] == 2:
- continue
- if not start or (date < start):
- start = date
- if not end or (date > end):
- end = date
- if start is None:
- start = end
- elif end is None:
- end = start
- # If start and end are None we did a little harmless shuffling
- return (start, end)
-
- def GetCalendarFieldValuesTuple(self):
- """Return the tuple of calendar.txt values or None if this ServicePeriod
- should not be in calendar.txt."""
- if self.start_date and self.end_date:
- return [getattr(self, fn) for fn in ServicePeriod._FIELD_NAMES]
-
- def GenerateCalendarDatesFieldValuesTuples(self):
- """Generates tuples of calendar_dates.txt values. Yields no tuples if
- this ServicePeriod should not be in calendar_dates.txt."""
- for date, exception_type in self.date_exceptions.items():
- yield (self.service_id, date, unicode(exception_type))
-
- def GetCalendarDatesFieldValuesTuples(self):
- """Return a list of date exceptions."""
- result = []
- for date_tuple in self.GenerateCalendarDatesFieldValuesTuples():
- result.append(date_tuple)
- result.sort() # helps with __eq__
- return result
-
- def SetDateHasService(self, date, has_service=True, problems=None):
- if date in self.date_exceptions and problems:
- problems.DuplicateID(('service_id', 'date'),
- (self.service_id, date),
- type=TYPE_WARNING)
- self.date_exceptions[date] = has_service and 1 or 2
-
- def ResetDateToNormalService(self, date):
- if date in self.date_exceptions:
- del self.date_exceptions[date]
-
- def SetStartDate(self, start_date):
- """Set the first day of service as a string in YYYYMMDD format"""
- self.start_date = start_date
-
- def SetEndDate(self, end_date):
- """Set the last day of service as a string in YYYYMMDD format"""
- self.end_date = end_date
-
- def SetDayOfWeekHasService(self, dow, has_service=True):
- """Set service as running (or not) on a day of the week. By default the
- service does not run on any days.
-
- Args:
- dow: 0 for Monday through 6 for Sunday
- has_service: True if this service operates on dow, False if it does not.
-
- Returns:
- None
- """
- assert(dow >= 0 and dow < 7)
- self.day_of_week[dow] = has_service
-
- def SetWeekdayService(self, has_service=True):
- """Set service as running (or not) on all of Monday through Friday."""
- for i in range(0, 5):
- self.SetDayOfWeekHasService(i, has_service)
-
- def SetWeekendService(self, has_service=True):
- """Set service as running (or not) on Saturday and Sunday."""
- self.SetDayOfWeekHasService(5, has_service)
- self.SetDayOfWeekHasService(6, has_service)
-
- def SetServiceId(self, service_id):
- """Set the service_id for this service period. Generally the default will
- suffice so you won't need to call this method."""
- self.service_id = service_id
-
- def IsActiveOn(self, date, date_object=None):
- """Test if this service period is active on a date.
-
- Args:
- date: a string of form "YYYYMMDD"
- date_object: a date object representing the same date as date.
- This parameter is optional, and present only for performance
- reasons.
- If the caller constructs the date string from a date object
- that date object can be passed directly, thus avoiding the
- costly conversion from string to date object.
-
- Returns:
- True iff this service is active on date.
- """
- if date in self.date_exceptions:
- if self.date_exceptions[date] == 1:
- return True
- else:
- return False
- if (self.start_date and self.end_date and self.start_date <= date and
- date <= self.end_date):
- if date_object is None:
- date_object = DateStringToDateObject(date)
- return self.day_of_week[date_object.weekday()]
- return False
-
- def ActiveDates(self):
- """Return dates this service period is active as a list of "YYYYMMDD"."""
- (earliest, latest) = self.GetDateRange()
- if earliest is None:
- return []
- dates = []
- date_it = DateStringToDateObject(earliest)
- date_end = DateStringToDateObject(latest)
- delta = datetime.timedelta(days=1)
- while date_it <= date_end:
- date_it_string = date_it.strftime("%Y%m%d")
- if self.IsActiveOn(date_it_string, date_it):
- dates.append(date_it_string)
- date_it = date_it + delta
- return dates
-
- def __getattr__(self, name):
- try:
- # Return 1 if value in day_of_week is True, 0 otherwise
- return (self.day_of_week[ServicePeriod._DAYS_OF_WEEK.index(name)]
- and 1 or 0)
- except KeyError:
- pass
- except ValueError: # not a day of the week
- pass
- raise AttributeError(name)
-
- def __getitem__(self, name):
- return getattr(self, name)
-
- def __eq__(self, other):
- if not other:
- return False
-
- if id(self) == id(other):
- return True
-
- if (self.GetCalendarFieldValuesTuple() !=
- other.GetCalendarFieldValuesTuple()):
- return False
-
- if (self.GetCalendarDatesFieldValuesTuples() !=
- other.GetCalendarDatesFieldValuesTuples()):
- return False
-
- return True
-
- def __ne__(self, other):
- return not self.__eq__(other)
-
- def Validate(self, problems=default_problem_reporter):
- if IsEmpty(self.service_id):
- problems.MissingValue('service_id')
- # self.start_date/self.end_date is None in 3 cases:
- # ServicePeriod created by loader and
- # 1a) self.service_id wasn't in calendar.txt
- # 1b) calendar.txt didn't have a start_date/end_date column
- # ServicePeriod created directly and
- # 2) start_date/end_date wasn't set
- # In case 1a no problem is reported. In case 1b the missing required column
- # generates an error in _ReadCSV so this method should not report another
- # problem. There is no way to tell the difference between cases 1b and 2
- # so case 2 is ignored because making the feedvalidator pretty is more
- # important than perfect validation when an API user makes a mistake.
- start_date = None
- if self.start_date is not None:
- if IsEmpty(self.start_date):
- problems.MissingValue('start_date')
- elif self._IsValidDate(self.start_date):
- start_date = self.start_date
- else:
- problems.InvalidValue('start_date', self.start_date)
- end_date = None
- if self.end_date is not None:
- if IsEmpty(self.end_date):
- problems.MissingValue('end_date')
- elif self._IsValidDate(self.end_date):
- end_date = self.end_date
- else:
- problems.InvalidValue('end_date', self.end_date)
- if start_date and end_date and end_date < start_date:
- problems.InvalidValue('end_date', end_date,
- 'end_date of %s is earlier than '
- 'start_date of "%s"' %
- (end_date, start_date))
- if self.original_day_values:
- index = 0
- for value in self.original_day_values:
- column_name = self._DAYS_OF_WEEK[index]
- if IsEmpty(value):
- problems.MissingValue(column_name)
- elif (value != u'0') and (value != u'1'):
- problems.InvalidValue(column_name, value)
- index += 1
- if (True not in self.day_of_week and
- 1 not in self.date_exceptions.values()):
- problems.OtherProblem('Service period with service_id "%s" '
- 'doesn\'t have service on any days '
- 'of the week.' % self.service_id,
- type=TYPE_WARNING)
- for date in self.date_exceptions:
- if not self._IsValidDate(date):
- problems.InvalidValue('date', date)
-
-
-class CsvUnicodeWriter:
- """
- Create a wrapper around a csv writer object which can safely write unicode
- values. Passes all arguments to csv.writer.
- """
- def __init__(self, *args, **kwargs):
- self.writer = csv.writer(*args, **kwargs)
-
- def writerow(self, row):
- """Write row to the csv file. Any unicode strings in row are encoded as
- utf-8."""
- encoded_row = []
- for s in row:
- if isinstance(s, unicode):
- encoded_row.append(s.encode("utf-8"))
- else:
- encoded_row.append(s)
- try:
- self.writer.writerow(encoded_row)
- except Exception, e:
- print 'error writing %s as %s' % (row, encoded_row)
- raise e
-
- def writerows(self, rows):
- """Write rows to the csv file. Any unicode strings in rows are encoded as
- utf-8."""
- for row in rows:
- self.writerow(row)
-
- def __getattr__(self, name):
- return getattr(self.writer, name)
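The wrapper above exists because Python 2's `csv` module cannot write unicode strings directly. A minimal Python 3 sketch of the same pattern (wrap `csv.writer`, normalize each cell before delegating; the class name here is hypothetical, and under Python 3 `csv` already accepts text, so the normalization is reduced to `str()` conversion):

```python
import csv
import io

class RowNormalizingWriter:
    """Sketch of the CsvUnicodeWriter pattern: wrap csv.writer and
    normalize every cell of a row before writing. The original encoded
    unicode cells to UTF-8 bytes for Python 2; here non-str cells are
    simply converted with str()."""
    def __init__(self, *args, **kwargs):
        self.writer = csv.writer(*args, **kwargs)

    def writerow(self, row):
        self.writer.writerow(
            [c if isinstance(c, str) else str(c) for c in row])

    def writerows(self, rows):
        for row in rows:
            self.writerow(row)

buf = io.StringIO()
writer = RowNormalizingWriter(buf)
writer.writerow(['stop_id', u'caf\u00e9', 42])
```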
-
-
-class Schedule:
- """Represents a Schedule, a collection of stops, routes, trips and
- an agency. This is the main class for this module."""
-
- def __init__(self, problem_reporter=default_problem_reporter,
- memory_db=True, check_duplicate_trips=False):
- # Map from table name to list of columns present in this schedule
- self._table_columns = {}
-
- self._agencies = {}
- self.stops = {}
- self.routes = {}
- self.trips = {}
- self.service_periods = {}
- self.fares = {}
- self.fare_zones = {} # represents the set of all known fare zones
- self._shapes = {} # shape_id to Shape
- self._transfers = [] # list of transfers
- self._default_service_period = None
- self._default_agency = None
- self.problem_reporter = problem_reporter
- self._check_duplicate_trips = check_duplicate_trips
- self.ConnectDb(memory_db)
-
- def AddTableColumn(self, table, column):
- """Add column to table if it is not already there."""
- if column not in self._table_columns[table]:
- self._table_columns[table].append(column)
-
- def AddTableColumns(self, table, columns):
- """Add columns to table if they are not already there.
-
- Args:
- table: table name as a string
- columns: an iterable of column names"""
- table_columns = self._table_columns.setdefault(table, [])
- for attr in columns:
- if attr not in table_columns:
- table_columns.append(attr)
-
- def GetTableColumns(self, table):
- """Return list of columns in a table."""
- return self._table_columns[table]
-
- def __del__(self):
- if hasattr(self, '_temp_db_filename'):
- os.remove(self._temp_db_filename)
-
- def ConnectDb(self, memory_db):
- if memory_db:
- self._connection = sqlite.connect(":memory:")
- else:
- try:
- self._temp_db_file = tempfile.NamedTemporaryFile()
- self._connection = sqlite.connect(self._temp_db_file.name)
- except sqlite.OperationalError:
- # Windows won't let a file be opened twice. mkstemp does not remove the
- # file when all handles to it are closed.
- self._temp_db_file = None
- (fd, self._temp_db_filename) = tempfile.mkstemp(".db")
- os.close(fd)
- self._connection = sqlite.connect(self._temp_db_filename)
-
- cursor = self._connection.cursor()
- cursor.execute("""CREATE TABLE stop_times (
- trip_id CHAR(50),
- arrival_secs INTEGER,
- departure_secs INTEGER,
- stop_id CHAR(50),
- stop_sequence INTEGER,
- stop_headsign VARCHAR(100),
- pickup_type INTEGER,
- drop_off_type INTEGER,
- shape_dist_traveled FLOAT);""")
- cursor.execute("""CREATE INDEX trip_index ON stop_times (trip_id);""")
- cursor.execute("""CREATE INDEX stop_index ON stop_times (stop_id);""")
-
- def GetStopBoundingBox(self):
- return (min(s.stop_lat for s in self.stops.values()),
- min(s.stop_lon for s in self.stops.values()),
- max(s.stop_lat for s in self.stops.values()),
- max(s.stop_lon for s in self.stops.values()),
- )
-
- def AddAgency(self, name, url, timezone, agency_id=None):
- """Adds an agency to this schedule."""
- agency = Agency(name, url, timezone, agency_id)
- self.AddAgencyObject(agency)
- return agency
-
- def AddAgencyObject(self, agency, problem_reporter=None, validate=True):
- assert agency._schedule is None
-
- if not problem_reporter:
- problem_reporter = self.problem_reporter
-
- if agency.agency_id in self._agencies:
- problem_reporter.DuplicateID('agency_id', agency.agency_id)
- return
-
- self.AddTableColumns('agency', agency._ColumnNames())
- agency._schedule = weakref.proxy(self)
-
- if validate:
- agency.Validate(problem_reporter)
- self._agencies[agency.agency_id] = agency
-
- def GetAgency(self, agency_id):
- """Return Agency with agency_id or throw a KeyError"""
- return self._agencies[agency_id]
-
- def GetDefaultAgency(self):
- """Return the default Agency. If no default Agency has been set select the
- default depending on how many Agency objects are in the Schedule. If there
- are 0 make a new Agency the default, if there is 1 it becomes the default,
- if there is more than 1 then return None.
- """
- if not self._default_agency:
- if len(self._agencies) == 0:
- self.NewDefaultAgency()
- elif len(self._agencies) == 1:
- self._default_agency = self._agencies.values()[0]
- return self._default_agency
-
- def NewDefaultAgency(self, **kwargs):
- """Create a new Agency object and make it the default agency for this Schedule"""
- agency = Agency(**kwargs)
- if not agency.agency_id:
- agency.agency_id = FindUniqueId(self._agencies)
- self._default_agency = agency
- self.SetDefaultAgency(agency, validate=False) # Blank agency won't validate
- return agency
-
- def SetDefaultAgency(self, agency, validate=True):
- """Make agency the default and add it to the schedule if not already added"""
- assert isinstance(agency, Agency)
- self._default_agency = agency
- if agency.agency_id not in self._agencies:
- self.AddAgencyObject(agency, validate=validate)
-
- def GetAgencyList(self):
- """Returns the list of Agency objects known to this Schedule."""
- return self._agencies.values()
-
- def GetServicePeriod(self, service_id):
- """Returns the ServicePeriod object with the given ID."""
- return self.service_periods[service_id]
-
- def GetDefaultServicePeriod(self):
- """Return the default ServicePeriod. If no default ServicePeriod has been
- set select the default depending on how many ServicePeriod objects are in
- the Schedule. If there are 0 make a new ServicePeriod the default, if there
- is 1 it becomes the default, if there is more than 1 then return None.
- """
- if not self._default_service_period:
- if len(self.service_periods) == 0:
- self.NewDefaultServicePeriod()
- elif len(self.service_periods) == 1:
- self._default_service_period = self.service_periods.values()[0]
- return self._default_service_period
-
- def NewDefaultServicePeriod(self):
- """Create a new ServicePeriod object, make it the default service period and
- return it. The default service period is used when you create a trip without
- providing an explicit service period. """
- service_period = ServicePeriod()
- service_period.service_id = FindUniqueId(self.service_periods)
- # blank service won't validate in AddServicePeriodObject
- self.SetDefaultServicePeriod(service_period, validate=False)
- return service_period
-
- def SetDefaultServicePeriod(self, service_period, validate=True):
- assert isinstance(service_period, ServicePeriod)
- self._default_service_period = service_period
- if service_period.service_id not in self.service_periods:
- self.AddServicePeriodObject(service_period, validate=validate)
-
- def AddServicePeriodObject(self, service_period, problem_reporter=None,
- validate=True):
- if not problem_reporter:
- problem_reporter = self.problem_reporter
-
- if service_period.service_id in self.service_periods:
- problem_reporter.DuplicateID('service_id', service_period.service_id)
- return
-
- if validate:
- service_period.Validate(problem_reporter)
- self.service_periods[service_period.service_id] = service_period
-
- def GetServicePeriodList(self):
- return self.service_periods.values()
-
- def GetDateRange(self):
- """Returns a tuple of (earliest, latest) dates on which the service
- periods in the schedule define service, in YYYYMMDD form."""
-
- ranges = [period.GetDateRange() for period in self.GetServicePeriodList()]
- starts = filter(lambda x: x, [item[0] for item in ranges])
- ends = filter(lambda x: x, [item[1] for item in ranges])
-
- if not starts or not ends:
- return (None, None)
-
- return (min(starts), max(ends))
-
- def GetServicePeriodsActiveEachDate(self, date_start, date_end):
- """Return a list of tuples (date, [period1, period2, ...]).
-
- For each date in the range [date_start, date_end) make list of each
- ServicePeriod object which is active.
-
- Args:
- date_start: The first date in the list, a date object
- date_end: The first date after the list, a date object
-
- Returns:
- A list of tuples. Each tuple contains a date object and a list of zero or
- more ServicePeriod objects.
- """
- date_it = date_start
- one_day = datetime.timedelta(days=1)
- date_service_period_list = []
- while date_it < date_end:
- periods_today = []
- date_it_string = date_it.strftime("%Y%m%d")
- for service in self.GetServicePeriodList():
- if service.IsActiveOn(date_it_string, date_it):
- periods_today.append(service)
- date_service_period_list.append((date_it, periods_today))
- date_it += one_day
- return date_service_period_list
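The half-open day-by-day walk used by `GetServicePeriodsActiveEachDate` can be sketched in isolation (a stand-alone generator, assuming only the stdlib `datetime` module; the function name is illustrative):

```python
import datetime

def dates_in_range(date_start, date_end):
    """Yield each date in the half-open range [date_start, date_end),
    mirroring the while-loop with a one-day timedelta above."""
    one_day = datetime.timedelta(days=1)
    date_it = date_start
    while date_it < date_end:
        yield date_it
        date_it += one_day

days = list(dates_in_range(datetime.date(2010, 1, 1),
                           datetime.date(2010, 1, 4)))
```

Because the range is half-open, `date_end` itself is never yielded, matching the docstring's "first date after the list".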
-
-
- def AddStop(self, lat, lng, name):
- """Add a stop to this schedule.
-
- A new stop_id is created for this stop. Do not use this method unless all
- stops in this Schedule are created with it. See source for details.
-
- Args:
- lat: Latitude of the stop as a float or string
- lng: Longitude of the stop as a float or string
- name: Name of the stop, which will appear in the feed
-
- Returns:
- A new Stop object
- """
- # TODO: stop_id isn't guaranteed to be unique and conflicts are not
- # handled. Please fix.
- stop_id = unicode(len(self.stops))
- stop = Stop(stop_id=stop_id, lat=lat, lng=lng, name=name)
- self.AddStopObject(stop)
- return stop
-
- def AddStopObject(self, stop, problem_reporter=None):
- """Add Stop object to this schedule if stop_id is non-blank."""
- assert stop._schedule is None
- if not problem_reporter:
- problem_reporter = self.problem_reporter
-
- if not stop.stop_id:
- return
-
- if stop.stop_id in self.stops:
- problem_reporter.DuplicateID('stop_id', stop.stop_id)
- return
-
- stop._schedule = weakref.proxy(self)
- self.AddTableColumns('stops', stop._ColumnNames())
- self.stops[stop.stop_id] = stop
- if hasattr(stop, 'zone_id') and stop.zone_id:
- self.fare_zones[stop.zone_id] = True
-
- def GetStopList(self):
- return self.stops.values()
-
- def AddRoute(self, short_name, long_name, route_type):
- """Add a route to this schedule.
-
- Args:
- short_name: Short name of the route, such as "71L"
- long_name: Full name of the route, such as "NW 21st Ave/St Helens Rd"
- route_type: A type such as "Tram", "Subway" or "Bus"
- Returns:
- A new Route object
- """
- route_id = unicode(len(self.routes))
- route = Route(short_name=short_name, long_name=long_name,
- route_type=route_type, route_id=route_id)
- route.agency_id = self.GetDefaultAgency().agency_id
- self.AddRouteObject(route)
- return route
-
- def AddRouteObject(self, route, problem_reporter=None):
- if not problem_reporter:
- problem_reporter = self.problem_reporter
-
- route.Validate(problem_reporter)
-
- if route.route_id in self.routes:
- problem_reporter.DuplicateID('route_id', route.route_id)
- return
-
- if route.agency_id not in self._agencies:
- if not route.agency_id and len(self._agencies) == 1:
- # we'll just assume that the route applies to the only agency
- pass
- else:
- problem_reporter.InvalidValue('agency_id', route.agency_id,
- 'Route uses an unknown agency_id.')
- return
-
- self.AddTableColumns('routes', route._ColumnNames())
- route._schedule = weakref.proxy(self)
- self.routes[route.route_id] = route
-
- def GetRouteList(self):
- return self.routes.values()
-
- def GetRoute(self, route_id):
- return self.routes[route_id]
-
- def AddShapeObject(self, shape, problem_reporter=None):
- if not problem_reporter:
- problem_reporter = self.problem_reporter
-
- shape.Validate(problem_reporter)
-
- if shape.shape_id in self._shapes:
- problem_reporter.DuplicateID('shape_id', shape.shape_id)
- return
-
- self._shapes[shape.shape_id] = shape
-
- def GetShapeList(self):
- return self._shapes.values()
-
- def GetShape(self, shape_id):
- return self._shapes[shape_id]
-
- def AddTripObject(self, trip, problem_reporter=None, validate=True):
- if not problem_reporter:
- problem_reporter = self.problem_reporter
-
- if trip.trip_id in self.trips:
- problem_reporter.DuplicateID('trip_id', trip.trip_id)
- return
-
- self.AddTableColumns('trips', trip._ColumnNames())
- trip._schedule = weakref.proxy(self)
- self.trips[trip.trip_id] = trip
-
- # Call Trip.Validate after setting trip._schedule so that references
- # are checked. trip.ValidateChildren will be called directly by
- # schedule.Validate, after stop_times has been loaded.
- if validate:
- if not problem_reporter:
- problem_reporter = self.problem_reporter
- trip.Validate(problem_reporter, validate_children=False)
- try:
- self.routes[trip.route_id]._AddTripObject(trip)
- except KeyError:
- # Invalid route_id was reported in the Trip.Validate call above
- pass
-
- def GetTripList(self):
- return self.trips.values()
-
- def GetTrip(self, trip_id):
- return self.trips[trip_id]
-
- def AddFareObject(self, fare, problem_reporter=None):
- if not problem_reporter:
- problem_reporter = self.problem_reporter
- fare.Validate(problem_reporter)
-
- if fare.fare_id in self.fares:
- problem_reporter.DuplicateID('fare_id', fare.fare_id)
- return
-
- self.fares[fare.fare_id] = fare
-
- def GetFareList(self):
- return self.fares.values()
-
- def GetFare(self, fare_id):
- return self.fares[fare_id]
-
- def AddFareRuleObject(self, rule, problem_reporter=None):
- if not problem_reporter:
- problem_reporter = self.problem_reporter
-
- if IsEmpty(rule.fare_id):
- problem_reporter.MissingValue('fare_id')
- return
-
- if rule.route_id and rule.route_id not in self.routes:
- problem_reporter.InvalidValue('route_id', rule.route_id)
- if rule.origin_id and rule.origin_id not in self.fare_zones:
- problem_reporter.InvalidValue('origin_id', rule.origin_id)
- if rule.destination_id and rule.destination_id not in self.fare_zones:
- problem_reporter.InvalidValue('destination_id', rule.destination_id)
- if rule.contains_id and rule.contains_id not in self.fare_zones:
- problem_reporter.InvalidValue('contains_id', rule.contains_id)
-
- if rule.fare_id in self.fares:
- self.GetFare(rule.fare_id).rules.append(rule)
- else:
- problem_reporter.InvalidValue('fare_id', rule.fare_id,
- '(This fare_id doesn\'t correspond to any '
- 'of the IDs defined in the '
- 'fare attributes.)')
-
- def AddTransferObject(self, transfer, problem_reporter=None):
- assert transfer._schedule is None, "only add Transfer to a schedule once"
- transfer._schedule = weakref.proxy(self) # See weakref comment at top
- if not problem_reporter:
- problem_reporter = self.problem_reporter
-
- transfer.Validate(problem_reporter)
- self._transfers.append(transfer)
-
- def GetTransferList(self):
- return self._transfers
-
- def GetStop(self, id):
- return self.stops[id]
-
- def GetFareZones(self):
- """Returns the list of all fare zones that have been identified by
- the stops that have been added."""
- return self.fare_zones.keys()
-
- def GetNearestStops(self, lat, lon, n=1):
- """Return the n nearest stops to lat,lon"""
- dist_stop_list = []
- for s in self.stops.values():
- # TODO: Use ApproximateDistanceBetweenStops?
- dist = (s.stop_lat - lat)**2 + (s.stop_lon - lon)**2
- if len(dist_stop_list) < n:
- bisect.insort(dist_stop_list, (dist, s))
- elif dist < dist_stop_list[-1][0]:
- bisect.insort(dist_stop_list, (dist, s))
- dist_stop_list.pop() # Remove stop with greatest distance
- return [stop for dist, stop in dist_stop_list]
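The bounded nearest-n selection in `GetNearestStops` (keep a sorted list of at most n candidates, insert with `bisect.insort`, pop the farthest on overflow) can be sketched with plain `(lat, lon)` tuples standing in for Stop objects:

```python
import bisect

def nearest_n(points, lat, lon, n=1):
    """Keep at most n (squared_distance, point) pairs in sorted order;
    when the list overflows, drop the pair with the greatest distance.
    This avoids sorting the full point list."""
    best = []
    for p in points:
        dist = (p[0] - lat) ** 2 + (p[1] - lon) ** 2
        if len(best) < n or dist < best[-1][0]:
            bisect.insort(best, (dist, p))
            if len(best) > n:
                best.pop()  # remove the point with the greatest distance
    return [p for dist, p in best]

result = nearest_n([(0, 0), (1, 1), (5, 5)], 0.1, 0.1, n=2)
```

As in the original, squared degree-space distance is good enough for ranking nearby stops; no square root is needed.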
-
- def GetStopsInBoundingBox(self, north, east, south, west, n):
- """Return a sample of up to n stops in a bounding box"""
- stop_list = []
- for s in self.stops.values():
- if (s.stop_lat <= north and s.stop_lat >= south and
- s.stop_lon <= east and s.stop_lon >= west):
- stop_list.append(s)
- if len(stop_list) == n:
- break
- return stop_list
-
- def Load(self, feed_path, extra_validation=False):
- loader = Loader(feed_path, self, problems=self.problem_reporter,
- extra_validation=extra_validation)
- loader.Load()
-
- def _WriteArchiveString(self, archive, filename, stringio):
- zi = zipfile.ZipInfo(filename)
- # See
- # http://stackoverflow.com/questions/434641/how-do-i-set-permissions-attributes-on-a-file-in-a-zip-file-using-pythons-zipf
- zi.external_attr = 0666 << 16L # Set unix permissions to -rw-rw-rw
- # ZIP_DEFLATED requires zlib. zlib comes with Python 2.4 and 2.5
- zi.compress_type = zipfile.ZIP_DEFLATED
- archive.writestr(zi, stringio.getvalue())
-
- def WriteGoogleTransitFeed(self, file):
- """Output this schedule as a Google Transit Feed in file_name.
-
- Args:
- file: path of new feed file (a string) or a file-like object
-
- Returns:
- None
- """
- # Compression type given when adding each file
- archive = zipfile.ZipFile(file, 'w')
-
- if 'agency' in self._table_columns:
- agency_string = StringIO.StringIO()
- writer = CsvUnicodeWriter(agency_string)
- columns = self.GetTableColumns('agency')
- writer.writerow(columns)
- for a in self._agencies.values():
- writer.writerow([EncodeUnicode(a[c]) for c in columns])
- self._WriteArchiveString(archive, 'agency.txt', agency_string)
-
- calendar_dates_string = StringIO.StringIO()
- writer = CsvUnicodeWriter(calendar_dates_string)
- writer.writerow(ServicePeriod._FIELD_NAMES_CALENDAR_DATES)
- has_data = False
- for period in self.service_periods.values():
- for row in period.GenerateCalendarDatesFieldValuesTuples():
- has_data = True
- writer.writerow(row)
- wrote_calendar_dates = False
- if has_data:
- wrote_calendar_dates = True
- self._WriteArchiveString(archive, 'calendar_dates.txt',
- calendar_dates_string)
-
- calendar_string = StringIO.StringIO()
- writer = CsvUnicodeWriter(calendar_string)
- writer.writerow(ServicePeriod._FIELD_NAMES)
- has_data = False
- for s in self.service_periods.values():
- row = s.GetCalendarFieldValuesTuple()
- if row:
- has_data = True
- writer.writerow(row)
- if has_data or not wrote_calendar_dates:
- self._WriteArchiveString(archive, 'calendar.txt', calendar_string)
-
- if 'stops' in self._table_columns:
- stop_string = StringIO.StringIO()
- writer = CsvUnicodeWriter(stop_string)
- columns = self.GetTableColumns('stops')
- writer.writerow(columns)
- for s in self.stops.values():
- writer.writerow([EncodeUnicode(s[c]) for c in columns])
- self._WriteArchiveString(archive, 'stops.txt', stop_string)
-
- if 'routes' in self._table_columns:
- route_string = StringIO.StringIO()
- writer = CsvUnicodeWriter(route_string)
- columns = self.GetTableColumns('routes')
- writer.writerow(columns)
- for r in self.routes.values():
- writer.writerow([EncodeUnicode(r[c]) for c in columns])
- self._WriteArchiveString(archive, 'routes.txt', route_string)
-
- if 'trips' in self._table_columns:
- trips_string = StringIO.StringIO()
- writer = CsvUnicodeWriter(trips_string)
- columns = self.GetTableColumns('trips')
- writer.writerow(columns)
- for t in self.trips.values():
- writer.writerow([EncodeUnicode(t[c]) for c in columns])
- self._WriteArchiveString(archive, 'trips.txt', trips_string)
-
- # write frequencies.txt (if applicable)
- headway_rows = []
- for trip in self.GetTripList():
- headway_rows += trip.GetHeadwayPeriodOutputTuples()
- if headway_rows:
- headway_string = StringIO.StringIO()
- writer = CsvUnicodeWriter(headway_string)
- writer.writerow(Trip._FIELD_NAMES_HEADWAY)
- writer.writerows(headway_rows)
- self._WriteArchiveString(archive, 'frequencies.txt', headway_string)
-
- # write fares (if applicable)
- if self.GetFareList():
- fare_string = StringIO.StringIO()
- writer = CsvUnicodeWriter(fare_string)
- writer.writerow(Fare._FIELD_NAMES)
- writer.writerows(f.GetFieldValuesTuple() for f in self.GetFareList())
- self._WriteArchiveString(archive, 'fare_attributes.txt', fare_string)
-
- # write fare rules (if applicable)
- rule_rows = []
- for fare in self.GetFareList():
- for rule in fare.GetFareRuleList():
- rule_rows.append(rule.GetFieldValuesTuple())
- if rule_rows:
- rule_string = StringIO.StringIO()
- writer = CsvUnicodeWriter(rule_string)
- writer.writerow(FareRule._FIELD_NAMES)
- writer.writerows(rule_rows)
- self._WriteArchiveString(archive, 'fare_rules.txt', rule_string)
- stop_times_string = StringIO.StringIO()
- writer = CsvUnicodeWriter(stop_times_string)
- writer.writerow(StopTime._FIELD_NAMES)
- for t in self.trips.values():
- writer.writerows(t._GenerateStopTimesTuples())
- self._WriteArchiveString(archive, 'stop_times.txt', stop_times_string)
-
- # write shapes (if applicable)
- shape_rows = []
- for shape in self.GetShapeList():
- seq = 1
- for (lat, lon, dist) in shape.points:
- shape_rows.append((shape.shape_id, lat, lon, seq, dist))
- seq += 1
- if shape_rows:
- shape_string = StringIO.StringIO()
- writer = CsvUnicodeWriter(shape_string)
- writer.writerow(Shape._FIELD_NAMES)
- writer.writerows(shape_rows)
- self._WriteArchiveString(archive, 'shapes.txt', shape_string)
-
- # write transfers (if applicable)
- if self.GetTransferList():
- transfer_string = StringIO.StringIO()
- writer = CsvUnicodeWriter(transfer_string)
- writer.writerow(Transfer._FIELD_NAMES)
- writer.writerows(f.GetFieldValuesTuple() for f in self.GetTransferList())
- self._WriteArchiveString(archive, 'transfers.txt', transfer_string)
-
- archive.close()
-
- def GenerateDateTripsDeparturesList(self, date_start, date_end):
- """Return a list of (date object, number of trips, number of departures).
-
- The list is generated for dates in the range [date_start, date_end).
-
- Args:
- date_start: The first date in the list, a date object
- date_end: The first date after the list, a date object
-
- Returns:
- a list of (date object, number of trips, number of departures) tuples
- """
-
- service_id_to_trips = defaultdict(lambda: 0)
- service_id_to_departures = defaultdict(lambda: 0)
- for trip in self.GetTripList():
- headway_start_times = trip.GetHeadwayStartTimes()
- if headway_start_times:
- trip_runs = len(headway_start_times)
- else:
- trip_runs = 1
-
- service_id_to_trips[trip.service_id] += trip_runs
- service_id_to_departures[trip.service_id] += (
- (trip.GetCountStopTimes() - 1) * trip_runs)
-
- date_services = self.GetServicePeriodsActiveEachDate(date_start, date_end)
- date_trips = []
-
- for date, services in date_services:
- day_trips = sum(service_id_to_trips[s.service_id] for s in services)
- day_departures = sum(
- service_id_to_departures[s.service_id] for s in services)
- date_trips.append((date, day_trips, day_departures))
- return date_trips
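The per-service tallies computed above can be sketched independently (input shape and function name are illustrative; a trip with a frequencies entry runs once per headway start time, otherwise once, and a trip with k stops produces k - 1 departures per run):

```python
from collections import defaultdict

def departures_per_service(trips):
    """trips: iterable of (service_id, stop_count, headway_runs) tuples,
    with headway_runs == 0 for a trip without frequencies (it runs once).
    Returns dicts mapping service_id to trip runs and to departures."""
    service_trips = defaultdict(int)
    service_departures = defaultdict(int)
    for service_id, stop_count, headway_runs in trips:
        runs = headway_runs or 1
        service_trips[service_id] += runs
        service_departures[service_id] += (stop_count - 1) * runs
    return service_trips, service_departures

trip_runs, deps = departures_per_service([('WK', 5, 0), ('WK', 3, 2)])
```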
-
- def ValidateFeedStartAndExpirationDates(self,
- problems,
- first_date,
- last_date,
- today):
- """Validate the start and expiration dates of the feed.
- Issue a warning if it only starts in the future, or if
- it expires within 60 days.
-
- Args:
- problems: The problem reporter object
- first_date: A date object representing the first day the feed is active
- last_date: A date object representing the last day the feed is active
- today: A date object representing the date the validation is being run on
-
- Returns:
- None
- """
- warning_cutoff = today + datetime.timedelta(days=60)
- if last_date < warning_cutoff:
- problems.ExpirationDate(time.mktime(last_date.timetuple()))
-
- if first_date > today:
- problems.FutureService(time.mktime(first_date.timetuple()))
-
- def ValidateServiceGaps(self,
- problems,
- validation_start_date,
- validation_end_date,
- service_gap_interval):
- """Validate consecutive dates without service in the feed.
- Issue a warning if it finds service gaps of at least
- "service_gap_interval" consecutive days in the date range
- [validation_start_date, last_service_date)
-
- Args:
- problems: The problem reporter object
- validation_start_date: A date object representing the date from which the
- validation should take place
- validation_end_date: A date object representing the first day the feed is
- active
- service_gap_interval: An integer indicating how many consecutive days the
- service gaps need to have for a warning to be issued
-
- Returns:
- None
- """
- if service_gap_interval is None:
- return
-
- departures = self.GenerateDateTripsDeparturesList(validation_start_date,
- validation_end_date)
-
- # The first day without service of the _current_ gap
- first_day_without_service = validation_start_date
- # The last day without service of the _current_ gap
- last_day_without_service = validation_start_date
-
- consecutive_days_without_service = 0
-
- for day_date, day_trips, _ in departures:
- if day_trips == 0:
- if consecutive_days_without_service == 0:
- first_day_without_service = day_date
- consecutive_days_without_service += 1
- last_day_without_service = day_date
- else:
- if consecutive_days_without_service >= service_gap_interval:
- problems.TooManyDaysWithoutService(first_day_without_service,
- last_day_without_service,
- consecutive_days_without_service)
-
- consecutive_days_without_service = 0
-
- # We have to check if there is a gap at the end of the specified date range
- if consecutive_days_without_service >= service_gap_interval:
- problems.TooManyDaysWithoutService(first_day_without_service,
- last_day_without_service,
- consecutive_days_without_service)
-
- def Validate(self,
- problems=None,
- validate_children=True,
- today=None,
- service_gap_interval=None):
- """Validates various holistic aspects of the schedule
- (mostly interrelationships between the various data sets)."""
-
- if today is None:
- today = datetime.date.today()
-
- if not problems:
- problems = self.problem_reporter
-
- (start_date, end_date) = self.GetDateRange()
- if not end_date or not start_date:
- problems.OtherProblem('This feed has no effective service dates!',
- type=TYPE_WARNING)
- else:
- try:
- last_service_day = datetime.datetime(
- *(time.strptime(end_date, "%Y%m%d")[0:6])).date()
- first_service_day = datetime.datetime(
- *(time.strptime(start_date, "%Y%m%d")[0:6])).date()
-
- except ValueError:
- # Format of start_date and end_date checked in class ServicePeriod
- pass
-
- else:
-
- self.ValidateFeedStartAndExpirationDates(problems,
- first_service_day,
- last_service_day,
- today)
-
- # We start checking for service gaps a bit in the past if the
- # feed was active then. See
- # http://code.google.com/p/googletransitdatafeed/issues/detail?id=188
- #
- # We subtract 1 from service_gap_interval so that if today has
- # service no warning is issued.
- #
- # Service gaps are searched for only up to one year from today
- if service_gap_interval is not None:
- service_gap_timedelta = datetime.timedelta(
- days=service_gap_interval - 1)
- one_year = datetime.timedelta(days=365)
- self.ValidateServiceGaps(
- problems,
- max(first_service_day,
- today - service_gap_timedelta),
- min(last_service_day,
- today + one_year),
- service_gap_interval)
-
- # TODO: Check Trip fields against valid values
-
- # Check for stops that aren't referenced by any trips and broken
- # parent_station references. Also check that the parent station isn't too
- # far from its child stops.
- for stop in self.stops.values():
- if validate_children:
- stop.Validate(problems)
- cursor = self._connection.cursor()
- cursor.execute("SELECT count(*) FROM stop_times WHERE stop_id=? LIMIT 1",
- (stop.stop_id,))
- count = cursor.fetchone()[0]
- if stop.location_type == 0 and count == 0:
- problems.UnusedStop(stop.stop_id, stop.stop_name)
- elif stop.location_type == 1 and count != 0:
- problems.UsedStation(stop.stop_id, stop.stop_name)
-
- if stop.location_type != 1 and stop.parent_station:
- if stop.parent_station not in self.stops:
- problems.InvalidValue("parent_station",
- EncodeUnicode(stop.parent_station),
- "parent_station '%s' not found for stop_id "
- "'%s' in stops.txt" %
- (EncodeUnicode(stop.parent_station),
- EncodeUnicode(stop.stop_id)))
- elif self.stops[stop.parent_station].location_type != 1:
- problems.InvalidValue("parent_station",
- EncodeUnicode(stop.parent_station),
- "parent_station '%s' of stop_id '%s' must "
- "have location_type=1 in stops.txt" %
- (EncodeUnicode(stop.parent_station),
- EncodeUnicode(stop.stop_id)))
- else:
- parent_station = self.stops[stop.parent_station]
- distance = ApproximateDistanceBetweenStops(stop, parent_station)
- if distance > MAX_DISTANCE_BETWEEN_STOP_AND_PARENT_STATION_ERROR:
- problems.StopTooFarFromParentStation(
- stop.stop_id, stop.stop_name, parent_station.stop_id,
- parent_station.stop_name, distance, TYPE_ERROR)
- elif distance > MAX_DISTANCE_BETWEEN_STOP_AND_PARENT_STATION_WARNING:
- problems.StopTooFarFromParentStation(
- stop.stop_id, stop.stop_name, parent_station.stop_id,
- parent_station.stop_name, distance, TYPE_WARNING)
-
- #TODO: check that every station is used.
- # Then uncomment testStationWithoutReference.
-
- # Check for stops that might represent the same location (specifically,
- # stops that are less than 2 meters apart). First filter out stops without a
- # valid lat and lon. Then sort by latitude, then find the distance between
- # each pair of stations within 2 meters latitude of each other. This avoids
- # doing n^2 comparisons in the average case and doesn't need a spatial
- # index.
- sorted_stops = filter(lambda s: s.stop_lat and s.stop_lon,
- self.GetStopList())
- sorted_stops.sort(key=(lambda x: x.stop_lat))
- TWO_METERS_LAT = 0.000018
- for index, stop in enumerate(sorted_stops[:-1]):
- index += 1
- while ((index < len(sorted_stops)) and
- ((sorted_stops[index].stop_lat - stop.stop_lat) < TWO_METERS_LAT)):
- distance = ApproximateDistanceBetweenStops(stop, sorted_stops[index])
- if distance < 2:
- other_stop = sorted_stops[index]
- if stop.location_type == 0 and other_stop.location_type == 0:
- problems.StopsTooClose(
- EncodeUnicode(stop.stop_name),
- EncodeUnicode(stop.stop_id),
- EncodeUnicode(other_stop.stop_name),
- EncodeUnicode(other_stop.stop_id), distance)
- elif stop.location_type == 1 and other_stop.location_type == 1:
- problems.StationsTooClose(
- EncodeUnicode(stop.stop_name), EncodeUnicode(stop.stop_id),
- EncodeUnicode(other_stop.stop_name),
- EncodeUnicode(other_stop.stop_id), distance)
- elif (stop.location_type in (0, 1) and
- other_stop.location_type in (0, 1)):
- if stop.location_type == 0 and other_stop.location_type == 1:
- this_stop = stop
- this_station = other_stop
- elif stop.location_type == 1 and other_stop.location_type == 0:
- this_stop = other_stop
- this_station = stop
- if this_stop.parent_station != this_station.stop_id:
- problems.DifferentStationTooClose(
- EncodeUnicode(this_stop.stop_name),
- EncodeUnicode(this_stop.stop_id),
- EncodeUnicode(this_station.stop_name),
- EncodeUnicode(this_station.stop_id), distance)
- index += 1
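The latitude-sorted sweep above avoids n^2 comparisons by only checking pairs whose latitudes differ by less than the threshold. A self-contained sketch, using `(lat, lon)` tuples and a degree-space distance as a stand-in for `ApproximateDistanceBetweenStops`:

```python
def close_pairs(points, threshold):
    """Sort points by latitude, then compare each point only against
    later points within `threshold` degrees of latitude. Returns the
    pairs whose degree-space distance is below the threshold."""
    pts = sorted(points)
    pairs = []
    for i, (lat, lon) in enumerate(pts[:-1]):
        j = i + 1
        while j < len(pts) and pts[j][0] - lat < threshold:
            dlat, dlon = pts[j][0] - lat, pts[j][1] - lon
            if (dlat * dlat + dlon * dlon) ** 0.5 < threshold:
                pairs.append(((lat, lon), pts[j]))
            j += 1
    return pairs

pairs = close_pairs([(0.0, 0.0), (0.00001, 0.0), (1.0, 1.0)], 0.0001)
```

Sorting costs O(n log n), and in the average case each point is compared against only its few latitude neighbors, so no spatial index is needed.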
-
- # Check for multiple routes using same short + long name
- route_names = {}
- for route in self.routes.values():
- if validate_children:
- route.Validate(problems)
- short_name = ''
- if not IsEmpty(route.route_short_name):
- short_name = route.route_short_name.lower().strip()
- long_name = ''
- if not IsEmpty(route.route_long_name):
- long_name = route.route_long_name.lower().strip()
- name = (short_name, long_name)
- if name in route_names:
- problems.InvalidValue('route_long_name',
- long_name,
- 'The same combination of '
- 'route_short_name and route_long_name '
- 'shouldn\'t be used for more than one '
- 'route, as it is for the two routes '
- 'with IDs "%s" and "%s".' %
- (route.route_id, route_names[name].route_id),
- type=TYPE_WARNING)
- else:
- route_names[name] = route
-
- stop_types = {} # a dict mapping stop_id to [route_id, route_type, is_match]
- trips = {} # a dict mapping tuple to (route_id, trip_id)
- for trip in sorted(self.trips.values()):
- if trip.route_id not in self.routes:
- continue
- route_type = self.GetRoute(trip.route_id).route_type
- arrival_times = []
- stop_ids = []
- for index, st in enumerate(trip.GetStopTimes(problems)):
- stop_id = st.stop.stop_id
- arrival_times.append(st.arrival_time)
- stop_ids.append(stop_id)
- # Check if a stop belongs to both subway and bus routes.
- if (route_type == Route._ROUTE_TYPE_NAMES['Subway'] or
- route_type == Route._ROUTE_TYPE_NAMES['Bus']):
- if stop_id not in stop_types:
- stop_types[stop_id] = [trip.route_id, route_type, 0]
- elif (stop_types[stop_id][1] != route_type and
- stop_types[stop_id][2] == 0):
- stop_types[stop_id][2] = 1
- if stop_types[stop_id][1] == Route._ROUTE_TYPE_NAMES['Subway']:
- subway_route_id = stop_types[stop_id][0]
- bus_route_id = trip.route_id
- else:
- subway_route_id = trip.route_id
- bus_route_id = stop_types[stop_id][0]
- problems.StopWithMultipleRouteTypes(st.stop.stop_name, stop_id,
- subway_route_id, bus_route_id)
-
- # Check duplicate trips which go through the same stops with same
- # service and start times.
- if self._check_duplicate_trips:
- if not stop_ids or not arrival_times:
- continue
- key = (trip.service_id, min(arrival_times), str(stop_ids))
- if key not in trips:
- trips[key] = (trip.route_id, trip.trip_id)
- else:
- problems.DuplicateTrip(trips[key][1], trips[key][0], trip.trip_id,
- trip.route_id)
-
- # Check that routes' agency IDs are valid, if set
- for route in self.routes.values():
- if (not IsEmpty(route.agency_id) and
- not route.agency_id in self._agencies):
- problems.InvalidValue('agency_id',
- route.agency_id,
- 'The route with ID "%s" specifies agency_id '
- '"%s", which doesn\'t exist.' %
- (route.route_id, route.agency_id))
-
- # Make sure all trips have stop_times
- # We're doing this here instead of in Trip.Validate() so that
- # Trips can be validated without error during the reading of trips.txt
- for trip in self.trips.values():
- trip.ValidateChildren(problems)
- count_stop_times = trip.GetCountStopTimes()
- if not count_stop_times:
- problems.OtherProblem('The trip with the trip_id "%s" doesn\'t have '
- 'any stop times defined.' % trip.trip_id,
- type=TYPE_WARNING)
- if len(trip._headways) > 0: # no stoptimes, but there are headways
- problems.OtherProblem('Frequencies defined, but no stop times given '
- 'in trip %s' % trip.trip_id, type=TYPE_ERROR)
- elif count_stop_times == 1:
- problems.OtherProblem('The trip with the trip_id "%s" only has one '
- 'stop on it; it should have at least one more '
- 'stop so that the riders can leave!' %
- trip.trip_id, type=TYPE_WARNING)
- else:
- # These methods report InvalidValue if there's no first or last time
- trip.GetStartTime(problems=problems)
- trip.GetEndTime(problems=problems)
-
- # Check for unused shapes
- known_shape_ids = set(self._shapes.keys())
- used_shape_ids = set()
- for trip in self.GetTripList():
- used_shape_ids.add(trip.shape_id)
- unused_shape_ids = known_shape_ids - used_shape_ids
- if unused_shape_ids:
- problems.OtherProblem('The shapes with the following shape_ids aren\'t '
- 'used by any trips: %s' %
- ', '.join(unused_shape_ids),
- type=TYPE_WARNING)
-
-
-# Map from literal string that should never be found in the csv data to a human
-# readable description
-INVALID_LINE_SEPARATOR_UTF8 = {
- "\x0c": "ASCII Form Feed 0x0C",
- # May be part of end of line, but not found elsewhere
- "\x0d": "ASCII Carriage Return 0x0D, \\r",
- "\xe2\x80\xa8": "Unicode LINE SEPARATOR U+2028",
- "\xe2\x80\xa9": "Unicode PARAGRAPH SEPARATOR U+2029",
- "\xc2\x85": "Unicode NEXT LINE SEPARATOR U+0085",
-}
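As a rough Python 3 sketch, this table can drive a per-line scan of the raw UTF-8 bytes; `find_invalid_separators` is a hypothetical helper, not library code:

```python
# Byte sequences that should never appear inside csv data, keyed to a
# human-readable description (mirrors INVALID_LINE_SEPARATOR_UTF8 above).
INVALID_LINE_SEPARATOR_UTF8 = {
    b"\x0c": "ASCII Form Feed 0x0C",
    b"\x0d": "ASCII Carriage Return 0x0D, \\r",
    b"\xe2\x80\xa8": "Unicode LINE SEPARATOR U+2028",
    b"\xe2\x80\xa9": "Unicode PARAGRAPH SEPARATOR U+2029",
    b"\xc2\x85": "Unicode NEXT LINE SEPARATOR U+0085",
}

def find_invalid_separators(line_bytes):
    """Return the descriptions of any invalid separator sequences
    found in one line of raw UTF-8 bytes."""
    return [name for seq, name in INVALID_LINE_SEPARATOR_UTF8.items()
            if seq in line_bytes]
```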
-
-class EndOfLineChecker:
- """Wrapper for a file-like object that checks for consistent line ends.
-
- The check for consistent end of lines (all CR LF or all LF) only happens if
- next() is called until it raises StopIteration.
- """
- def __init__(self, f, name, problems):
- """Create new object.
-
- Args:
- f: file-like object to wrap
- name: name to use for f. StringIO objects don't have a name attribute.
- problems: a ProblemReporterBase object
- """
- self._f = f
- self._name = name
- self._crlf = 0
- self._crlf_examples = []
- self._lf = 0
- self._lf_examples = []
- self._line_number = 0 # first line will be number 1
- self._problems = problems
-
- def __iter__(self):
- return self
-
- def next(self):
- """Return next line without end of line marker or raise StopIteration."""
- try:
- next_line = self._f.next()
- except StopIteration:
- self._FinalCheck()
- raise
-
- self._line_number += 1
- m_eol = re.search(r"[\x0a\x0d]*$", next_line)
- if m_eol.group() == "\x0d\x0a":
- self._crlf += 1
- if self._crlf <= 5:
- self._crlf_examples.append(self._line_number)
- elif m_eol.group() == "\x0a":
- self._lf += 1
- if self._lf <= 5:
- self._lf_examples.append(self._line_number)
- elif m_eol.group() == "":
- # Should only happen at the end of the file
- try:
- self._f.next()
- raise RuntimeError("Unexpected row without new line sequence")
- except StopIteration:
- # Will be raised again when EndOfLineChecker.next() is next called
- pass
- else:
- self._problems.InvalidLineEnd(
- codecs.getencoder('string_escape')(m_eol.group())[0],
- (self._name, self._line_number))
- next_line_contents = next_line[0:m_eol.start()]
- for seq, name in INVALID_LINE_SEPARATOR_UTF8.items():
- if next_line_contents.find(seq) != -1:
- self._problems.OtherProblem(
- "Line contains %s" % name,
- context=(self._name, self._line_number))
- return next_line_contents
-
- def _FinalCheck(self):
- if self._crlf > 0 and self._lf > 0:
- crlf_plural = self._crlf > 1 and "s" or ""
- crlf_lines = ", ".join(["%s" % e for e in self._crlf_examples])
- if self._crlf > len(self._crlf_examples):
- crlf_lines += ", ..."
- lf_plural = self._lf > 1 and "s" or ""
- lf_lines = ", ".join(["%s" % e for e in self._lf_examples])
- if self._lf > len(self._lf_examples):
- lf_lines += ", ..."
-
- self._problems.OtherProblem(
- "Found %d CR LF \"\\r\\n\" line end%s (line%s %s) and "
- "%d LF \"\\n\" line end%s (line%s %s). A file must use a "
- "consistent line end." % (self._crlf, crlf_plural, crlf_plural,
- crlf_lines, self._lf, lf_plural,
- lf_plural, lf_lines),
- (self._name,))
- # Prevent _FinalCheck() from reporting the problem twice, in the unlikely
- # case that it is run twice
- self._crlf = 0
- self._lf = 0
-
-
-# Filenames specified in GTFS spec
-KNOWN_FILENAMES = [
- 'agency.txt',
- 'stops.txt',
- 'routes.txt',
- 'trips.txt',
- 'stop_times.txt',
- 'calendar.txt',
- 'calendar_dates.txt',
- 'fare_attributes.txt',
- 'fare_rules.txt',
- 'shapes.txt',
- 'frequencies.txt',
- 'transfers.txt',
-]
-
-class Loader:
- def __init__(self,
- feed_path=None,
- schedule=None,
- problems=default_problem_reporter,
- extra_validation=False,
- load_stop_times=True,
- memory_db=True,
- zip=None,
- check_duplicate_trips=False):
- """Initialize a new Loader object.
-
- Args:
- feed_path: string path to a zip file or directory
- schedule: a Schedule object or None to have one created
- problems: a ProblemReporter object, the default reporter raises an
- exception for each problem
- extra_validation: True if you would like extra validation
- load_stop_times: load the stop_times table, used to speed load time when
- times are not needed. The default is True.
- memory_db: if creating a new Schedule object use an in-memory sqlite
- database instead of creating one in a temporary file
- zip: a zipfile.ZipFile object, optionally used instead of path
- """
- if not schedule:
- schedule = Schedule(problem_reporter=problems, memory_db=memory_db,
- check_duplicate_trips=check_duplicate_trips)
- self._extra_validation = extra_validation
- self._schedule = schedule
- self._problems = problems
- self._path = feed_path
- self._zip = zip
- self._load_stop_times = load_stop_times
-
- def _DetermineFormat(self):
- """Determines whether the feed is in a form that we understand, and
- if so, returns True."""
- if self._zip:
- # If zip was passed to __init__ then path isn't used
- assert not self._path
- return True
-
- if not isinstance(self._path, basestring) and hasattr(self._path, 'read'):
- # A file-like object, used for testing with a StringIO file
- self._zip = zipfile.ZipFile(self._path, mode='r')
- return True
-
- if not os.path.exists(self._path):
- self._problems.FeedNotFound(self._path)
- return False
-
- if self._path.endswith('.zip'):
- try:
- self._zip = zipfile.ZipFile(self._path, mode='r')
- except IOError: # self._path is a directory
- pass
- except zipfile.BadZipfile:
- self._problems.UnknownFormat(self._path)
- return False
-
- if not self._zip and not os.path.isdir(self._path):
- self._problems.UnknownFormat(self._path)
- return False
-
- return True
-
- def _GetFileNames(self):
- """Returns a list of file names in the feed."""
- if self._zip:
- return self._zip.namelist()
- else:
- return os.listdir(self._path)
-
- def _CheckFileNames(self):
- filenames = self._GetFileNames()
- for feed_file in filenames:
- if feed_file not in KNOWN_FILENAMES:
- if not feed_file.startswith('.'):
- # Don't worry about .svn files and other hidden files
- # as this will break the tests.
- self._problems.UnknownFile(feed_file)
-
- def _GetUtf8Contents(self, file_name):
- """Check for errors in file_name and return a string for csv reader."""
- contents = self._FileContents(file_name)
- if not contents: # Missing file
- return
-
- # Check for errors that will prevent csv.reader from working
- if len(contents) >= 2 and contents[0:2] in (codecs.BOM_UTF16_BE,
- codecs.BOM_UTF16_LE):
- self._problems.FileFormat("appears to be encoded in utf-16", (file_name, ))
- # Convert and continue, so we can find more errors
- contents = codecs.getdecoder('utf-16')(contents)[0].encode('utf-8')
-
- null_index = contents.find('\0')
- if null_index != -1:
- # It is easier to get some surrounding text than calculate the exact
- # row_num
- m = re.search(r'.{,20}\0.{,20}', contents, re.DOTALL)
- self._problems.FileFormat(
- "contains a null in text \"%s\" at byte %d" %
- (codecs.getencoder('string_escape')(m.group()), null_index + 1),
- (file_name, ))
- return
-
- # strip out any UTF-8 Byte Order Marker (otherwise it'll be
- # treated as part of the first column name, causing a mis-parse)
- contents = contents.lstrip(codecs.BOM_UTF8)
- return contents
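The final BOM strip deserves care in new code: `bytes.lstrip` removes a *set* of characters rather than a single prefix, so an explicit `startswith` check is the safer pattern. `strip_utf8_bom` below is an illustrative Python 3 helper, not part of the library:

```python
import codecs

def strip_utf8_bom(raw):
    """Remove a single leading UTF-8 byte order mark, if present.
    An explicit startswith check avoids lstrip's character-set
    semantics, which would also eat any further 0xEF/0xBB/0xBF
    bytes that happen to follow the BOM."""
    if raw.startswith(codecs.BOM_UTF8):
        return raw[len(codecs.BOM_UTF8):]
    return raw
```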
-
- def _ReadCsvDict(self, file_name, all_cols, required):
- """Reads lines from file_name, yielding a dict of unicode values."""
- assert file_name.endswith(".txt")
- table_name = file_name[0:-4]
- contents = self._GetUtf8Contents(file_name)
- if not contents:
- return
-
- eol_checker = EndOfLineChecker(StringIO.StringIO(contents),
- file_name, self._problems)
- # The csv module doesn't provide a way to skip trailing space, but when I
- # checked 15/675 feeds had trailing space in a header row and 120 had spaces
- # after fields. Space after header fields can cause a serious parsing
-    # problem, so warn. Space after body fields can cause a problem for time,
-    # integer and id fields; they will be validated at higher levels.
- reader = csv.reader(eol_checker, skipinitialspace=True)
-
- raw_header = reader.next()
- header_occurrences = defaultdict(lambda: 0)
- header = []
- valid_columns = [] # Index into raw_header and raw_row
- for i, h in enumerate(raw_header):
- h_stripped = h.strip()
- if not h_stripped:
- self._problems.CsvSyntax(
- description="The header row should not contain any blank values. "
- "The corresponding column will be skipped for the "
- "entire file.",
- context=(file_name, 1, [''] * len(raw_header), raw_header),
- type=TYPE_ERROR)
- continue
- elif h != h_stripped:
- self._problems.CsvSyntax(
- description="The header row should not contain any "
- "space characters.",
- context=(file_name, 1, [''] * len(raw_header), raw_header),
- type=TYPE_WARNING)
- header.append(h_stripped)
- valid_columns.append(i)
- header_occurrences[h_stripped] += 1
-
- for name, count in header_occurrences.items():
- if count > 1:
- self._problems.DuplicateColumn(
- header=name,
- file_name=file_name,
- count=count)
-
- self._schedule._table_columns[table_name] = header
-
- # check for unrecognized columns, which are often misspellings
- unknown_cols = set(header) - set(all_cols)
- if len(unknown_cols) == len(header):
- self._problems.CsvSyntax(
- description="The header row did not contain any known column "
- "names. The file is most likely missing the header row "
- "or not in the expected CSV format.",
- context=(file_name, 1, [''] * len(raw_header), raw_header),
- type=TYPE_ERROR)
- else:
- for col in unknown_cols:
- # this is provided in order to create a nice colored list of
- # columns in the validator output
- context = (file_name, 1, [''] * len(header), header)
- self._problems.UnrecognizedColumn(file_name, col, context)
-
- missing_cols = set(required) - set(header)
- for col in missing_cols:
- # this is provided in order to create a nice colored list of
- # columns in the validator output
- context = (file_name, 1, [''] * len(header), header)
- self._problems.MissingColumn(file_name, col, context)
-
- line_num = 1 # First line read by reader.next() above
- for raw_row in reader:
- line_num += 1
- if len(raw_row) == 0: # skip extra empty lines in file
- continue
-
- if len(raw_row) > len(raw_header):
- self._problems.OtherProblem('Found too many cells (commas) in line '
- '%d of file "%s". Every row in the file '
- 'should have the same number of cells as '
- 'the header (first line) does.' %
- (line_num, file_name),
- (file_name, line_num),
- type=TYPE_WARNING)
-
- if len(raw_row) < len(raw_header):
- self._problems.OtherProblem('Found missing cells (commas) in line '
- '%d of file "%s". Every row in the file '
- 'should have the same number of cells as '
- 'the header (first line) does.' %
- (line_num, file_name),
- (file_name, line_num),
- type=TYPE_WARNING)
-
- # raw_row is a list of raw bytes which should be valid utf-8. Convert each
- # valid_columns of raw_row into Unicode.
- valid_values = []
- unicode_error_columns = [] # index of valid_values elements with an error
- for i in valid_columns:
- try:
- valid_values.append(raw_row[i].decode('utf-8'))
- except UnicodeDecodeError:
- # Replace all invalid characters with REPLACEMENT CHARACTER (U+FFFD)
- valid_values.append(codecs.getdecoder("utf8")
- (raw_row[i], errors="replace")[0])
- unicode_error_columns.append(len(valid_values) - 1)
- except IndexError:
- break
-
- # The error report may contain a dump of all values in valid_values so
- # problems can not be reported until after converting all of raw_row to
- # Unicode.
- for i in unicode_error_columns:
- self._problems.InvalidValue(header[i], valid_values[i],
- 'Unicode error',
- (file_name, line_num,
- valid_values, header))
-
-
- d = dict(zip(header, valid_values))
- yield (d, line_num, header, valid_values)
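In outline, the header handling above amounts to the following Python 3 sketch; `read_csv_dict` is a simplified stand-in that strips header whitespace, skips blank header cells and blank rows, and omits all problem reporting:

```python
import csv
import io

def read_csv_dict(text, known_cols):
    """Yield one {column: value} dict per data row, roughly as
    _ReadCsvDict does. known_cols would drive the unrecognized-column
    warnings, elided here."""
    reader = csv.reader(io.StringIO(text), skipinitialspace=True)
    raw_header = next(reader)
    # Keep only non-blank, stripped header names and their column indexes.
    header, valid_columns = [], []
    for i, h in enumerate(raw_header):
        if h.strip():
            header.append(h.strip())
            valid_columns.append(i)
    for row in reader:
        if row:  # skip extra empty lines, as the loader does
            yield dict(zip(header,
                           (row[i] for i in valid_columns if i < len(row))))
```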
-
- # TODO: Add testing for this specific function
- def _ReadCSV(self, file_name, cols, required):
- """Reads lines from file_name, yielding a list of unicode values
- corresponding to the column names in cols."""
- contents = self._GetUtf8Contents(file_name)
- if not contents:
- return
-
- eol_checker = EndOfLineChecker(StringIO.StringIO(contents),
- file_name, self._problems)
- reader = csv.reader(eol_checker) # Use excel dialect
-
- header = reader.next()
- header = map(lambda x: x.strip(), header) # trim any whitespace
- header_occurrences = defaultdict(lambda: 0)
- for column_header in header:
- header_occurrences[column_header] += 1
-
- for name, count in header_occurrences.items():
- if count > 1:
- self._problems.DuplicateColumn(
- header=name,
- file_name=file_name,
- count=count)
-
- # check for unrecognized columns, which are often misspellings
- unknown_cols = set(header).difference(set(cols))
- for col in unknown_cols:
- # this is provided in order to create a nice colored list of
- # columns in the validator output
- context = (file_name, 1, [''] * len(header), header)
- self._problems.UnrecognizedColumn(file_name, col, context)
-
- col_index = [-1] * len(cols)
- for i in range(len(cols)):
- if cols[i] in header:
- col_index[i] = header.index(cols[i])
- elif cols[i] in required:
- self._problems.MissingColumn(file_name, cols[i])
-
- row_num = 1
- for row in reader:
- row_num += 1
- if len(row) == 0: # skip extra empty lines in file
- continue
-
- if len(row) > len(header):
- self._problems.OtherProblem('Found too many cells (commas) in line '
- '%d of file "%s". Every row in the file '
- 'should have the same number of cells as '
- 'the header (first line) does.' %
- (row_num, file_name), (file_name, row_num),
- type=TYPE_WARNING)
-
- if len(row) < len(header):
- self._problems.OtherProblem('Found missing cells (commas) in line '
- '%d of file "%s". Every row in the file '
- 'should have the same number of cells as '
- 'the header (first line) does.' %
- (row_num, file_name), (file_name, row_num),
- type=TYPE_WARNING)
-
- result = [None] * len(cols)
- unicode_error_columns = [] # A list of column numbers with an error
- for i in range(len(cols)):
- ci = col_index[i]
- if ci >= 0:
- if len(row) <= ci: # handle short CSV rows
- result[i] = u''
- else:
- try:
- result[i] = row[ci].decode('utf-8').strip()
- except UnicodeDecodeError:
- # Replace all invalid characters with
- # REPLACEMENT CHARACTER (U+FFFD)
- result[i] = codecs.getdecoder("utf8")(row[ci],
- errors="replace")[0].strip()
- unicode_error_columns.append(i)
-
- for i in unicode_error_columns:
- self._problems.InvalidValue(cols[i], result[i],
- 'Unicode error',
- (file_name, row_num, result, cols))
- yield (result, row_num, cols)
-
- def _HasFile(self, file_name):
-    """Returns True if there's a file with the given file_name in the
-    current feed."""
- if self._zip:
- return file_name in self._zip.namelist()
- else:
- file_path = os.path.join(self._path, file_name)
- return os.path.exists(file_path) and os.path.isfile(file_path)
-
- def _FileContents(self, file_name):
- results = None
- if self._zip:
- try:
- results = self._zip.read(file_name)
-      except KeyError: # file not found in archive
- self._problems.MissingFile(file_name)
- return None
- else:
- try:
- data_file = open(os.path.join(self._path, file_name), 'rb')
- results = data_file.read()
- except IOError: # file not found
- self._problems.MissingFile(file_name)
- return None
-
- if not results:
- self._problems.EmptyFile(file_name)
- return results
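The zip-or-directory dispatch above can be condensed into a standalone sketch; `file_contents` is a hypothetical helper that keeps the same missing-file behavior but drops the problem reporting:

```python
import os
import zipfile

def file_contents(source, file_name):
    """Read file_name from either a zipfile.ZipFile or a directory
    path, returning None when the file is missing, as Loader's
    _FileContents does (minus the MissingFile/EmptyFile reports)."""
    if isinstance(source, zipfile.ZipFile):
        try:
            return source.read(file_name)
        except KeyError:  # file not found in archive
            return None
    try:
        with open(os.path.join(source, file_name), 'rb') as f:
            return f.read()
    except IOError:  # file not found
        return None
```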
-
- def _LoadAgencies(self):
- for (d, row_num, header, row) in self._ReadCsvDict('agency.txt',
- Agency._FIELD_NAMES,
- Agency._REQUIRED_FIELD_NAMES):
- self._problems.SetFileContext('agency.txt', row_num, row, header)
- agency = Agency(field_dict=d)
- self._schedule.AddAgencyObject(agency, self._problems)
- self._problems.ClearContext()
-
- def _LoadStops(self):
- for (d, row_num, header, row) in self._ReadCsvDict(
- 'stops.txt',
- Stop._FIELD_NAMES,
- Stop._REQUIRED_FIELD_NAMES):
- self._problems.SetFileContext('stops.txt', row_num, row, header)
-
- stop = Stop(field_dict=d)
- stop.Validate(self._problems)
- self._schedule.AddStopObject(stop, self._problems)
-
- self._problems.ClearContext()
-
- def _LoadRoutes(self):
- for (d, row_num, header, row) in self._ReadCsvDict(
- 'routes.txt',
- Route._FIELD_NAMES,
- Route._REQUIRED_FIELD_NAMES):
- self._problems.SetFileContext('routes.txt', row_num, row, header)
-
- route = Route(field_dict=d)
- self._schedule.AddRouteObject(route, self._problems)
-
- self._problems.ClearContext()
-
- def _LoadCalendar(self):
- file_name = 'calendar.txt'
- file_name_dates = 'calendar_dates.txt'
- if not self._HasFile(file_name) and not self._HasFile(file_name_dates):
- self._problems.MissingFile(file_name)
- return
-
- # map period IDs to (period object, (file_name, row_num, row, cols))
- periods = {}
-
- # process calendar.txt
- if self._HasFile(file_name):
- has_useful_contents = False
- for (row, row_num, cols) in \
- self._ReadCSV(file_name,
- ServicePeriod._FIELD_NAMES,
- ServicePeriod._FIELD_NAMES_REQUIRED):
- context = (file_name, row_num, row, cols)
- self._problems.SetFileContext(*context)
-
- period = ServicePeriod(field_list=row)
-
- if period.service_id in periods:
- self._problems.DuplicateID('service_id', period.service_id)
- else:
- periods[period.service_id] = (period, context)
- self._problems.ClearContext()
-
- # process calendar_dates.txt
- if self._HasFile(file_name_dates):
- # ['service_id', 'date', 'exception_type']
- fields = ServicePeriod._FIELD_NAMES_CALENDAR_DATES
- for (row, row_num, cols) in self._ReadCSV(file_name_dates,
- fields, fields):
- context = (file_name_dates, row_num, row, cols)
- self._problems.SetFileContext(*context)
-
- service_id = row[0]
-
- period = None
- if service_id in periods:
- period = periods[service_id][0]
- else:
- period = ServicePeriod(service_id)
- periods[period.service_id] = (period, context)
-
- exception_type = row[2]
- if exception_type == u'1':
- period.SetDateHasService(row[1], True, self._problems)
- elif exception_type == u'2':
- period.SetDateHasService(row[1], False, self._problems)
- else:
- self._problems.InvalidValue('exception_type', exception_type)
- self._problems.ClearContext()
-
- # Now insert the periods into the schedule object, so that they're
- # validated with both calendar and calendar_dates info present
- for period, context in periods.values():
- self._problems.SetFileContext(*context)
- self._schedule.AddServicePeriodObject(period, self._problems)
- self._problems.ClearContext()
-
- def _LoadShapes(self):
- if not self._HasFile('shapes.txt'):
- return
-
- shapes = {} # shape_id to tuple
- for (row, row_num, cols) in self._ReadCSV('shapes.txt',
- Shape._FIELD_NAMES,
- Shape._REQUIRED_FIELD_NAMES):
- file_context = ('shapes.txt', row_num, row, cols)
- self._problems.SetFileContext(*file_context)
-
- (shape_id, lat, lon, seq, dist) = row
- if IsEmpty(shape_id):
- self._problems.MissingValue('shape_id')
- continue
- try:
- seq = int(seq)
- except (TypeError, ValueError):
- self._problems.InvalidValue('shape_pt_sequence', seq,
- 'Value should be a number (0 or higher)')
- continue
-
- shapes.setdefault(shape_id, []).append((seq, lat, lon, dist, file_context))
- self._problems.ClearContext()
-
- for shape_id, points in shapes.items():
- shape = Shape(shape_id)
- points.sort()
- if points and points[0][0] < 0:
- self._problems.InvalidValue('shape_pt_sequence', points[0][0],
- 'In shape %s, a negative sequence number '
- '%d was found; sequence numbers should be '
- '0 or higher.' % (shape_id, points[0][0]))
-
- last_seq = None
- for (seq, lat, lon, dist, file_context) in points:
- if (seq == last_seq):
- self._problems.SetFileContext(*file_context)
- self._problems.InvalidValue('shape_pt_sequence', seq,
- 'The sequence number %d occurs more '
- 'than once in shape %s.' %
- (seq, shape_id))
- last_seq = seq
- shape.AddPoint(lat, lon, dist, self._problems)
- self._problems.ClearContext()
-
- self._schedule.AddShapeObject(shape, self._problems)
-
- def _LoadTrips(self):
- for (d, row_num, header, row) in self._ReadCsvDict(
- 'trips.txt',
- Trip._FIELD_NAMES,
- Trip._REQUIRED_FIELD_NAMES):
- self._problems.SetFileContext('trips.txt', row_num, row, header)
-
- trip = Trip(field_dict=d)
- self._schedule.AddTripObject(trip, self._problems)
-
- self._problems.ClearContext()
-
- def _LoadFares(self):
- if not self._HasFile('fare_attributes.txt'):
- return
- for (row, row_num, cols) in self._ReadCSV('fare_attributes.txt',
- Fare._FIELD_NAMES,
- Fare._REQUIRED_FIELD_NAMES):
- self._problems.SetFileContext('fare_attributes.txt', row_num, row, cols)
-
- fare = Fare(field_list=row)
- self._schedule.AddFareObject(fare, self._problems)
-
- self._problems.ClearContext()
-
- def _LoadFareRules(self):
- if not self._HasFile('fare_rules.txt'):
- return
- for (row, row_num, cols) in self._ReadCSV('fare_rules.txt',
- FareRule._FIELD_NAMES,
- FareRule._REQUIRED_FIELD_NAMES):
- self._problems.SetFileContext('fare_rules.txt', row_num, row, cols)
-
- rule = FareRule(field_list=row)
- self._schedule.AddFareRuleObject(rule, self._problems)
-
- self._problems.ClearContext()
-
- def _LoadHeadways(self):
- file_name = 'frequencies.txt'
- if not self._HasFile(file_name): # headways are an optional feature
- return
-
- # ['trip_id', 'start_time', 'end_time', 'headway_secs']
- fields = Trip._FIELD_NAMES_HEADWAY
- modified_trips = {}
- for (row, row_num, cols) in self._ReadCSV(file_name, fields, fields):
- self._problems.SetFileContext(file_name, row_num, row, cols)
- (trip_id, start_time, end_time, headway_secs) = row
- try:
- trip = self._schedule.GetTrip(trip_id)
- trip.AddHeadwayPeriod(start_time, end_time, headway_secs,
- self._problems)
- modified_trips[trip_id] = trip
- except KeyError:
- self._problems.InvalidValue('trip_id', trip_id)
- self._problems.ClearContext()
-
- for trip in modified_trips.values():
- trip.Validate(self._problems)
-
- def _LoadStopTimes(self):
- for (row, row_num, cols) in self._ReadCSV('stop_times.txt',
- StopTime._FIELD_NAMES,
- StopTime._REQUIRED_FIELD_NAMES):
- file_context = ('stop_times.txt', row_num, row, cols)
- self._problems.SetFileContext(*file_context)
-
- (trip_id, arrival_time, departure_time, stop_id, stop_sequence,
- stop_headsign, pickup_type, drop_off_type, shape_dist_traveled) = row
-
- try:
- sequence = int(stop_sequence)
- except (TypeError, ValueError):
- self._problems.InvalidValue('stop_sequence', stop_sequence,
- 'This should be a number.')
- continue
- if sequence < 0:
- self._problems.InvalidValue('stop_sequence', sequence,
- 'Sequence numbers should be 0 or higher.')
-
- if stop_id not in self._schedule.stops:
- self._problems.InvalidValue('stop_id', stop_id,
- 'This value wasn\'t defined in stops.txt')
- continue
- stop = self._schedule.stops[stop_id]
- if trip_id not in self._schedule.trips:
- self._problems.InvalidValue('trip_id', trip_id,
- 'This value wasn\'t defined in trips.txt')
- continue
- trip = self._schedule.trips[trip_id]
-
- # If self._problems.Report returns then StopTime.__init__ will return
- # even if the StopTime object has an error. Thus this code may add a
- # StopTime that didn't validate to the database.
- # Trip.GetStopTimes then tries to make a StopTime from the invalid data
- # and calls the problem reporter for errors. An ugly solution is to
- # wrap problems and a better solution is to move all validation out of
- # __init__. For now make sure Trip.GetStopTimes gets a problem reporter
- # when called from Trip.Validate.
- stop_time = StopTime(self._problems, stop, arrival_time,
- departure_time, stop_headsign,
- pickup_type, drop_off_type,
- shape_dist_traveled, stop_sequence=sequence)
- trip._AddStopTimeObjectUnordered(stop_time, self._schedule)
- self._problems.ClearContext()
-
- # stop_times are validated in Trip.ValidateChildren, called by
- # Schedule.Validate
-
- def _LoadTransfers(self):
- file_name = 'transfers.txt'
- if not self._HasFile(file_name): # transfers are an optional feature
- return
- for (d, row_num, header, row) in self._ReadCsvDict(file_name,
- Transfer._FIELD_NAMES,
- Transfer._REQUIRED_FIELD_NAMES):
- self._problems.SetFileContext(file_name, row_num, row, header)
- transfer = Transfer(field_dict=d)
- self._schedule.AddTransferObject(transfer, self._problems)
- self._problems.ClearContext()
-
- def Load(self):
- self._problems.ClearContext()
- if not self._DetermineFormat():
- return self._schedule
-
- self._CheckFileNames()
-
- self._LoadAgencies()
- self._LoadStops()
- self._LoadRoutes()
- self._LoadCalendar()
- self._LoadShapes()
- self._LoadTrips()
- self._LoadHeadways()
- if self._load_stop_times:
- self._LoadStopTimes()
- self._LoadFares()
- self._LoadFareRules()
- self._LoadTransfers()
-
- if self._zip:
- self._zip.close()
- self._zip = None
-
- if self._extra_validation:
- self._schedule.Validate(self._problems, validate_children=False)
-
- return self._schedule
-
-
-class ShapeLoader(Loader):
- """A subclass of Loader that only loads the shapes from a GTFS file."""
-
- def __init__(self, *args, **kwargs):
- """Initialize a new ShapeLoader object.
-
- See Loader.__init__ for argument documentation.
- """
- Loader.__init__(self, *args, **kwargs)
-
- def Load(self):
- self._LoadShapes()
- return self._schedule
-
--- a/origin-src/transitfeed-1.2.5/build/lib/transitfeed/shapelib.py
+++ /dev/null
@@ -1,613 +1,1 @@
-#!/usr/bin/python2.4
-#
-# Copyright 2007 Google Inc. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""A library for manipulating points and polylines.
-
-This is a library for creating and manipulating points on the unit
-sphere, as an approximate model of Earth. The primary use of this
-library is to make manipulation and matching of polylines easy in the
-transitfeed library.
-
-NOTE: in this library, Earth is modelled as a sphere, whereas
-GTFS specifies that latitudes and longitudes are in WGS84. For the
-purpose of comparing and matching latitudes and longitudes that
-are relatively close together on the surface of the earth, this
-is adequate; for other purposes, this library may not be accurate
-enough.
-"""
-
-__author__ = 'chris.harrelson.code@gmail.com (Chris Harrelson)'
-
-import copy
-import decimal
-import heapq
-import math
-
-class ShapeError(Exception):
- """Thrown whenever there is a shape parsing error."""
- pass
-
-
-EARTH_RADIUS_METERS = 6371010.0
-
-
-class Point(object):
- """
- A class representing a point on the unit sphere in three dimensions.
- """
- def __init__(self, x, y, z):
- self.x = x
- self.y = y
- self.z = z
-
- def __hash__(self):
- return hash((self.x, self.y, self.z))
-
- def __cmp__(self, other):
- if not isinstance(other, Point):
- raise TypeError('Point.__cmp__(x,y) requires y to be a "Point", '
- 'not a "%s"' % type(other).__name__)
- return cmp((self.x, self.y, self.z), (other.x, other.y, other.z))
-
- def __str__(self):
- return "(%.15f, %.15f, %.15f) " % (self.x, self.y, self.z)
-
- def Norm2(self):
- """
- Returns the L_2 (Euclidean) norm of self.
- """
- sum = self.x * self.x + self.y * self.y + self.z * self.z
- return math.sqrt(float(sum))
-
- def IsUnitLength(self):
- return abs(self.Norm2() - 1.0) < 1e-14
-
- def Plus(self, other):
- """
- Returns a new point which is the pointwise sum of self and other.
- """
- return Point(self.x + other.x,
- self.y + other.y,
- self.z + other.z)
-
- def Minus(self, other):
- """
- Returns a new point which is the pointwise subtraction of other from
- self.
- """
- return Point(self.x - other.x,
- self.y - other.y,
- self.z - other.z)
-
- def DotProd(self, other):
- """
- Returns the (scalar) dot product of self with other.
- """
- return self.x * other.x + self.y * other.y + self.z * other.z
-
- def Times(self, val):
- """
- Returns a new point which is pointwise multiplied by val.
- """
- return Point(self.x * val, self.y * val, self.z * val)
-
- def Normalize(self):
- """
- Returns a unit point in the same direction as self.
- """
- return self.Times(1 / self.Norm2())
-
- def RobustCrossProd(self, other):
- """
- A robust version of cross product. If self and other
- are not nearly the same point, returns the same value
- as CrossProd() modulo normalization. Otherwise returns
- an arbitrary unit point orthogonal to self.
- """
- assert(self.IsUnitLength() and other.IsUnitLength())
- x = self.Plus(other).CrossProd(other.Minus(self))
- if abs(x.x) > 1e-15 or abs(x.y) > 1e-15 or abs(x.z) > 1e-15:
- return x.Normalize()
- else:
- return self.Ortho()
-
- def LargestComponent(self):
- """
- Returns (i, val) where i is the component index (0 - 2)
- which has largest absolute value and val is the value
- of the component.
- """
- if abs(self.x) > abs(self.y):
- if abs(self.x) > abs(self.z):
- return (0, self.x)
- else:
- return (2, self.z)
- else:
- if abs(self.y) > abs(self.z):
- return (1, self.y)
- else:
- return (2, self.z)
-
- def Ortho(self):
- """Returns a unit-length point orthogonal to this point"""
- (index, val) = self.LargestComponent()
- index = index - 1
- if index < 0:
- index = 2
- temp = Point(0.012, 0.053, 0.00457)
- if index == 0:
- temp.x = 1
- elif index == 1:
- temp.y = 1
- elif index == 2:
- temp.z = 1
- return self.CrossProd(temp).Normalize()
-
- def CrossProd(self, other):
- """
- Returns the cross product of self and other.
- """
- return Point(
- self.y * other.z - self.z * other.y,
- self.z * other.x - self.x * other.z,
- self.x * other.y - self.y * other.x)
-
- @staticmethod
- def _approxEq(a, b):
- return abs(a - b) < 1e-11
-
- def Equals(self, other):
- """
-    Returns true if self and other are approximately equal.
- """
- return (self._approxEq(self.x, other.x)
- and self._approxEq(self.y, other.y)
- and self._approxEq(self.z, other.z))
-
- def Angle(self, other):
- """
- Returns the angle in radians between self and other.
- """
- return math.atan2(self.CrossProd(other).Norm2(),
- self.DotProd(other))
-
- def ToLatLng(self):
- """
-    Returns the latitude and longitude that this point represents
- under a spherical Earth model.
- """
- rad_lat = math.atan2(self.z, math.sqrt(self.x * self.x + self.y * self.y))
- rad_lng = math.atan2(self.y, self.x)
- return (rad_lat * 180.0 / math.pi, rad_lng * 180.0 / math.pi)
-
- @staticmethod
- def FromLatLng(lat, lng):
- """
- Returns a new point representing this latitude and longitude under
- a spherical Earth model.
- """
- phi = lat * (math.pi / 180.0)
- theta = lng * (math.pi / 180.0)
- cosphi = math.cos(phi)
- return Point(math.cos(theta) * cosphi,
- math.sin(theta) * cosphi,
- math.sin(phi))
-
- def GetDistanceMeters(self, other):
- assert(self.IsUnitLength() and other.IsUnitLength())
- return self.Angle(other) * EARTH_RADIUS_METERS
-
-
-def SimpleCCW(a, b, c):
- """
- Returns true if the triangle abc is oriented counterclockwise.
- """
- return c.CrossProd(a).DotProd(b) > 0
-
-def GetClosestPoint(x, a, b):
- """
- Returns the point on the great circle segment ab closest to x.
- """
- assert(x.IsUnitLength())
- assert(a.IsUnitLength())
- assert(b.IsUnitLength())
-
- a_cross_b = a.RobustCrossProd(b)
- # project to the great circle going through a and b
- p = x.Minus(
- a_cross_b.Times(
- x.DotProd(a_cross_b) / a_cross_b.Norm2()))
-
- # if p lies between a and b, return it
- if SimpleCCW(a_cross_b, a, p) and SimpleCCW(p, b, a_cross_b):
- return p.Normalize()
-
- # otherwise return the closer of a or b
- if x.Minus(a).Norm2() <= x.Minus(b).Norm2():
- return a
- else:
- return b
-
-
-class Poly(object):
- """
- A class representing a polyline.
- """
- def __init__(self, points = [], name=None):
- self._points = list(points)
- self._name = name
-
- def AddPoint(self, p):
- """
- Adds a new point to the end of the polyline.
- """
- assert(p.IsUnitLength())
- self._points.append(p)
-
- def GetName(self):
- return self._name
-
- def GetPoint(self, i):
- return self._points[i]
-
- def GetPoints(self):
- return self._points
-
- def GetNumPoints(self):
- return len(self._points)
-
- def _GetPointSafe(self, i):
- try:
- return self.GetPoint(i)
- except IndexError:
- return None
-
- def GetClosestPoint(self, p):
- """
- Returns (closest_p, closest_i), where closest_p is the closest point
- to p on the piecewise linear curve represented by the polyline,
- and closest_i is the index of the point on the polyline just before
- the polyline segment that contains closest_p.
- """
- assert(len(self._points) > 0)
- closest_point = self._points[0]
- closest_i = 0
-
- for i in range(0, len(self._points) - 1):
- (a, b) = (self._points[i], self._points[i+1])
- cur_closest_point = GetClosestPoint(p, a, b)
- if p.Angle(cur_closest_point) < p.Angle(closest_point):
- closest_point = cur_closest_point.Normalize()
- closest_i = i
-
- return (closest_point, closest_i)
-
- def LengthMeters(self):
- """Return length of this polyline in meters."""
- assert(len(self._points) > 0)
- length = 0
- for i in range(0, len(self._points) - 1):
- length += self._points[i].GetDistanceMeters(self._points[i+1])
- return length
-
- def Reversed(self):
- """Return a polyline that is the reverse of this polyline."""
- return Poly(reversed(self.GetPoints()), self.GetName())
-
- def CutAtClosestPoint(self, p):
- """
- Let x be the point on the polyline closest to p. Then
- CutAtClosestPoint returns two new polylines, one representing
- the polyline from the beginning up to x, and one representing
- x onwards to the end of the polyline. x is the first point
- returned in the second polyline.
- """
- (closest, i) = self.GetClosestPoint(p)
-
- tmp = [closest]
- tmp.extend(self._points[i+1:])
- return (Poly(self._points[0:i+1]),
- Poly(tmp))
-
- def GreedyPolyMatchDist(self, shape):
- """
- Tries a greedy matching algorithm to match self to the
- given shape. Returns the maximum distance in meters of
- any point in self to its matched point in shape under the
- algorithm.
-
- Args: shape, a Poly object.
- """
- tmp_shape = Poly(shape.GetPoints())
- max_radius = 0
- for (i, point) in enumerate(self._points):
- tmp_shape = tmp_shape.CutAtClosestPoint(point)[1]
- dist = tmp_shape.GetPoint(0).GetDistanceMeters(point)
- max_radius = max(max_radius, dist)
- return max_radius
-
- @staticmethod
- def MergePolys(polys, merge_point_threshold=10):
- """
- Merge multiple polylines, in the order that they were passed in.
-    The merged polyline will have the names of its component parts joined by ';'.
- Example: merging [a,b], [c,d] and [e,f] will result in [a,b,c,d,e,f].
- However if the endpoints of two adjacent polylines are less than
- merge_point_threshold meters apart, we will only use the first endpoint in
- the merged polyline.
- """
- name = ";".join((p.GetName(), '')[p.GetName() is None] for p in polys)
- merged = Poly([], name)
- if polys:
- first_poly = polys[0]
- for p in first_poly.GetPoints():
- merged.AddPoint(p)
- last_point = merged._GetPointSafe(-1)
- for poly in polys[1:]:
- first_point = poly._GetPointSafe(0)
- if (last_point and first_point and
- last_point.GetDistanceMeters(first_point) <= merge_point_threshold):
- points = poly.GetPoints()[1:]
- else:
- points = poly.GetPoints()
- for p in points:
- merged.AddPoint(p)
- last_point = merged._GetPointSafe(-1)
- return merged
-
-
- def __str__(self):
- return self._ToString(str)
-
- def ToLatLngString(self):
- return self._ToString(lambda p: str(p.ToLatLng()))
-
- def _ToString(self, pointToStringFn):
- return "%s: %s" % (self.GetName() or "",
- ", ".join([pointToStringFn(p) for p in self._points]))
-
-
-class PolyCollection(object):
- """
- A class representing a collection of polylines.
- """
- def __init__(self):
- self._name_to_shape = {}
- pass
-
- def AddPoly(self, poly, smart_duplicate_handling=True):
- """
- Adds a new polyline to the collection.
- """
- inserted_name = poly.GetName()
- if poly.GetName() in self._name_to_shape:
- if not smart_duplicate_handling:
- raise ShapeError("Duplicate shape found: " + poly.GetName())
-
- print ("Warning: duplicate shape id being added to collection: " +
- poly.GetName())
- if poly.GreedyPolyMatchDist(self._name_to_shape[poly.GetName()]) < 10:
-        print " (Skipping as it appears to be an exact duplicate)"
- else:
- print " (Adding new shape variant with uniquified name)"
- inserted_name = "%s-%d" % (inserted_name, len(self._name_to_shape))
- self._name_to_shape[inserted_name] = poly
-
- def NumPolys(self):
- return len(self._name_to_shape)
-
- def FindMatchingPolys(self, start_point, end_point, max_radius=150):
- """
- Returns a list of polylines in the collection that have endpoints
- within max_radius of the given start and end points.
- """
- matches = []
- for shape in self._name_to_shape.itervalues():
- if start_point.GetDistanceMeters(shape.GetPoint(0)) < max_radius and \
- end_point.GetDistanceMeters(shape.GetPoint(-1)) < max_radius:
- matches.append(shape)
- return matches
-
-class PolyGraph(PolyCollection):
- """
- A class representing a graph where the edges are polylines.
- """
- def __init__(self):
- PolyCollection.__init__(self)
- self._nodes = {}
-
- def AddPoly(self, poly, smart_duplicate_handling=True):
- PolyCollection.AddPoly(self, poly, smart_duplicate_handling)
- start_point = poly.GetPoint(0)
- end_point = poly.GetPoint(-1)
- self._AddNodeWithEdge(start_point, poly)
- self._AddNodeWithEdge(end_point, poly)
-
- def _AddNodeWithEdge(self, point, edge):
- if point in self._nodes:
- self._nodes[point].add(edge)
- else:
- self._nodes[point] = set([edge])
-
- def ShortestPath(self, start, goal):
- """Uses the A* algorithm to find a shortest path between start and goal.
-
- For more background see http://en.wikipedia.org/wiki/A-star_algorithm
-
- Some definitions:
- g(x): The actual shortest distance traveled from initial node to current
- node.
- h(x): The estimated (or "heuristic") distance from current node to goal.
- We use the distance on Earth from node to goal as the heuristic.
- This heuristic is both admissible and monotonic (see wikipedia for
- more details).
- f(x): The sum of g(x) and h(x), used to prioritize elements to look at.
-
- Arguments:
- start: Point that is in the graph, start point of the search.
- goal: Point that is in the graph, end point for the search.
-
- Returns:
- A Poly object representing the shortest polyline through the graph from
- start to goal, or None if no path found.
- """
-
- assert start in self._nodes
- assert goal in self._nodes
- closed_set = set() # Set of nodes already evaluated.
- open_heap = [(0, start)] # Nodes to visit, heapified by f(x).
- open_set = set([start]) # Same as open_heap, but a set instead of a heap.
- g_scores = { start: 0 } # Distance from start along optimal path
- came_from = {} # Map to reconstruct optimal path once we're done.
- while open_set:
- (f_x, x) = heapq.heappop(open_heap)
- open_set.remove(x)
- if x == goal:
- return self._ReconstructPath(came_from, goal)
- closed_set.add(x)
- edges = self._nodes[x]
- for edge in edges:
- if edge.GetPoint(0) == x:
- y = edge.GetPoint(-1)
- else:
- y = edge.GetPoint(0)
- if y in closed_set:
- continue
- tentative_g_score = g_scores[x] + edge.LengthMeters()
- tentative_is_better = False
- if y not in open_set:
- h_y = y.GetDistanceMeters(goal)
- f_y = tentative_g_score + h_y
- open_set.add(y)
- heapq.heappush(open_heap, (f_y, y))
- tentative_is_better = True
- elif tentative_g_score < g_scores[y]:
- tentative_is_better = True
- if tentative_is_better:
- came_from[y] = (x, edge)
- g_scores[y] = tentative_g_score
- return None
-
- def _ReconstructPath(self, came_from, current_node):
- """
- Helper method for ShortestPath, to reconstruct path.
-
- Arguments:
- came_from: a dictionary mapping Point to (Point, Poly) tuples.
- This dictionary keeps track of the previous neighbor to a node, and
- the edge used to get from the previous neighbor to the node.
- current_node: the current Point in the path.
-
- Returns:
- A Poly that represents the path through the graph from the start of the
- search to current_node.
- """
- if current_node in came_from:
- (previous_node, previous_edge) = came_from[current_node]
- if previous_edge.GetPoint(0) == current_node:
- previous_edge = previous_edge.Reversed()
- p = self._ReconstructPath(came_from, previous_node)
- return Poly.MergePolys([p, previous_edge], merge_point_threshold=0)
- else:
- return Poly([], '')
-
- def FindShortestMultiPointPath(self, points, max_radius=150, keep_best_n=10,
- verbosity=0):
- """
- Return a polyline, representing the shortest path through this graph that
- has edge endpoints on each of a given list of points in sequence. We allow
- fuzziness in matching of input points to points in this graph.
-
- We limit ourselves to a view of the best keep_best_n paths at any time, as a
- greedy optimization.
- """
- assert len(points) > 1
- nearby_points = []
- paths_found = [] # A heap sorted by inverse path length.
-
- for i, point in enumerate(points):
- nearby = [p for p in self._nodes.iterkeys()
- if p.GetDistanceMeters(point) < max_radius]
- if verbosity >= 2:
- print ("Nearby points for point %d %s: %s"
- % (i + 1,
- str(point.ToLatLng()),
- ", ".join([str(n.ToLatLng()) for n in nearby])))
- if nearby:
- nearby_points.append(nearby)
- else:
- print "No nearby points found for point %s" % str(point.ToLatLng())
- return None
-
- pathToStr = lambda start, end, path: (" Best path %s -> %s: %s"
- % (str(start.ToLatLng()),
- str(end.ToLatLng()),
- path and path.GetName() or
- "None"))
- if verbosity >= 3:
- print "Step 1"
- step = 2
-
- start_points = nearby_points[0]
- end_points = nearby_points[1]
-
- for start in start_points:
- for end in end_points:
- path = self.ShortestPath(start, end)
- if verbosity >= 3:
- print pathToStr(start, end, path)
- PolyGraph._AddPathToHeap(paths_found, path, keep_best_n)
-
- for possible_points in nearby_points[2:]:
- if verbosity >= 3:
- print "\nStep %d" % step
- step += 1
- new_paths_found = []
-
- start_end_paths = {} # cache of shortest paths between (start, end) pairs
- for score, path in paths_found:
- start = path.GetPoint(-1)
- for end in possible_points:
- if (start, end) in start_end_paths:
- new_segment = start_end_paths[(start, end)]
- else:
- new_segment = self.ShortestPath(start, end)
- if verbosity >= 3:
- print pathToStr(start, end, new_segment)
- start_end_paths[(start, end)] = new_segment
-
- if new_segment:
- new_path = Poly.MergePolys([path, new_segment],
- merge_point_threshold=0)
- PolyGraph._AddPathToHeap(new_paths_found, new_path, keep_best_n)
- paths_found = new_paths_found
-
- if paths_found:
- best_score, best_path = max(paths_found)
- return best_path
- else:
- return None
-
- @staticmethod
- def _AddPathToHeap(heap, path, keep_best_n):
- if path and path.GetNumPoints():
- new_item = (-path.LengthMeters(), path)
- if new_item not in heap:
- if len(heap) < keep_best_n:
- heapq.heappush(heap, new_item)
- else:
- heapq.heapreplace(heap, new_item)
-
--- a/origin-src/transitfeed-1.2.5/build/lib/transitfeed/util.py
+++ /dev/null
@@ -1,163 +1,1 @@
-#!/usr/bin/python2.5
-# Copyright (C) 2009 Google Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-import optparse
-import sys
-
-
-class OptionParserLongError(optparse.OptionParser):
- """OptionParser subclass that includes list of options above error message."""
- def error(self, msg):
- print >>sys.stderr, self.format_help()
- print >>sys.stderr, '\n\n%s: error: %s\n\n' % (self.get_prog_name(), msg)
- sys.exit(2)
-
-
-def RunWithCrashHandler(f):
- try:
- exit_code = f()
- sys.exit(exit_code)
- except (SystemExit, KeyboardInterrupt):
- raise
- except:
- import inspect
- import traceback
-
- # Save trace and exception now. These calls look at the most recently
- # raised exception. The code that makes the report might trigger other
- # exceptions.
- original_trace = inspect.trace(3)[1:]
- formatted_exception = traceback.format_exception_only(*(sys.exc_info()[:2]))
-
- apology = """Yikes, the program threw an unexpected exception!
-
-Hopefully a complete report has been saved to transitfeedcrash.txt,
-though if you are seeing this message we've already disappointed you once
-today. Please include the report in a new issue at
-http://code.google.com/p/googletransitdatafeed/issues/entry
-or an email to the public group googletransitdatafeed@googlegroups.com. Sorry!
-
-"""
- dashes = '%s\n' % ('-' * 60)
- dump = []
- dump.append(apology)
- dump.append(dashes)
- try:
- import transitfeed
- dump.append("transitfeed version %s\n\n" % transitfeed.__version__)
- except NameError:
- # Oh well, guess we won't put the version in the report
- pass
-
- for (frame_obj, filename, line_num, fun_name, context_lines,
- context_index) in original_trace:
- dump.append('File "%s", line %d, in %s\n' % (filename, line_num,
- fun_name))
- if context_lines:
- for (i, line) in enumerate(context_lines):
- if i == context_index:
- dump.append(' --> %s' % line)
- else:
- dump.append(' %s' % line)
- for local_name, local_val in frame_obj.f_locals.items():
- try:
- truncated_val = str(local_val)[0:500]
- except Exception, e:
- dump.append(' Exception in str(%s): %s' % (local_name, e))
- else:
- if len(truncated_val) >= 500:
- truncated_val = '%s...' % truncated_val[0:499]
- dump.append(' %s = %s\n' % (local_name, truncated_val))
- dump.append('\n')
-
- dump.append(''.join(formatted_exception))
-
- open('transitfeedcrash.txt', 'w').write(''.join(dump))
-
- print ''.join(dump)
- print
- print dashes
- print apology
-
- try:
- raw_input('Press enter to continue...')
- except EOFError:
- # Ignore stdin being closed. This happens during some tests.
- pass
- sys.exit(127)
-
-
-# Pick one of two defaultdict implementations. A native version was added to
-# the collections library in python 2.5. If that is not available use Jason's
-# pure python recipe. He gave us permission to distribute it.
-
-# On Mon, Nov 30, 2009 at 07:27, jason kirtland <jek at discorporate.us> wrote:
-# >
-# > Hi Tom, sure thing! It's not easy to find on the cookbook site, but the
-# > recipe is under the Python license.
-# >
-# > Cheers,
-# > Jason
-# >
-# > On Thu, Nov 26, 2009 at 3:03 PM, Tom Brown <tom.brown.code@gmail.com> wrote:
-# >
-# >> I would like to include http://code.activestate.com/recipes/523034/ in
-# >> http://code.google.com/p/googletransitdatafeed/wiki/TransitFeedDistribution
-# >> which is distributed under the Apache License, Version 2.0 with Copyright
-# >> Google. May we include your code with a comment in the source pointing at
-# >> the original URL? Thanks, Tom Brown
-
-try:
- # Try the native implementation first
- from collections import defaultdict
-except:
- # Fallback for python2.4, which didn't include collections.defaultdict
- class defaultdict(dict):
- def __init__(self, default_factory=None, *a, **kw):
- if (default_factory is not None and
- not hasattr(default_factory, '__call__')):
- raise TypeError('first argument must be callable')
- dict.__init__(self, *a, **kw)
- self.default_factory = default_factory
- def __getitem__(self, key):
- try:
- return dict.__getitem__(self, key)
- except KeyError:
- return self.__missing__(key)
- def __missing__(self, key):
- if self.default_factory is None:
- raise KeyError(key)
- self[key] = value = self.default_factory()
- return value
- def __reduce__(self):
- if self.default_factory is None:
- args = tuple()
- else:
- args = self.default_factory,
- return type(self), args, None, None, self.items()
- def copy(self):
- return self.__copy__()
- def __copy__(self):
- return type(self)(self.default_factory, self)
- def __deepcopy__(self, memo):
- import copy
- return type(self)(self.default_factory,
- copy.deepcopy(self.items()))
- def __repr__(self):
- return 'defaultdict(%s, %s)' % (self.default_factory,
- dict.__repr__(self))
-
--- a/origin-src/transitfeed-1.2.5/build/scripts-2.6/feedvalidator.py
+++ /dev/null
@@ -1,723 +1,1 @@
-#!/usr/bin/python
-# Copyright (C) 2007 Google Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-"""Validates a GTFS file.
-
-For usage information run feedvalidator.py --help
-"""
-
-import bisect
-import codecs
-import datetime
-from transitfeed.util import defaultdict
-import optparse
-import os
-import os.path
-import re
-import socket
-import sys
-import time
-import transitfeed
-from transitfeed import TYPE_ERROR, TYPE_WARNING
-from urllib2 import Request, urlopen, HTTPError, URLError
-from transitfeed import util
-import webbrowser
-
-SVN_TAG_URL = 'http://googletransitdatafeed.googlecode.com/svn/tags/'
-
-
-def MaybePluralizeWord(count, word):
- if count == 1:
- return word
- else:
- return word + 's'
-
-
-def PrettyNumberWord(count, word):
- return '%d %s' % (count, MaybePluralizeWord(count, word))
-
-
-def UnCamelCase(camel):
- return re.sub(r'([a-z])([A-Z])', r'\1 \2', camel)
-
-
-def ProblemCountText(error_count, warning_count):
- results = []
- if error_count:
- results.append(PrettyNumberWord(error_count, 'error'))
- if warning_count:
- results.append(PrettyNumberWord(warning_count, 'warning'))
-
- return ' and '.join(results)
-
-
-def CalendarSummary(schedule):
- today = datetime.date.today()
- summary_end_date = today + datetime.timedelta(days=60)
- start_date, end_date = schedule.GetDateRange()
-
- if not start_date or not end_date:
- return {}
-
- try:
- start_date_object = transitfeed.DateStringToDateObject(start_date)
- end_date_object = transitfeed.DateStringToDateObject(end_date)
- except ValueError:
- return {}
-
- # Get the list of trips only during the period the feed is active.
-  # As such we have to check whether it starts in the future and/or
-  # ends in less than 60 days.
- date_trips_departures = schedule.GenerateDateTripsDeparturesList(
- max(today, start_date_object),
- min(summary_end_date, end_date_object))
-
- if not date_trips_departures:
- return {}
-
- # Check that the dates which will be shown in summary agree with these
-  # calculations. Failure implies a bug which should be fixed. It isn't good
-  # for users to discover assertion failures, but it does mean the bug will
-  # likely be fixed.
- assert start_date <= date_trips_departures[0][0].strftime("%Y%m%d")
- assert end_date >= date_trips_departures[-1][0].strftime("%Y%m%d")
-
- # Generate a map from int number of trips in a day to a list of date objects
- # with that many trips. The list of dates is sorted.
- trips_dates = defaultdict(lambda: [])
- trips = 0
- for date, day_trips, day_departures in date_trips_departures:
- trips += day_trips
- trips_dates[day_trips].append(date)
- mean_trips = trips / len(date_trips_departures)
- max_trips = max(trips_dates.keys())
- min_trips = min(trips_dates.keys())
-
- calendar_summary = {}
- calendar_summary['mean_trips'] = mean_trips
- calendar_summary['max_trips'] = max_trips
- calendar_summary['max_trips_dates'] = FormatDateList(trips_dates[max_trips])
- calendar_summary['min_trips'] = min_trips
- calendar_summary['min_trips_dates'] = FormatDateList(trips_dates[min_trips])
- calendar_summary['date_trips_departures'] = date_trips_departures
- calendar_summary['date_summary_range'] = "%s to %s" % (
- date_trips_departures[0][0].strftime("%a %b %d"),
- date_trips_departures[-1][0].strftime("%a %b %d"))
-
- return calendar_summary
-
-
-def FormatDateList(dates):
- if not dates:
- return "0 service dates"
-
- formatted = [d.strftime("%a %b %d") for d in dates[0:3]]
- if len(dates) > 3:
- formatted.append("...")
- return "%s (%s)" % (PrettyNumberWord(len(dates), "service date"),
- ", ".join(formatted))
-
-
-def MaxVersion(versions):
- versions = filter(None, versions)
- versions.sort(lambda x,y: -cmp([int(item) for item in x.split('.')],
- [int(item) for item in y.split('.')]))
- if len(versions) > 0:
- return versions[0]
-
-
-class CountingConsoleProblemReporter(transitfeed.ProblemReporter):
- def __init__(self):
- transitfeed.ProblemReporter.__init__(self)
- self._error_count = 0
- self._warning_count = 0
-
- def _Report(self, e):
- transitfeed.ProblemReporter._Report(self, e)
- if e.IsError():
- self._error_count += 1
- else:
- self._warning_count += 1
-
- def ErrorCount(self):
- return self._error_count
-
- def WarningCount(self):
- return self._warning_count
-
- def FormatCount(self):
- return ProblemCountText(self.ErrorCount(), self.WarningCount())
-
- def HasIssues(self):
- return self.ErrorCount() or self.WarningCount()
-
-
-class BoundedProblemList(object):
- """A list of one type of ExceptionWithContext objects with bounded size."""
- def __init__(self, size_bound):
- self._count = 0
- self._exceptions = []
- self._size_bound = size_bound
-
- def Add(self, e):
- self._count += 1
- try:
- bisect.insort(self._exceptions, e)
- except TypeError:
- # The base class ExceptionWithContext raises this exception in __cmp__
- # to signal that an object is not comparable. Instead of keeping the most
- # significant issue keep the first reported.
- if self._count <= self._size_bound:
- self._exceptions.append(e)
- else:
- # self._exceptions is in order. Drop the least significant if the list is
- # now too long.
- if self._count > self._size_bound:
- del self._exceptions[-1]
-
- def _GetDroppedCount(self):
- return self._count - len(self._exceptions)
-
- def __repr__(self):
- return "<BoundedProblemList %s>" % repr(self._exceptions)
-
- count = property(lambda s: s._count)
- dropped_count = property(_GetDroppedCount)
- problems = property(lambda s: s._exceptions)
-
-
-class LimitPerTypeProblemReporter(transitfeed.ProblemReporter):
- def __init__(self, limit_per_type):
- transitfeed.ProblemReporter.__init__(self)
-
- # {TYPE_WARNING: {"ClassName": BoundedProblemList()}}
- self._type_to_name_to_problist = {
- TYPE_WARNING: defaultdict(lambda: BoundedProblemList(limit_per_type)),
- TYPE_ERROR: defaultdict(lambda: BoundedProblemList(limit_per_type))
- }
-
- def HasIssues(self):
- return (self._type_to_name_to_problist[TYPE_ERROR] or
- self._type_to_name_to_problist[TYPE_WARNING])
-
- def _Report(self, e):
- self._type_to_name_to_problist[e.GetType()][e.__class__.__name__].Add(e)
-
- def ErrorCount(self):
- error_sets = self._type_to_name_to_problist[TYPE_ERROR].values()
- return sum(map(lambda v: v.count, error_sets))
-
- def WarningCount(self):
- warning_sets = self._type_to_name_to_problist[TYPE_WARNING].values()
- return sum(map(lambda v: v.count, warning_sets))
-
- def ProblemList(self, problem_type, class_name):
- """Return the BoundedProblemList object for given type and class."""
- return self._type_to_name_to_problist[problem_type][class_name]
-
- def ProblemListMap(self, problem_type):
- """Return the map from class name to BoundedProblemList object."""
- return self._type_to_name_to_problist[problem_type]
-
-
-class HTMLCountingProblemReporter(LimitPerTypeProblemReporter):
- def FormatType(self, f, level_name, class_problist):
- """Write the HTML dumping all problems of one type.
-
- Args:
- f: file object open for writing
- level_name: string such as "Error" or "Warning"
- class_problist: sequence of tuples (class name,
- BoundedProblemList object)
- """
- class_problist.sort()
- output = []
- for classname, problist in class_problist:
- output.append('<h4 class="issueHeader"><a name="%s%s">%s</a></h4><ul>\n' %
- (level_name, classname, UnCamelCase(classname)))
- for e in problist.problems:
- self.FormatException(e, output)
- if problist.dropped_count:
- output.append('<li>and %d more of this type.' %
- (problist.dropped_count))
- output.append('</ul>\n')
- f.write(''.join(output))
-
- def FormatTypeSummaryTable(self, level_name, name_to_problist):
- """Return an HTML table listing the number of problems by class name.
-
- Args:
- level_name: string such as "Error" or "Warning"
-      name_to_problist: dict mapping class name to a BoundedProblemList object
-
- Returns:
- HTML in a string
- """
- output = []
- output.append('<table>')
- for classname in sorted(name_to_problist.keys()):
- problist = name_to_problist[classname]
- human_name = MaybePluralizeWord(problist.count, UnCamelCase(classname))
- output.append('<tr><td>%d</td><td><a href="#%s%s">%s</a></td></tr>\n' %
- (problist.count, level_name, classname, human_name))
- output.append('</table>\n')
- return ''.join(output)
-
- def FormatException(self, e, output):
- """Append HTML version of e to list output."""
- d = e.GetDictToFormat()
- for k in ('file_name', 'feedname', 'column_name'):
- if k in d.keys():
- d[k] = '<code>%s</code>' % d[k]
- problem_text = e.FormatProblem(d).replace('\n', '<br>')
- output.append('<li>')
- output.append('<div class="problem">%s</div>' %
- transitfeed.EncodeUnicode(problem_text))
- try:
- if hasattr(e, 'row_num'):
- line_str = 'line %d of ' % e.row_num
- else:
- line_str = ''
- output.append('in %s<code>%s</code><br>\n' %
- (line_str, e.file_name))
- row = e.row
- headers = e.headers
- column_name = e.column_name
- table_header = '' # HTML
- table_data = '' # HTML
- for header, value in zip(headers, row):
- attributes = ''
- if header == column_name:
- attributes = ' class="problem"'
- table_header += '<th%s>%s</th>' % (attributes, header)
- table_data += '<td%s>%s</td>' % (attributes, value)
- # Make sure output is encoded into UTF-8
- output.append('<table class="dump"><tr>%s</tr>\n' %
- transitfeed.EncodeUnicode(table_header))
- output.append('<tr>%s</tr></table>\n' %
- transitfeed.EncodeUnicode(table_data))
- except AttributeError, e:
- pass # Hope this was getting an attribute from e ;-)
- output.append('<br></li>\n')
-
- def FormatCount(self):
- return ProblemCountText(self.ErrorCount(), self.WarningCount())
-
- def CountTable(self):
- output = []
- output.append('<table class="count_outside">\n')
- output.append('<tr>')
- if self.ProblemListMap(TYPE_ERROR):
- output.append('<td><span class="fail">%s</span></td>' %
- PrettyNumberWord(self.ErrorCount(), "error"))
- if self.ProblemListMap(TYPE_WARNING):
- output.append('<td><span class="fail">%s</span></td>' %
- PrettyNumberWord(self.WarningCount(), "warning"))
- output.append('</tr>\n<tr>')
- if self.ProblemListMap(TYPE_ERROR):
- output.append('<td>\n')
- output.append(self.FormatTypeSummaryTable("Error",
- self.ProblemListMap(TYPE_ERROR)))
- output.append('</td>\n')
- if self.ProblemListMap(TYPE_WARNING):
- output.append('<td>\n')
- output.append(self.FormatTypeSummaryTable("Warning",
- self.ProblemListMap(TYPE_WARNING)))
- output.append('</td>\n')
- output.append('</table>')
- return ''.join(output)
-
- def WriteOutput(self, feed_location, f, schedule, other_problems):
- """Write the html output to f."""
- if self.HasIssues():
- if self.ErrorCount() + self.WarningCount() == 1:
- summary = ('<span class="fail">Found this problem:</span>\n%s' %
- self.CountTable())
- else:
- summary = ('<span class="fail">Found these problems:</span>\n%s' %
- self.CountTable())
- else:
- summary = '<span class="pass">feed validated successfully</span>'
- if other_problems is not None:
- summary = ('<span class="fail">\n%s</span><br><br>' %
- other_problems) + summary
-
- basename = os.path.basename(feed_location)
- feed_path = (feed_location[:feed_location.rfind(basename)], basename)
-
- agencies = ', '.join(['<a href="%s">%s</a>' % (a.agency_url, a.agency_name)
- for a in schedule.GetAgencyList()])
- if not agencies:
- agencies = '?'
-
- dates = "No valid service dates found"
- (start, end) = schedule.GetDateRange()
- if start and end:
- def FormatDate(yyyymmdd):
- src_format = "%Y%m%d"
- dst_format = "%B %d, %Y"
- try:
- return time.strftime(dst_format,
- time.strptime(yyyymmdd, src_format))
- except ValueError:
- return yyyymmdd
-
- formatted_start = FormatDate(start)
- formatted_end = FormatDate(end)
- dates = "%s to %s" % (formatted_start, formatted_end)
-
- calendar_summary = CalendarSummary(schedule)
- if calendar_summary:
- calendar_summary_html = """<br>
-During the upcoming service dates %(date_summary_range)s:
-<table>
-<tr><th class="header">Average trips per date:</th><td class="header">%(mean_trips)s</td></tr>
-<tr><th class="header">Most trips on a date:</th><td class="header">%(max_trips)s, on %(max_trips_dates)s</td></tr>
-<tr><th class="header">Least trips on a date:</th><td class="header">%(min_trips)s, on %(min_trips_dates)s</td></tr>
-</table>""" % calendar_summary
- else:
- calendar_summary_html = ""
-
- output_prefix = """
-<html>
-<head>
-<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
-<title>FeedValidator: %(feed_file)s</title>
-<style>
-body {font-family: Georgia, serif; background-color: white}
-.path {color: gray}
-div.problem {max-width: 500px}
-table.dump td,th {background-color: khaki; padding: 2px; font-family:monospace}
-table.dump td.problem,th.problem {background-color: dc143c; color: white; padding: 2px; font-family:monospace}
-table.count_outside td {vertical-align: top}
-table.count_outside {border-spacing: 0px; }
-table {border-spacing: 5px 0px; margin-top: 3px}
-h3.issueHeader {padding-left: 0.5em}
-h4.issueHeader {padding-left: 1em}
-.pass {background-color: lightgreen}
-.fail {background-color: yellow}
-.pass, .fail {font-size: 16pt}
-.header {background-color: white; font-family: Georgia, serif; padding: 0px}
-th.header {text-align: right; font-weight: normal; color: gray}
-.footer {font-size: 10pt}
-</style>
-</head>
-<body>
-GTFS validation results for feed:<br>
-<code><span class="path">%(feed_dir)s</span><b>%(feed_file)s</b></code>
-<br><br>
-<table>
-<tr><th class="header">Agencies:</th><td class="header">%(agencies)s</td></tr>
-<tr><th class="header">Routes:</th><td class="header">%(routes)s</td></tr>
-<tr><th class="header">Stops:</th><td class="header">%(stops)s</td></tr>
-<tr><th class="header">Trips:</th><td class="header">%(trips)s</td></tr>
-<tr><th class="header">Shapes:</th><td class="header">%(shapes)s</td></tr>
-<tr><th class="header">Effective:</th><td class="header">%(dates)s</td></tr>
-</table>
-%(calendar_summary)s
-<br>
-%(problem_summary)s
-<br><br>
-""" % { "feed_file": feed_path[1],
- "feed_dir": feed_path[0],
- "agencies": agencies,
- "routes": len(schedule.GetRouteList()),
- "stops": len(schedule.GetStopList()),
- "trips": len(schedule.GetTripList()),
- "shapes": len(schedule.GetShapeList()),
- "dates": dates,
- "problem_summary": summary,
- "calendar_summary": calendar_summary_html}
-
-# In the output_suffix string below, time.strftime() returns a regular
-# byte string (not a Unicode one) in the default system encoding. We call
-# decode() to convert that string back into Unicode so that the operating
-# system's encoding cannot corrupt non-English characters; this restores
-# the original Unicode code points.
-
- time_unicode = (time.strftime('%B %d, %Y at %I:%M %p %Z').
- decode(sys.getfilesystemencoding()))
- output_suffix = """
-<div class="footer">
-Generated by <a href="http://code.google.com/p/googletransitdatafeed/wiki/FeedValidator">
-FeedValidator</a> version %s on %s.
-</div>
-</body>
-</html>""" % (transitfeed.__version__, time_unicode)
-
- f.write(transitfeed.EncodeUnicode(output_prefix))
- if self.ProblemListMap(TYPE_ERROR):
- f.write('<h3 class="issueHeader">Errors:</h3>')
- self.FormatType(f, "Error",
- self.ProblemListMap(TYPE_ERROR).items())
- if self.ProblemListMap(TYPE_WARNING):
- f.write('<h3 class="issueHeader">Warnings:</h3>')
- self.FormatType(f, "Warning",
- self.ProblemListMap(TYPE_WARNING).items())
- f.write(transitfeed.EncodeUnicode(output_suffix))
-
-
-def RunValidationOutputFromOptions(feed, options):
- """Validate feed, output results per options and return an exit code."""
- if options.output.upper() == "CONSOLE":
- return RunValidationOutputToConsole(feed, options)
- else:
- return RunValidationOutputToFilename(feed, options, options.output)
-
-
-def RunValidationOutputToFilename(feed, options, output_filename):
- """Validate feed, save HTML at output_filename and return an exit code."""
- try:
- output_file = open(output_filename, 'w')
- exit_code = RunValidationOutputToFile(feed, options, output_file)
- output_file.close()
- except IOError, e:
- print 'Error while writing %s: %s' % (output_filename, e)
- output_filename = None
- exit_code = 2
-
- if options.manual_entry and output_filename:
- webbrowser.open('file://%s' % os.path.abspath(output_filename))
-
- return exit_code
-
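The open/close/except pattern in RunValidationOutputToFilename predates context managers; under Python 3 the same exit-code contract might be sketched like this (function and parameter names are mine, and `open_browser` plays the role of `options.manual_entry`):

```python
import os
import webbrowser

def write_report(html, output_filename, open_browser=False):
    """Write html to output_filename; return 0 on success, 2 on I/O error.

    Exit codes mirror RunValidationOutputToFilename above.
    """
    try:
        # The context manager closes the file even if write() raises.
        with open(output_filename, 'w') as f:
            f.write(html)
    except IOError as e:
        print('Error while writing %s: %s' % (output_filename, e))
        return 2
    if open_browser:
        webbrowser.open('file://%s' % os.path.abspath(output_filename))
    return 0
```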
-
-def RunValidationOutputToFile(feed, options, output_file):
- """Validate feed, write HTML to output_file and return an exit code."""
- problems = HTMLCountingProblemReporter(options.limit_per_type)
- schedule, exit_code, other_problems_string = RunValidation(feed, options,
- problems)
- if isinstance(feed, basestring):
- feed_location = feed
- else:
- feed_location = getattr(feed, 'name', repr(feed))
- problems.WriteOutput(feed_location, output_file, schedule,
- other_problems_string)
- return exit_code
-
-
-def RunValidationOutputToConsole(feed, options):
- """Validate feed, print reports and return an exit code."""
- problems = CountingConsoleProblemReporter()
- _, exit_code, _ = RunValidation(feed, options, problems)
- return exit_code
-
-
-def RunValidation(feed, options, problems):
- """Validate feed, returning the loaded Schedule and exit code.
-
- Args:
- feed: GTFS file, either path of the file as a string or a file object
- options: options object returned by optparse
- problems: transitfeed.ProblemReporter instance
-
- Returns:
- a transitfeed.Schedule object, exit code and plain text string of other
- problems
- Exit code is 1 if problems are found and 0 if the Schedule is problem free.
- plain text string is '' if no other problems are found.
- """
- other_problems_string = CheckVersion(latest_version=options.latest_version)
- print 'validating %s' % feed
- loader = transitfeed.Loader(feed, problems=problems, extra_validation=False,
- memory_db=options.memory_db,
- check_duplicate_trips=\
- options.check_duplicate_trips)
- schedule = loader.Load()
- schedule.Validate(service_gap_interval=options.service_gap_interval)
-
- if feed == 'IWantMyvalidation-crash.txt':
- # See test/testfeedvalidator.py
- raise Exception('For testing the feed validator crash handler.')
-
- if other_problems_string:
- print other_problems_string
-
- if problems.HasIssues():
- print 'ERROR: %s found' % problems.FormatCount()
- return schedule, 1, other_problems_string
- else:
- print 'feed validated successfully'
- return schedule, 0, other_problems_string
-
-
-def CheckVersion(latest_version=''):
- """
- Check whether a newer version of this project is available.
-
- Code is based on http://www.voidspace.org.uk/python/articles/urllib2.shtml
- and is used with permission from the copyright holder.
- """
- current_version = transitfeed.__version__
- if not latest_version:
- timeout = 20
- socket.setdefaulttimeout(timeout)
- request = Request(SVN_TAG_URL)
-
- try:
- response = urlopen(request)
- content = response.read()
- versions = re.findall(r'>transitfeed-([\d\.]+)\/<\/a>', content)
- latest_version = MaxVersion(versions)
-
- except HTTPError, e:
- return('The server couldn\'t fulfill the request. Error code: %s.'
- % e.code)
- except URLError, e:
- return('We failed to reach the transitfeed server. Reason: %s.' % e.reason)
-
- if not latest_version:
- return('We had trouble parsing the contents of %s.' % SVN_TAG_URL)
-
- newest_version = MaxVersion([latest_version, current_version])
- if current_version != newest_version:
- return('A new version %s of transitfeed is available. Please visit '
- 'http://code.google.com/p/googletransitdatafeed and download.'
- % newest_version)
-
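CheckVersion leans on a MaxVersion helper (defined elsewhere in the file) to pick the highest dotted version string. Assuming purely numeric dotted versions, a plausible stand-in comparison might look like this (the function name is mine):

```python
def max_version(versions):
    """Return the highest dotted version string, compared numerically.

    A hypothetical stand-in for the MaxVersion helper used by
    CheckVersion; numeric comparison makes '1.2.10' beat '1.2.9'.
    """
    def as_parts(v):
        # '1.2.10' -> [1, 2, 10]; 'if part' guards against trailing dots.
        return [int(part) for part in v.split('.') if part]
    return max(versions, key=as_parts)
```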
-
-def main():
- usage = \
-'''%prog [options] [<input GTFS.zip>]
-
-Validates GTFS file (or directory) <input GTFS.zip> and writes an HTML
-report of the results to validation-results.html.
-
-If <input GTFS.zip> is omitted the filename is read from the console. Dragging
-a file into the console may enter the filename.
-
-For more information see
-http://code.google.com/p/googletransitdatafeed/wiki/FeedValidator
-'''
-
- parser = util.OptionParserLongError(
- usage=usage, version='%prog '+transitfeed.__version__)
- parser.add_option('-n', '--noprompt', action='store_false',
- dest='manual_entry',
- help='do not prompt for feed location or load output in '
- 'browser')
- parser.add_option('-o', '--output', dest='output', metavar='FILE',
- help='write html output to FILE or --output=CONSOLE to '
- 'print all errors and warnings to the command console')
- parser.add_option('-p', '--performance', action='store_true',
- dest='performance',
- help='output memory and time performance (Availability: '
- 'Unix)')
- parser.add_option('-m', '--memory_db', dest='memory_db', action='store_true',
- help='Use in-memory sqlite db instead of a temporary file. '
- 'It is faster but uses more RAM.')
- parser.add_option('-d', '--duplicate_trip_check',
- dest='check_duplicate_trips', action='store_true',
- help='Check for duplicate trips which go through the same '
- 'stops with same service and start times')
- parser.add_option('-l', '--limit_per_type',
- dest='limit_per_type', action='store', type='int',
- help='Maximum number of errors and warnings to keep of '
- 'each type')
- parser.add_option('--latest_version', dest='latest_version',
- action='store',
- help='a version number such as 1.2.1 or None to get the '
- 'latest version from code.google.com. Output a warning if '
- 'transitfeed.py is older than this version.')
- parser.add_option('--service_gap_interval',
- dest='service_gap_interval',
- action='store',
- type='int',
- help='the number of consecutive days to search for with no '
- 'scheduled service. For each interval with no service '
- 'having this number of days or more a warning will be '
- 'issued')
-
- parser.set_defaults(manual_entry=True, output='validation-results.html',
- memory_db=False, check_duplicate_trips=False,
- limit_per_type=5, latest_version='',
- service_gap_interval=13)
- (options, args) = parser.parse_args()
-
- if not len(args) == 1:
- if options.manual_entry:
- feed = raw_input('Enter Feed Location: ')
- else:
- parser.error('You must provide the path of a single feed')
- else:
- feed = args[0]
-
- feed = feed.strip('"')
-
- if options.performance:
- return ProfileRunValidationOutputFromOptions(feed, options)
- else:
- return RunValidationOutputFromOptions(feed, options)
-
-
-def ProfileRunValidationOutputFromOptions(feed, options):
- """Run RunValidationOutputFromOptions, print profile and return exit code."""
- import cProfile
- import pstats
- # runctx will modify a dict, but not locals(). We need a way to get rv back.
- locals_for_exec = locals()
- cProfile.runctx('rv = RunValidationOutputFromOptions(feed, options)',
- globals(), locals_for_exec, 'validate-stats')
-
- # Only available on Unix, http://docs.python.org/lib/module-resource.html
- import resource
- print "Time: %d seconds" % (
- resource.getrusage(resource.RUSAGE_SELF).ru_utime +
- resource.getrusage(resource.RUSAGE_SELF).ru_stime)
-
- # http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/286222
- # http://aspn.activestate.com/ASPN/Cookbook/ "The recipes are freely
- # available for review and use."
- def _VmB(VmKey):
- """Return size from proc status in bytes."""
- _proc_status = '/proc/%d/status' % os.getpid()
- _scale = {'kB': 1024.0, 'mB': 1024.0*1024.0,
- 'KB': 1024.0, 'MB': 1024.0*1024.0}
-
- # get pseudo file /proc/<pid>/status
- try:
- t = open(_proc_status)
- v = t.read()
- t.close()
- except IOError:
- # no /proc pseudo-file; probably a non-Linux system
- raise Exception("no proc file %s" % _proc_status)
- # get VmKey line e.g. 'VmRSS: 9999 kB\n ...'
- i = v.index(VmKey)
- v = v[i:].split(None, 3) # whitespace
- if len(v) < 3:
- # invalid format
- raise Exception("%s" % v)
- # convert Vm value to bytes
- return int(float(v[1]) * _scale[v[2]])
-
- # I ran this on over a hundred GTFS files, comparing VmSize to VmRSS
- # (resident set size). The difference was always under 2% or 3MB.
- print "Virtual Memory Size: %d bytes" % _VmB('VmSize:')
-
- # Output report of where CPU time was spent.
- p = pstats.Stats('validate-stats')
- p.strip_dirs()
- p.sort_stats('cumulative').print_stats(30)
- p.sort_stats('cumulative').print_callers(30)
- return locals_for_exec['rv']
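_VmB above reads /proc/&lt;pid&gt;/status and scales the VmSize value into bytes. The parsing step can be isolated so it is testable off-Linux (the function name and string-taking signature are mine):

```python
def parse_vm_size(status_text, key='VmSize:'):
    """Extract a memory value in bytes from /proc/<pid>/status content.

    A sketch of the _VmB parsing logic above, taking the file content
    as a string instead of reading the pseudo-file directly.
    """
    scale = {'kB': 1024.0, 'mB': 1024.0 * 1024.0,
             'KB': 1024.0, 'MB': 1024.0 * 1024.0}
    i = status_text.index(key)
    # e.g. 'VmSize:  9999 kB\n...' -> ['VmSize:', '9999', 'kB', rest]
    fields = status_text[i:].split(None, 3)
    if len(fields) < 3:
        raise ValueError('unparsable status line: %r' % fields)
    return int(float(fields[1]) * scale[fields[2]])
```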
-
-
-if __name__ == '__main__':
- util.RunWithCrashHandler(main)
-
--- a/origin-src/transitfeed-1.2.5/build/scripts-2.6/kmlparser.py
+++ /dev/null
@@ -1,147 +1,1 @@
-#!/usr/bin/python
-# Copyright (C) 2007 Google Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-This package provides implementation of a converter from a kml
-file format into Google transit feed format.
-
-The KmlParser class is the main class implementing the parser.
-
-Currently only information about stops is extracted from a kml file.
-The extractor expects the stops to be represented as placemarks with
-a single point.
-"""
-
-import re
-import string
-import sys
-import transitfeed
-from transitfeed import util
-import xml.dom.minidom as minidom
-import zipfile
-
-
-class Placemark(object):
- def __init__(self):
- self.name = ""
- self.coordinates = []
-
- def IsPoint(self):
- return len(self.coordinates) == 1
-
- def IsLine(self):
- return len(self.coordinates) > 1
-
-class KmlParser(object):
- def __init__(self, stopNameRe = '(.*)'):
- """
- Args:
- stopNameRe - a regular expression to extract a stop name from a
- placemark name
- """
- self.stopNameRe = re.compile(stopNameRe)
-
- def Parse(self, filename, feed):
- """
- Reads the kml file, parses it and updates the Google transit feed
- object with the extracted information.
-
- Args:
- filename - kml file name
- feed - an instance of Schedule class to be updated
- """
- dom = minidom.parse(filename)
- self.ParseDom(dom, feed)
-
- def ParseDom(self, dom, feed):
- """
- Parses the given kml dom tree and updates the Google transit feed object.
-
- Args:
- dom - kml dom tree
- feed - an instance of Schedule class to be updated
- """
- shape_num = 0
- for node in dom.getElementsByTagName('Placemark'):
- p = self.ParsePlacemark(node)
- if p.IsPoint():
- (lon, lat) = p.coordinates[0]
- m = self.stopNameRe.search(p.name)
- feed.AddStop(lat, lon, m.group(1))
- elif p.IsLine():
- shape_num = shape_num + 1
- shape = transitfeed.Shape("kml_shape_" + str(shape_num))
- for (lon, lat) in p.coordinates:
- shape.AddPoint(lat, lon)
- feed.AddShapeObject(shape)
-
- def ParsePlacemark(self, node):
- ret = Placemark()
- for child in node.childNodes:
- if child.nodeName == 'name':
- ret.name = self.ExtractText(child)
- if child.nodeName == 'Point' or child.nodeName == 'LineString':
- ret.coordinates = self.ExtractCoordinates(child)
- return ret
-
- def ExtractText(self, node):
- for child in node.childNodes:
- if child.nodeType == child.TEXT_NODE:
- return child.wholeText # is a unicode string
- return ""
-
- def ExtractCoordinates(self, node):
- coordinatesText = ""
- for child in node.childNodes:
- if child.nodeName == 'coordinates':
- coordinatesText = self.ExtractText(child)
- break
- ret = []
- for point in coordinatesText.split():
- coords = point.split(',')
- ret.append((float(coords[0]), float(coords[1])))
- return ret
-
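ExtractCoordinates splits the text of a KML &lt;coordinates&gt; element on whitespace and each token on commas, keeping longitude and latitude. The same parsing in Python 3 (the function name is mine; KML stores points as 'lon,lat[,alt]', and any altitude is dropped):

```python
def parse_kml_coordinates(text):
    """Parse a KML <coordinates> string into (lon, lat) tuples.

    Mirrors ExtractCoordinates above: tokens are whitespace-separated,
    each one a comma-separated 'lon,lat' or 'lon,lat,alt' triple.
    """
    points = []
    for token in text.split():
        parts = token.split(',')
        points.append((float(parts[0]), float(parts[1])))
    return points
```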
-
-def main():
- usage = \
-"""%prog <input.kml> <output GTFS.zip>
-
-Reads KML file <input.kml> and creates GTFS file <output GTFS.zip> with
-placemarks in the KML represented as stops.
-"""
-
- parser = util.OptionParserLongError(
- usage=usage, version='%prog '+transitfeed.__version__)
- (options, args) = parser.parse_args()
- if len(args) != 2:
- parser.error('You did not provide all required command line arguments.')
-
- if args[0] == 'IWantMyCrash':
- raise Exception('For testCrashHandler')
-
- parser = KmlParser()
- feed = transitfeed.Schedule()
- feed.save_all_stops = True
- parser.Parse(args[0], feed)
- feed.WriteGoogleTransitFeed(args[1])
-
- print "Done."
-
-
-if __name__ == '__main__':
- util.RunWithCrashHandler(main)
-
--- a/origin-src/transitfeed-1.2.5/build/scripts-2.6/kmlwriter.py
+++ /dev/null
@@ -1,648 +1,1 @@
-#!/usr/bin/python
-#
-# Copyright 2008 Google Inc. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""A module for writing GTFS feeds out into Google Earth KML format.
-
-For usage information run kmlwriter.py --help
-
-If no output filename is specified, the output file will be given the same
-name as the feed file (with ".kml" appended) and will be placed in the same
-directory as the input feed.
-
-The resulting KML file has a folder hierarchy which looks like this:
-
- - Stops
- * stop1
- * stop2
- - Routes
- - route1
- - Shapes
- * shape1
- * shape2
- - Patterns
- - pattern1
- - pattern2
- - Trips
- * trip1
- * trip2
- - Shapes
- * shape1
- - Shape Points
- * shape_point1
- * shape_point2
- * shape2
- - Shape Points
- * shape_point1
- * shape_point2
-
-where the hyphens represent folders and the asterisks represent placemarks.
-
-In a trip, a vehicle visits stops in a certain sequence. Such a sequence of
-stops is called a pattern. A pattern is represented by a linestring connecting
-the stops. The "Shapes" subfolder of a route folder contains placemarks for
-each shape used by a trip in the route. The "Patterns" subfolder contains a
-placemark for each unique pattern used by a trip in the route. The "Trips"
-subfolder contains a placemark for each trip in the route.
-
-Since there can be many trips and trips for the same route are usually similar,
-they are not exported unless the --showtrips option is used. There is also
-another option --splitroutes that groups the routes by vehicle type resulting
-in a folder hierarchy which looks like this at the top level:
-
- - Stops
- - Routes - Bus
- - Routes - Tram
- - Routes - Rail
- - Shapes
-"""
-
-try:
- import xml.etree.ElementTree as ET # python 2.5
-except ImportError, e:
- import elementtree.ElementTree as ET # older pythons
-import optparse
-import os.path
-import sys
-import transitfeed
-from transitfeed import util
-
-
-class KMLWriter(object):
- """This class knows how to write out a transit feed as KML.
-
- Sample usage:
- KMLWriter().Write(<transitfeed.Schedule object>, <output filename>)
-
- Attributes:
- show_trips: True if the individual trips should be included in the routes.
- split_routes: True if the routes should be split by type.
- shape_points: True if individual shape points should be plotted.
- altitude_per_sec: if positive, trip placemarks gain altitude at this rate
- per second of trip time.
- date_filter: if not None, only trips active on this date are written.
- """
-
- def __init__(self):
- """Initialise."""
- self.show_trips = False
- self.split_routes = False
- self.shape_points = False
- self.altitude_per_sec = 0.0
- self.date_filter = None
-
- def _SetIndentation(self, elem, level=0):
- """Indent the ElementTree DOM in place.
-
- This is the recommended way to cause an ElementTree DOM to be
- prettyprinted on output, as per: http://effbot.org/zone/element-lib.htm
-
- Run this on the root element before outputting the tree.
-
- Args:
- elem: The element to start indenting from, usually the document root.
- level: Current indentation level for recursion.
- """
- i = "\n" + level*" "
- if len(elem):
- if not elem.text or not elem.text.strip():
- elem.text = i + " "
- for elem in elem:
- self._SetIndentation(elem, level+1)
- if not elem.tail or not elem.tail.strip():
- elem.tail = i
- else:
- if level and (not elem.tail or not elem.tail.strip()):
- elem.tail = i
-
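On Python 3.9+ the recursive _SetIndentation helper above is available in the standard library as xml.etree.ElementTree.indent; a sketch (the wrapper function name is mine):

```python
import xml.etree.ElementTree as ET

def pretty_kml(root):
    """Serialize an ElementTree to an indented Unicode string.

    ET.indent (Python 3.9+) does in place what the recursive
    _SetIndentation helper above does by hand.
    """
    ET.indent(root)  # two-space indent by default
    return ET.tostring(root, encoding='unicode')
```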
- def _CreateFolder(self, parent, name, visible=True, description=None):
- """Create a KML Folder element.
-
- Args:
- parent: The parent ElementTree.Element instance.
- name: The folder name as a string.
- visible: Whether the folder is initially visible or not.
- description: A description string or None.
-
- Returns:
- The folder ElementTree.Element instance.
- """
- folder = ET.SubElement(parent, 'Folder')
- name_tag = ET.SubElement(folder, 'name')
- name_tag.text = name
- if description is not None:
- desc_tag = ET.SubElement(folder, 'description')
- desc_tag.text = description
- if not visible:
- visibility = ET.SubElement(folder, 'visibility')
- visibility.text = '0'
- return folder
-
- def _CreateStyleForRoute(self, doc, route):
- """Create a KML Style element for the route.
-
- The style sets the line colour if the route colour is specified. The
- line thickness is set depending on the vehicle type.
-
- Args:
- doc: The KML Document ElementTree.Element instance.
- route: The transitfeed.Route to create the style for.
-
- Returns:
- The id of the style as a string.
- """
- style_id = 'route_%s' % route.route_id
- style = ET.SubElement(doc, 'Style', {'id': style_id})
- linestyle = ET.SubElement(style, 'LineStyle')
- width = ET.SubElement(linestyle, 'width')
- type_to_width = {0: '3', # Tram
- 1: '3', # Subway
- 2: '5', # Rail
- 3: '1'} # Bus
- width.text = type_to_width.get(route.route_type, '1')
- if route.route_color:
- color = ET.SubElement(linestyle, 'color')
- red = route.route_color[0:2].lower()
- green = route.route_color[2:4].lower()
- blue = route.route_color[4:6].lower()
- color.text = 'ff%s%s%s' % (blue, green, red)
- return style_id
-
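KML LineStyle colours are ordered aabbggrr (alpha, blue, green, red) while GTFS route_color is RRGGBB, hence the channel swap in _CreateStyleForRoute. The conversion in isolation (the function name is mine):

```python
def gtfs_color_to_kml(route_color, alpha='ff'):
    """Convert a GTFS RRGGBB route_color into KML's aabbggrr order.

    Mirrors the channel reordering done in _CreateStyleForRoute above.
    """
    red = route_color[0:2].lower()
    green = route_color[2:4].lower()
    blue = route_color[4:6].lower()
    return '%s%s%s%s' % (alpha, blue, green, red)
```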
- def _CreatePlacemark(self, parent, name, style_id=None, visible=True,
- description=None):
- """Create a KML Placemark element.
-
- Args:
- parent: The parent ElementTree.Element instance.
- name: The placemark name as a string.
- style_id: If not None, the id of a style to use for the placemark.
- visible: Whether the placemark is initially visible or not.
- description: A description string or None.
-
- Returns:
- The placemark ElementTree.Element instance.
- """
- placemark = ET.SubElement(parent, 'Placemark')
- placemark_name = ET.SubElement(placemark, 'name')
- placemark_name.text = name
- if description is not None:
- desc_tag = ET.SubElement(placemark, 'description')
- desc_tag.text = description
- if style_id is not None:
- styleurl = ET.SubElement(placemark, 'styleUrl')
- styleurl.text = '#%s' % style_id
- if not visible:
- visibility = ET.SubElement(placemark, 'visibility')
- visibility.text = '0'
- return placemark
-
- def _CreateLineString(self, parent, coordinate_list):
- """Create a KML LineString element.
-
- The points of the string are given in coordinate_list. Every element of
- coordinate_list should be one of a tuple (longitude, latitude) or a tuple
- (longitude, latitude, altitude).
-
- Args:
- parent: The parent ElementTree.Element instance.
- coordinate_list: The list of coordinates.
-
- Returns:
- The LineString ElementTree.Element instance or None if coordinate_list is
- empty.
- """
- if not coordinate_list:
- return None
- linestring = ET.SubElement(parent, 'LineString')
- tessellate = ET.SubElement(linestring, 'tessellate')
- tessellate.text = '1'
- if len(coordinate_list[0]) == 3:
- altitude_mode = ET.SubElement(linestring, 'altitudeMode')
- altitude_mode.text = 'absolute'
- coordinates = ET.SubElement(linestring, 'coordinates')
- if len(coordinate_list[0]) == 3:
- coordinate_str_list = ['%f,%f,%f' % t for t in coordinate_list]
- else:
- coordinate_str_list = ['%f,%f' % t for t in coordinate_list]
- coordinates.text = ' '.join(coordinate_str_list)
- return linestring
-
- def _CreateLineStringForShape(self, parent, shape):
- """Create a KML LineString using coordinates from a shape.
-
- Args:
- parent: The parent ElementTree.Element instance.
- shape: The transitfeed.Shape instance.
-
- Returns:
- The LineString ElementTree.Element instance or None if the shape has no
- points.
- """
- coordinate_list = [(longitude, latitude) for
- (latitude, longitude, distance) in shape.points]
- return self._CreateLineString(parent, coordinate_list)
-
- def _CreateStopsFolder(self, schedule, doc):
- """Create a KML Folder containing placemarks for each stop in the schedule.
-
- If there are no stops in the schedule then no folder is created.
-
- Args:
- schedule: The transitfeed.Schedule instance.
- doc: The KML Document ElementTree.Element instance.
-
- Returns:
- The Folder ElementTree.Element instance or None if there are no stops.
- """
- if not schedule.GetStopList():
- return None
- stop_folder = self._CreateFolder(doc, 'Stops')
- stops = list(schedule.GetStopList())
- stops.sort(key=lambda x: x.stop_name)
- for stop in stops:
- desc_items = []
- if stop.stop_desc:
- desc_items.append(stop.stop_desc)
- if stop.stop_url:
- desc_items.append('Stop info page: <a href="%s">%s</a>' % (
- stop.stop_url, stop.stop_url))
- description = '<br/>'.join(desc_items) or None
- placemark = self._CreatePlacemark(stop_folder, stop.stop_name,
- description=description)
- point = ET.SubElement(placemark, 'Point')
- coordinates = ET.SubElement(point, 'coordinates')
- coordinates.text = '%.6f,%.6f' % (stop.stop_lon, stop.stop_lat)
- return stop_folder
-
- def _CreateRoutePatternsFolder(self, parent, route,
- style_id=None, visible=True):
- """Create a KML Folder containing placemarks for each pattern in the route.
-
- A pattern is a sequence of stops used by one of the trips in the route.
-
- If there are no patterns for the route then no folder is created and None
- is returned.
-
- Args:
- parent: The parent ElementTree.Element instance.
- route: The transitfeed.Route instance.
- style_id: The id of a style to use if not None.
- visible: Whether the folder is initially visible or not.
-
- Returns:
- The Folder ElementTree.Element instance or None if there are no patterns.
- """
- pattern_id_to_trips = route.GetPatternIdTripDict()
- if not pattern_id_to_trips:
- return None
-
- # sort by number of trips using the pattern
- pattern_trips = pattern_id_to_trips.values()
- pattern_trips.sort(lambda a, b: cmp(len(b), len(a)))
-
- folder = self._CreateFolder(parent, 'Patterns', visible)
- for n, trips in enumerate(pattern_trips):
- trip_ids = [trip.trip_id for trip in trips]
- name = 'Pattern %d (trips: %d)' % (n+1, len(trips))
- description = 'Trips using this pattern (%d in total): %s' % (
- len(trips), ', '.join(trip_ids))
- placemark = self._CreatePlacemark(folder, name, style_id, visible,
- description)
- coordinates = [(stop.stop_lon, stop.stop_lat)
- for stop in trips[0].GetPattern()]
- self._CreateLineString(placemark, coordinates)
- return folder
-
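The cmp-style sort above (`pattern_trips.sort(lambda a, b: cmp(len(b), len(a)))`) works only on Python 2; Python 3 removes cmp-based sorting in favour of key functions. An equivalent descending-by-trip-count sort (the function name is mine):

```python
def sort_by_trip_count(pattern_trips):
    """Sort lists of trips so the most-used pattern comes first.

    A Python 3 key-based equivalent of the cmp-based sort used in
    _CreateRoutePatternsFolder above.
    """
    return sorted(pattern_trips, key=len, reverse=True)
```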
- def _CreateRouteShapesFolder(self, schedule, parent, route,
- style_id=None, visible=True):
- """Create a KML Folder for the shapes of a route.
-
- The folder contains a placemark for each shape referenced by a trip in the
- route. If there are no such shapes, no folder is created and None is
- returned.
-
- Args:
- schedule: The transitfeed.Schedule instance.
- parent: The parent ElementTree.Element instance.
- route: The transitfeed.Route instance.
- style_id: The id of a style to use if not None.
- visible: Whether the placemark is initially visible or not.
-
- Returns:
- The Folder ElementTree.Element instance or None.
- """
- shape_id_to_trips = {}
- for trip in route.trips:
- if trip.shape_id:
- shape_id_to_trips.setdefault(trip.shape_id, []).append(trip)
- if not shape_id_to_trips:
- return None
-
- # sort by the number of trips using the shape
- shape_id_to_trips_items = shape_id_to_trips.items()
- shape_id_to_trips_items.sort(lambda a, b: cmp(len(b[1]), len(a[1])))
-
- folder = self._CreateFolder(parent, 'Shapes', visible)
- for shape_id, trips in shape_id_to_trips_items:
- trip_ids = [trip.trip_id for trip in trips]
- name = '%s (trips: %d)' % (shape_id, len(trips))
- description = 'Trips using this shape (%d in total): %s' % (
- len(trips), ', '.join(trip_ids))
- placemark = self._CreatePlacemark(folder, name, style_id, visible,
- description)
- self._CreateLineStringForShape(placemark, schedule.GetShape(shape_id))
- return folder
-
- def _CreateRouteTripsFolder(self, parent, route, style_id=None, schedule=None):
- """Create a KML Folder containing all the trips in the route.
-
- The folder contains a placemark for each of these trips. If there are no
- trips in the route, no folder is created and None is returned.
-
- Args:
- parent: The parent ElementTree.Element instance.
- route: The transitfeed.Route instance.
- style_id: A style id string for the placemarks or None.
- schedule: The transitfeed.Schedule instance or None (currently unused).
-
- Returns:
- The Folder ElementTree.Element instance or None.
- """
- if not route.trips:
- return None
- trips = list(route.trips)
- trips.sort(key=lambda x: x.trip_id)
- trips_folder = self._CreateFolder(parent, 'Trips', visible=False)
- for trip in trips:
- if (self.date_filter and
- not trip.service_period.IsActiveOn(self.date_filter)):
- continue
-
- if trip.trip_headsign:
- description = 'Headsign: %s' % trip.trip_headsign
- else:
- description = None
-
- coordinate_list = []
- for secs, stoptime, tp in trip.GetTimeInterpolatedStops():
- if self.altitude_per_sec > 0:
- coordinate_list.append((stoptime.stop.stop_lon, stoptime.stop.stop_lat,
- (secs - 3600 * 4) * self.altitude_per_sec))
- else:
- coordinate_list.append((stoptime.stop.stop_lon,
- stoptime.stop.stop_lat))
- placemark = self._CreatePlacemark(trips_folder,
- trip.trip_id,
- style_id=style_id,
- visible=False,
- description=description)
- self._CreateLineString(placemark, coordinate_list)
- return trips_folder
-
- def _CreateRoutesFolder(self, schedule, doc, route_type=None):
- """Create a KML Folder containing routes in a schedule.
-
- The folder contains a subfolder for each route in the schedule of type
- route_type. If route_type is None, then all routes are selected. Each
- subfolder contains a flattened graph placemark, a route shapes placemark
- and, if show_trips is True, a subfolder containing placemarks for each of
- the trips in the route.
-
- If there are no routes in the schedule then no folder is created and None
- is returned.
-
- Args:
- schedule: The transitfeed.Schedule instance.
- doc: The KML Document ElementTree.Element instance.
- route_type: The route type integer or None.
-
- Returns:
- The Folder ElementTree.Element instance or None.
- """
-
- def GetRouteName(route):
- """Return a placemark name for the route.
-
- Args:
- route: The transitfeed.Route instance.
-
- Returns:
- The name as a string.
- """
- name_parts = []
- if route.route_short_name:
- name_parts.append('<b>%s</b>' % route.route_short_name)
- if route.route_long_name:
- name_parts.append(route.route_long_name)
- return ' - '.join(name_parts) or route.route_id
-
- def GetRouteDescription(route):
- """Return a placemark description for the route.
-
- Args:
- route: The transitfeed.Route instance.
-
- Returns:
- The description as a string.
- """
- desc_items = []
- if route.route_desc:
- desc_items.append(route.route_desc)
- if route.route_url:
- desc_items.append('Route info page: <a href="%s">%s</a>' % (
- route.route_url, route.route_url))
- description = '<br/>'.join(desc_items)
- return description or None
-
- routes = [route for route in schedule.GetRouteList()
- if route_type is None or route.route_type == route_type]
- if not routes:
- return None
- routes.sort(key=lambda x: GetRouteName(x))
-
- if route_type is not None:
- route_type_names = {0: 'Tram, Streetcar or Light rail',
- 1: 'Subway or Metro',
- 2: 'Rail',
- 3: 'Bus',
- 4: 'Ferry',
- 5: 'Cable car',
- 6: 'Gondola or suspended cable car',
- 7: 'Funicular'}
- type_name = route_type_names.get(route_type, str(route_type))
- folder_name = 'Routes - %s' % type_name
- else:
- folder_name = 'Routes'
- routes_folder = self._CreateFolder(doc, folder_name, visible=False)
-
- for route in routes:
- style_id = self._CreateStyleForRoute(doc, route)
- route_folder = self._CreateFolder(routes_folder,
- GetRouteName(route),
- description=GetRouteDescription(route))
- self._CreateRouteShapesFolder(schedule, route_folder, route,
- style_id, False)
- self._CreateRoutePatternsFolder(route_folder, route, style_id, False)
- if self.show_trips:
- self._CreateRouteTripsFolder(route_folder, route, style_id, schedule)
- return routes_folder
-
- def _CreateShapesFolder(self, schedule, doc):
- """Create a KML Folder containing all the shapes in a schedule.
-
- The folder contains a placemark for each shape. If there are no shapes in
- the schedule then the folder is not created and None is returned.
-
- Args:
- schedule: The transitfeed.Schedule instance.
- doc: The KML Document ElementTree.Element instance.
-
- Returns:
- The Folder ElementTree.Element instance or None.
- """
- if not schedule.GetShapeList():
- return None
- shapes_folder = self._CreateFolder(doc, 'Shapes')
- shapes = list(schedule.GetShapeList())
- shapes.sort(key=lambda x: x.shape_id)
- for shape in shapes:
- placemark = self._CreatePlacemark(shapes_folder, shape.shape_id)
- self._CreateLineStringForShape(placemark, shape)
- if self.shape_points:
- self._CreateShapePointFolder(shapes_folder, shape)
- return shapes_folder
-
- def _CreateShapePointFolder(self, shapes_folder, shape):
- """Create a KML Folder containing all the shape points in a shape.
-
- The folder contains placemarks for each shapepoint.
-
- Args:
- shapes_folder: A KML Shape Folder ElementTree.Element instance
- shape: The shape to plot.
-
- Returns:
- The Folder ElementTree.Element instance or None.
- """
-
- folder_name = shape.shape_id + ' Shape Points'
- folder = self._CreateFolder(shapes_folder, folder_name, visible=False)
- for (index, (lat, lon, dist)) in enumerate(shape.points):
- placemark = self._CreatePlacemark(folder, str(index+1))
- point = ET.SubElement(placemark, 'Point')
- coordinates = ET.SubElement(point, 'coordinates')
- coordinates.text = '%.6f,%.6f' % (lon, lat)
- return folder
-
- def Write(self, schedule, output_file):
- """Writes out a feed as KML.
-
- Args:
- schedule: A transitfeed.Schedule object containing the feed to write.
- output_file: The name of the output KML file, or file object to use.
- """
- # Generate the DOM to write
- root = ET.Element('kml')
- root.attrib['xmlns'] = 'http://earth.google.com/kml/2.1'
- doc = ET.SubElement(root, 'Document')
- open_tag = ET.SubElement(doc, 'open')
- open_tag.text = '1'
- self._CreateStopsFolder(schedule, doc)
- if self.split_routes:
- route_types = set()
- for route in schedule.GetRouteList():
- route_types.add(route.route_type)
- route_types = list(route_types)
- route_types.sort()
- for route_type in route_types:
- self._CreateRoutesFolder(schedule, doc, route_type)
- else:
- self._CreateRoutesFolder(schedule, doc)
- self._CreateShapesFolder(schedule, doc)
-
- # Make sure we pretty-print
- self._SetIndentation(root)
-
- # Now write the output
- if isinstance(output_file, file):
- output = output_file
- else:
- output = open(output_file, 'w')
- output.write("""<?xml version="1.0" encoding="UTF-8"?>\n""")
- ET.ElementTree(root).write(output, 'utf-8')
-
-
-def main():
- usage = \
-'''%prog [options] <input GTFS.zip> [<output.kml>]
-
-Reads GTFS file or directory <input GTFS.zip> and creates a KML file
-<output.kml> that contains the geographical features of the input. If
-<output.kml> is omitted a default filename is picked based on
-<input GTFS.zip>. By default the KML contains all stops and shapes.
-'''
-
- parser = util.OptionParserLongError(
- usage=usage, version='%prog '+transitfeed.__version__)
- parser.add_option('-t', '--showtrips', action='store_true',
- dest='show_trips',
- help='include the individual trips for each route')
- parser.add_option('-a', '--altitude_per_sec', action='store', type='float',
- dest='altitude_per_sec',
- help='if greater than 0 trips are drawn with time axis '
- 'set to this many meters high for each second of time')
- parser.add_option('-s', '--splitroutes', action='store_true',
- dest='split_routes',
- help='split the routes by type')
- parser.add_option('-d', '--date_filter', action='store', type='string',
- dest='date_filter',
- help='Restrict to trips active on date YYYYMMDD')
- parser.add_option('-p', '--display_shape_points', action='store_true',
- dest='shape_points',
- help='shows the actual points along shapes')
-
- parser.set_defaults(altitude_per_sec=1.0)
- options, args = parser.parse_args()
-
- if len(args) < 1:
- parser.error('You must provide the path of an input GTFS file.')
-
- if args[0] == 'IWantMyCrash':
- raise Exception('For testCrashHandler')
-
- input_path = args[0]
- if len(args) >= 2:
- output_path = args[1]
- else:
- path = os.path.normpath(input_path)
- (feed_dir, feed) = os.path.split(path)
- if '.' in feed:
- feed = feed.rsplit('.', 1)[0] # strip extension
- output_filename = '%s.kml' % feed
- output_path = os.path.join(feed_dir, output_filename)
-
- loader = transitfeed.Loader(input_path,
- problems=transitfeed.ProblemReporter())
- feed = loader.Load()
- print "Writing %s" % output_path
- writer = KMLWriter()
- writer.show_trips = options.show_trips
- writer.altitude_per_sec = options.altitude_per_sec
- writer.split_routes = options.split_routes
- writer.date_filter = options.date_filter
- writer.shape_points = options.shape_points
- writer.Write(feed, output_path)
-
-
-if __name__ == '__main__':
- util.RunWithCrashHandler(main)
-
--- a/origin-src/transitfeed-1.2.5/build/scripts-2.6/merge.py
+++ /dev/null
@@ -1,1766 +1,1 @@
-#!/usr/bin/python
-#
-# Copyright 2007 Google Inc. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""A tool for merging two Google Transit feeds.
-
-Given two Google Transit feeds intended to cover two disjoint calendar
-intervals, this tool will attempt to produce a single feed by merging as much
-of the two feeds together as possible.
-
-For example, most stops remain the same throughout the year. Therefore, many
-of the stops given in stops.txt for the first feed represent the same stops
-given in the second feed. This tool will try to merge these stops so they
-only appear once in the resultant feed.
-
-A note on terminology: The first schedule is referred to as the "old" schedule;
-the second as the "new" schedule. The resultant schedule is referred to as
-the "merged" schedule. Names of things in the old schedule are variations of
-the letter "a" while names of things from the new schedule are variations of
-"b". The objects that represents routes, agencies and so on are called
-"entities".
-
-usage: merge.py [options] old_feed_path new_feed_path merged_feed_path
-
-Run merge.py --help for a list of the possible options.
-"""
-
-
-__author__ = 'timothy.stranex@gmail.com (Timothy Stranex)'
-
-
-import datetime
-import optparse
-import os
-import re
-import sys
-import time
-import transitfeed
-from transitfeed import util
-import webbrowser
-
-
-# TODO:
-# 1. write unit tests that use actual data
-# 2. write a proper trip and stop_times merger
-# 3. add a serialised access method for stop_times and shapes to transitfeed
-# 4. add support for merging schedules which have some service period overlap
-
-
-def ApproximateDistanceBetweenPoints(pa, pb):
- """Finds the distance between two points on the Earth's surface.
-
- This is an approximate distance based on assuming that the Earth is a sphere.
- The points are specified by their latitude and longitude.
-
- Args:
- pa: the first (lat, lon) point tuple
- pb: the second (lat, lon) point tuple
-
- Returns:
- The distance as a float in metres.
- """
- alat, alon = pa
- blat, blon = pb
- sa = transitfeed.Stop(lat=alat, lng=alon)
- sb = transitfeed.Stop(lat=blat, lng=blon)
- return transitfeed.ApproximateDistanceBetweenStops(sa, sb)
-
-
-class Error(Exception):
- """The base exception class for this module."""
-
-
-class MergeError(Error):
- """An error produced when two entities could not be merged."""
-
-
-class MergeProblemWithContext(transitfeed.ExceptionWithContext):
- """The base exception class for problem reporting in the merge module.
-
- Attributes:
- dataset_merger: The DataSetMerger that generated this problem.
- entity_type_name: The entity type of the dataset_merger. This is just
- dataset_merger.ENTITY_TYPE_NAME.
- ERROR_TEXT: The text used for generating the problem message.
- """
-
- def __init__(self, dataset_merger, problem_type=transitfeed.TYPE_WARNING,
- **kwargs):
- """Initialise the exception object.
-
- Args:
- dataset_merger: The DataSetMerger instance that generated this problem.
- problem_type: The problem severity. This should be set to one of the
- corresponding constants in transitfeed.
- kwargs: Keyword arguments to be saved as instance attributes.
- """
- kwargs['type'] = problem_type
- kwargs['entity_type_name'] = dataset_merger.ENTITY_TYPE_NAME
- transitfeed.ExceptionWithContext.__init__(self, None, None, **kwargs)
- self.dataset_merger = dataset_merger
-
- def FormatContext(self):
- return "In files '%s'" % self.dataset_merger.FILE_NAME
-
-
-class SameIdButNotMerged(MergeProblemWithContext):
- ERROR_TEXT = ("There is a %(entity_type_name)s in the old feed with id "
- "'%(id)s' and one from the new feed with the same id but "
- "they could not be merged:")
-
-
-class CalendarsNotDisjoint(MergeProblemWithContext):
- ERROR_TEXT = ("The service periods could not be merged since they are not "
- "disjoint.")
-
-
-class MergeNotImplemented(MergeProblemWithContext):
- ERROR_TEXT = ("The feed merger does not currently support merging in this "
- "file. The entries have been duplicated instead.")
-
-
-class FareRulesBroken(MergeProblemWithContext):
- ERROR_TEXT = ("The feed merger is currently unable to handle fare rules "
- "properly.")
-
-
-class MergeProblemReporterBase(transitfeed.ProblemReporterBase):
- """The base problem reporter class for the merge module."""
-
- def SameIdButNotMerged(self, dataset, entity_id, reason):
- self._Report(SameIdButNotMerged(dataset, id=entity_id, reason=reason))
-
- def CalendarsNotDisjoint(self, dataset):
- self._Report(CalendarsNotDisjoint(dataset,
- problem_type=transitfeed.TYPE_ERROR))
-
- def MergeNotImplemented(self, dataset):
- self._Report(MergeNotImplemented(dataset))
-
- def FareRulesBroken(self, dataset):
- self._Report(FareRulesBroken(dataset))
-
-
-class ExceptionProblemReporter(MergeProblemReporterBase):
- """A problem reporter that reports errors by raising exceptions."""
-
- def __init__(self, raise_warnings=False):
- """Initialise.
-
- Args:
- raise_warnings: If this is True then warnings are also raised as
- exceptions.
- """
- MergeProblemReporterBase.__init__(self)
- self._raise_warnings = raise_warnings
-
- def _Report(self, merge_problem):
- if self._raise_warnings or merge_problem.IsError():
- raise merge_problem
-
-
-class HTMLProblemReporter(MergeProblemReporterBase):
- """A problem reporter which generates HTML output."""
-
- def __init__(self):
- """Initialise."""
- MergeProblemReporterBase.__init__(self)
- self._dataset_warnings = {} # a map from DataSetMergers to their warnings
- self._dataset_errors = {}
- self._warning_count = 0
- self._error_count = 0
-
- def _Report(self, merge_problem):
- if merge_problem.IsWarning():
- dataset_problems = self._dataset_warnings
- self._warning_count += 1
- else:
- dataset_problems = self._dataset_errors
- self._error_count += 1
-
- problem_html = '<li>%s</li>' % (
- merge_problem.FormatProblem().replace('\n', '<br>'))
- dataset_problems.setdefault(merge_problem.dataset_merger, []).append(
- problem_html)
-
- def _GenerateStatsTable(self, feed_merger):
- """Generate an HTML table of merge statistics.
-
- Args:
- feed_merger: The FeedMerger instance.
-
- Returns:
- The generated HTML as a string.
- """
- rows = []
- rows.append('<tr><th class="header"/><th class="header">Merged</th>'
- '<th class="header">Copied from old feed</th>'
- '<th class="header">Copied from new feed</th></tr>')
- for merger in feed_merger.GetMergerList():
- stats = merger.GetMergeStats()
- if stats is None:
- continue
- merged, not_merged_a, not_merged_b = stats
- rows.append('<tr><th class="header">%s</th>'
- '<td class="header">%d</td>'
- '<td class="header">%d</td>'
- '<td class="header">%d</td></tr>' %
- (merger.DATASET_NAME, merged, not_merged_a, not_merged_b))
- return '<table>%s</table>' % '\n'.join(rows)
-
- def _GenerateSection(self, problem_type):
- """Generate a listing of the given type of problems.
-
- Args:
- problem_type: The type of problem. This is one of the problem type
- constants from transitfeed.
-
- Returns:
- The generated HTML as a string.
- """
- if problem_type == transitfeed.TYPE_WARNING:
- dataset_problems = self._dataset_warnings
- heading = 'Warnings'
- else:
- dataset_problems = self._dataset_errors
- heading = 'Errors'
-
- if not dataset_problems:
- return ''
-
- prefix = '<h2 class="issueHeader">%s:</h2>' % heading
- dataset_sections = []
- for dataset_merger, problems in dataset_problems.items():
- dataset_sections.append('<h3>%s</h3><ol>%s</ol>' % (
- dataset_merger.FILE_NAME, '\n'.join(problems)))
- body = '\n'.join(dataset_sections)
- return prefix + body
-
- def _GenerateSummary(self):
- """Generate a summary of the warnings and errors.
-
- Returns:
- The generated HTML as a string.
- """
- items = []
- if self._dataset_errors:
- items.append('errors: %d' % self._error_count)
- if self._dataset_warnings:
- items.append('warnings: %d' % self._warning_count)
-
- if items:
- return '<p><span class="fail">%s</span></p>' % '<br>'.join(items)
- else:
- return '<p><span class="pass">feeds merged successfully</span></p>'
-
- def WriteOutput(self, output_file, feed_merger,
- old_feed_path, new_feed_path, merged_feed_path):
- """Write the HTML output to a file.
-
- Args:
- output_file: The file object that the HTML output will be written to.
- feed_merger: The FeedMerger instance.
- old_feed_path: The path to the old feed file as a string.
- new_feed_path: The path to the new feed file as a string.
- merged_feed_path: The path to the merged feed file as a string. This
- may be None if no merged feed was written.
- """
- if merged_feed_path is None:
- html_merged_feed_path = ''
- else:
- html_merged_feed_path = '<p>Merged feed created: <code>%s</code></p>' % (
- merged_feed_path)
-
- html_header = """<html>
-<head>
-<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
-<title>Feed Merger Results</title>
-<style>
- body {font-family: Georgia, serif; background-color: white}
- .path {color: gray}
- div.problem {max-width: 500px}
- td,th {background-color: khaki; padding: 2px; font-family:monospace}
- td.problem,th.problem {background-color: #dc143c; color: white; padding: 2px;
- font-family:monospace}
- table {border-spacing: 5px 0px; margin-top: 3px}
- h3.issueHeader {padding-left: 1em}
- span.pass {background-color: lightgreen}
- span.fail {background-color: yellow}
- .pass, .fail {font-size: 16pt; padding: 3px}
- ol,.unused {padding-left: 40pt}
- .header {background-color: white; font-family: Georgia, serif; padding: 0px}
- th.header {text-align: right; font-weight: normal; color: gray}
- .footer {font-size: 10pt}
-</style>
-</head>
-<body>
-<h1>Feed merger results</h1>
-<p>Old feed: <code>%(old_feed_path)s</code></p>
-<p>New feed: <code>%(new_feed_path)s</code></p>
-%(html_merged_feed_path)s""" % locals()
-
- html_stats = self._GenerateStatsTable(feed_merger)
- html_summary = self._GenerateSummary()
- html_errors = self._GenerateSection(transitfeed.TYPE_ERROR)
- html_warnings = self._GenerateSection(transitfeed.TYPE_WARNING)
-
- html_footer = """
-<div class="footer">
-Generated using transitfeed version %s on %s.
-</div>
-</body>
-</html>""" % (transitfeed.__version__,
- time.strftime('%B %d, %Y at %I:%M %p %Z'))
-
- output_file.write(transitfeed.EncodeUnicode(html_header))
- output_file.write(transitfeed.EncodeUnicode(html_stats))
- output_file.write(transitfeed.EncodeUnicode(html_summary))
- output_file.write(transitfeed.EncodeUnicode(html_errors))
- output_file.write(transitfeed.EncodeUnicode(html_warnings))
- output_file.write(transitfeed.EncodeUnicode(html_footer))
-
-
-class ConsoleWarningRaiseErrorProblemReporter(transitfeed.ProblemReporterBase):
- """Problem reporter to use when loading feeds for merge."""
-
- def _Report(self, e):
- if e.IsError():
- raise e
- else:
- print transitfeed.EncodeUnicode(e.FormatProblem())
- context = e.FormatContext()
- if context:
- print context
-
-
-def LoadWithoutErrors(path, memory_db):
- """"Return a Schedule object loaded from path; sys.exit for any error."""
- loading_problem_handler = ConsoleWarningRaiseErrorProblemReporter()
- try:
- schedule = transitfeed.Loader(path,
- memory_db=memory_db,
- problems=loading_problem_handler).Load()
- except transitfeed.ExceptionWithContext, e:
- print >>sys.stderr, (
- "\n\nFeeds to merge must load without any errors.\n"
- "While loading %s the following error was found:\n%s\n%s\n" %
- (path, e.FormatContext(), transitfeed.EncodeUnicode(e.FormatProblem())))
- sys.exit(1)
- return schedule
-
-
-class DataSetMerger(object):
- """A DataSetMerger is in charge of merging a set of entities.
-
- This is an abstract class and should be subclassed for each different entity
- type.
-
- Attributes:
- ENTITY_TYPE_NAME: The name of the entity type like 'agency' or 'stop'.
- FILE_NAME: The name of the file containing this data set like 'agency.txt'.
- DATASET_NAME: A name for the dataset like 'Agencies' or 'Stops'.
- """
-
- def __init__(self, feed_merger):
- """Initialise.
-
- Args:
- feed_merger: The FeedMerger.
- """
- self.feed_merger = feed_merger
- self._num_merged = 0
- self._num_not_merged_a = 0
- self._num_not_merged_b = 0
-
- def _MergeIdentical(self, a, b):
- """Tries to merge two values. The values are required to be identical.
-
- Args:
- a: The first value.
- b: The second value.
-
- Returns:
- The trivially merged value.
-
- Raises:
- MergeError: The values were not identical.
- """
- if a != b:
- raise MergeError("values must be identical ('%s' vs '%s')" %
- (transitfeed.EncodeUnicode(a),
- transitfeed.EncodeUnicode(b)))
- return b
-
- def _MergeIdenticalCaseInsensitive(self, a, b):
- """Tries to merge two strings.
-
- The strings are required to be the same ignoring case. The second string is
- always used as the merged value.
-
- Args:
- a: The first string.
- b: The second string.
-
- Returns:
- The merged string. This is equal to the second string.
-
- Raises:
- MergeError: The strings were not the same ignoring case.
- """
- if a.lower() != b.lower():
- raise MergeError("values must be the same (case insensitive) "
- "('%s' vs '%s')" % (transitfeed.EncodeUnicode(a),
- transitfeed.EncodeUnicode(b)))
- return b
-
- def _MergeOptional(self, a, b):
- """Tries to merge two values which may be None.
-
- If both values are not None, they are required to be the same and the
- merge is trivial. If one of the values is None and the other is not None,
- the merge results in the one which is not None. If both are None, the merge
- results in None.
-
- Args:
- a: The first value.
- b: The second value.
-
- Returns:
- The merged value.
-
- Raises:
- MergeError: If both values are not None and are not the same.
- """
- if a and b:
- if a != b:
- raise MergeError("values must be identical if both specified "
- "('%s' vs '%s')" % (transitfeed.EncodeUnicode(a),
- transitfeed.EncodeUnicode(b)))
- return a or b
-
- def _MergeSameAgency(self, a_agency_id, b_agency_id):
- """Merge agency ids to the corresponding agency id in the merged schedule.
-
- Args:
- a_agency_id: an agency id from the old schedule
- b_agency_id: an agency id from the new schedule
-
- Returns:
- The agency id of the corresponding merged agency.
-
- Raises:
- MergeError: If a_agency_id and b_agency_id do not correspond to the same
- merged agency.
- KeyError: Either a_agency_id or b_agency_id is not a valid agency id.
- """
- a_agency_id = (a_agency_id or
- self.feed_merger.a_schedule.GetDefaultAgency().agency_id)
- b_agency_id = (b_agency_id or
- self.feed_merger.b_schedule.GetDefaultAgency().agency_id)
- a_agency = self.feed_merger.a_merge_map[
- self.feed_merger.a_schedule.GetAgency(a_agency_id)]
- b_agency = self.feed_merger.b_merge_map[
- self.feed_merger.b_schedule.GetAgency(b_agency_id)]
- if a_agency != b_agency:
- raise MergeError('agency must be the same')
- return a_agency.agency_id
-
- def _SchemedMerge(self, scheme, a, b):
- """Tries to merge two entities according to a merge scheme.
-
- A scheme is specified by a map where the keys are entity attributes and the
- values are merge functions like Merger._MergeIdentical or
- Merger._MergeOptional. The entity is first migrated to the merged schedule.
- Then the attributes are individually merged as specified by the scheme.
-
- Args:
- scheme: The merge scheme, a map from entity attributes to merge
- functions.
- a: The entity from the old schedule.
- b: The entity from the new schedule.
-
- Returns:
- The migrated and merged entity.
-
- Raises:
- MergeError: One of the attributes was not able to be merged.
- """
- migrated = self._Migrate(b, self.feed_merger.b_schedule, False)
- for attr, merger in scheme.items():
- a_attr = getattr(a, attr, None)
- b_attr = getattr(b, attr, None)
- try:
- merged_attr = merger(a_attr, b_attr)
- except MergeError, merge_error:
- raise MergeError("Attribute '%s' could not be merged: %s." % (
- attr, merge_error))
- if migrated is not None:
- setattr(migrated, attr, merged_attr)
- return migrated
-
- def _MergeSameId(self):
- """Tries to merge entities based on their ids.
-
- This tries to merge only the entities from the old and new schedules which
- have the same id. These are added into the merged schedule. Entities which
- do not merge or do not have the same id as another entity in the other
- schedule are simply migrated into the merged schedule.
-
- This method is less flexible than _MergeDifferentId since it only tries
- to merge entities which have the same id while _MergeDifferentId tries to
- merge everything. However, it is faster and so should be used whenever
- possible.
-
- This method makes use of various methods like _Merge and _Migrate which
- are not implemented in the abstract DataSetMerger class. These methods
- should be overwritten in a subclass to allow _MergeSameId to work with
- different entity types.
-
- Returns:
- The number of merged entities.
- """
- a_not_merged = []
- b_not_merged = []
-
- for a in self._GetIter(self.feed_merger.a_schedule):
- try:
- b = self._GetById(self.feed_merger.b_schedule, self._GetId(a))
- except KeyError:
- # there was no entity in B with the same id as a
- a_not_merged.append(a)
- continue
- try:
- self._Add(a, b, self._MergeEntities(a, b))
- self._num_merged += 1
- except MergeError, merge_error:
- a_not_merged.append(a)
- b_not_merged.append(b)
- self._ReportSameIdButNotMerged(self._GetId(a), merge_error)
-
- for b in self._GetIter(self.feed_merger.b_schedule):
- try:
- a = self._GetById(self.feed_merger.a_schedule, self._GetId(b))
- except KeyError:
- # there was no entity in A with the same id as b
- b_not_merged.append(b)
-
- # migrate the remaining entities
- for a in a_not_merged:
- newid = self._HasId(self.feed_merger.b_schedule, self._GetId(a))
- self._Add(a, None, self._Migrate(a, self.feed_merger.a_schedule, newid))
- for b in b_not_merged:
- newid = self._HasId(self.feed_merger.a_schedule, self._GetId(b))
- self._Add(None, b, self._Migrate(b, self.feed_merger.b_schedule, newid))
-
- self._num_not_merged_a = len(a_not_merged)
- self._num_not_merged_b = len(b_not_merged)
- return self._num_merged
-
- def _MergeDifferentId(self):
- """Tries to merge all possible combinations of entities.
-
- This tries to merge every entity in the old schedule with every entity in
- the new schedule. Unlike _MergeSameId, the ids do not need to match.
- However, _MergeDifferentId is much slower than _MergeSameId.
-
- This method makes use of various methods like _Merge and _Migrate which
- are not implemented in the abstract DataSetMerger class. These methods
- should be overwritten in a subclass to allow _MergeDifferentId to work with
- different entity types.
-
- Returns:
- The number of merged entities.
- """
- # TODO: The same entity from A could merge with multiple from B.
- # This should either generate an error or should be prevented from
- # happening.
- for a in self._GetIter(self.feed_merger.a_schedule):
- for b in self._GetIter(self.feed_merger.b_schedule):
- try:
- self._Add(a, b, self._MergeEntities(a, b))
- self._num_merged += 1
- except MergeError:
- continue
-
- for a in self._GetIter(self.feed_merger.a_schedule):
- if a not in self.feed_merger.a_merge_map:
- self._num_not_merged_a += 1
- newid = self._HasId(self.feed_merger.b_schedule, self._GetId(a))
- self._Add(a, None,
- self._Migrate(a, self.feed_merger.a_schedule, newid))
- for b in self._GetIter(self.feed_merger.b_schedule):
- if b not in self.feed_merger.b_merge_map:
- self._num_not_merged_b += 1
- newid = self._HasId(self.feed_merger.a_schedule, self._GetId(b))
- self._Add(None, b,
- self._Migrate(b, self.feed_merger.b_schedule, newid))
-
- return self._num_merged
-
- def _ReportSameIdButNotMerged(self, entity_id, reason):
- """Report that two entities have the same id but could not be merged.
-
- Args:
- entity_id: The id of the entities.
- reason: A string giving a reason why they could not be merged.
- """
- self.feed_merger.problem_reporter.SameIdButNotMerged(self,
- entity_id,
- reason)
-
- def _GetIter(self, schedule):
- """Returns an iterator of entities for this data set in the given schedule.
-
- This method usually corresponds to one of the methods from
- transitfeed.Schedule like GetAgencyList() or GetRouteList().
-
- Note: This method must be overwritten in a subclass if _MergeSameId or
- _MergeDifferentId are to be used.
-
- Args:
- schedule: Either the old or new schedule from the FeedMerger.
-
- Returns:
- An iterator of entities.
- """
- raise NotImplementedError()
-
- def _GetById(self, schedule, entity_id):
- """Returns an entity given its id.
-
- This method usually corresponds to one of the methods from
- transitfeed.Schedule like GetAgency() or GetRoute().
-
- Note: This method must be overwritten in a subclass if _MergeSameId or
- _MergeDifferentId are to be used.
-
- Args:
- schedule: Either the old or new schedule from the FeedMerger.
- entity_id: The id string of the entity.
-
- Returns:
- The entity with the given id.
-
- Raises:
- KeyError: There is no entity with the given id.
- """
- raise NotImplementedError()
-
- def _HasId(self, schedule, entity_id):
- """Check if the schedule has an entity with the given id.
-
- Args:
- schedule: The transitfeed.Schedule instance to look in.
- entity_id: The id of the entity.
-
- Returns:
- True if the schedule has an entity with the id or False if not.
- """
- try:
- self._GetById(schedule, entity_id)
- has = True
- except KeyError:
- has = False
- return has
-
- def _MergeEntities(self, a, b):
- """Tries to merge the two entities.
-
- Note: This method must be overwritten in a subclass if _MergeSameId or
- _MergeDifferentId are to be used.
-
- Args:
- a: The entity from the old schedule.
- b: The entity from the new schedule.
-
- Returns:
- The merged migrated entity.
-
- Raises:
- MergeError: The entities were not able to be merged.
- """
- raise NotImplementedError()
-
- def _Migrate(self, entity, schedule, newid):
- """Migrates the entity to the merge schedule.
-
- This involves copying the entity and updating any ids to point to the
- corresponding entities in the merged schedule. If newid is True then
- a unique id is generated for the migrated entity using the original id
- as a prefix.
-
- Note: This method must be overwritten in a subclass if _MergeSameId or
- _MergeDifferentId are to be used.
-
- Args:
- entity: The entity to migrate.
- schedule: The schedule from the FeedMerger that contains the entity.
- newid: Whether to generate a new id (True) or keep the original (False).
-
- Returns:
- The migrated entity.
- """
- raise NotImplementedError()
-
- def _Add(self, a, b, migrated):
- """Adds the migrated entity to the merged schedule.
-
- If a and b are both not None, it means that a and b were merged to create
- migrated. If one of a or b is None, it means that the other was not merged
- but has been migrated. This mapping is registered with the FeedMerger.
-
- Note: This method must be overwritten in a subclass if _MergeSameId or
- _MergeDifferentId are to be used.
-
- Args:
- a: The original entity from the old schedule.
- b: The original entity from the new schedule.
- migrated: The migrated entity for the merged schedule.
- """
- raise NotImplementedError()
-
- def _GetId(self, entity):
- """Returns the id of the given entity.
-
- Note: This method must be overwritten in a subclass if _MergeSameId or
- _MergeDifferentId are to be used.
-
- Args:
- entity: The entity.
-
- Returns:
- The id of the entity as a string or None.
- """
- raise NotImplementedError()
-
- def MergeDataSets(self):
- """Merge the data sets.
-
- This method is called in FeedMerger.MergeSchedule().
-
- Note: This method must be overwritten in a subclass.
-
- Returns:
- A boolean which is False if the dataset was unable to be merged and
- as a result the entire merge should be aborted. In this case, the problem
- will have been reported using the FeedMerger's problem reporter.
- """
- raise NotImplementedError()
-
- def GetMergeStats(self):
- """Returns some merge statistics.
-
- These are given as a tuple (merged, not_merged_a, not_merged_b) where
- "merged" is the number of merged entities, "not_merged_a" is the number of
- entities from the old schedule that were not merged and "not_merged_b" is
- the number of entities from the new schedule that were not merged.
-
- The return value can also be None. This means that there are no statistics
- for this entity type.
-
- The statistics are only available after MergeDataSets() has been called.
-
- Returns:
- Either the statistics tuple or None.
- """
- return (self._num_merged, self._num_not_merged_a, self._num_not_merged_b)
-
-
-class AgencyMerger(DataSetMerger):
- """A DataSetMerger for agencies."""
-
- ENTITY_TYPE_NAME = 'agency'
- FILE_NAME = 'agency.txt'
- DATASET_NAME = 'Agencies'
-
- def _GetIter(self, schedule):
- return schedule.GetAgencyList()
-
- def _GetById(self, schedule, agency_id):
- return schedule.GetAgency(agency_id)
-
- def _MergeEntities(self, a, b):
- """Merges two agencies.
-
- To be merged, they are required to have the same id, name, url and
- timezone. The remaining language attribute is taken from the new agency.
-
- Args:
- a: The first agency.
- b: The second agency.
-
- Returns:
- The merged agency.
-
- Raises:
- MergeError: The agencies could not be merged.
- """
-
- def _MergeAgencyId(a_agency_id, b_agency_id):
- """Merge two agency ids.
-
- The only difference between this and _MergeIdentical() is that the values
- None and '' are regarded as being the same.
-
- Args:
- a_agency_id: The first agency id.
- b_agency_id: The second agency id.
-
- Returns:
- The merged agency id.
-
- Raises:
- MergeError: The agency ids could not be merged.
- """
- a_agency_id = a_agency_id or None
- b_agency_id = b_agency_id or None
- return self._MergeIdentical(a_agency_id, b_agency_id)
-
- scheme = {'agency_id': _MergeAgencyId,
- 'agency_name': self._MergeIdentical,
- 'agency_url': self._MergeIdentical,
- 'agency_timezone': self._MergeIdentical}
- return self._SchemedMerge(scheme, a, b)
-
- def _Migrate(self, entity, schedule, newid):
- a = transitfeed.Agency(field_dict=entity)
- if newid:
- a.agency_id = self.feed_merger.GenerateId(entity.agency_id)
- return a
-
- def _Add(self, a, b, migrated):
- self.feed_merger.Register(a, b, migrated)
- self.feed_merger.merged_schedule.AddAgencyObject(migrated)
-
- def _GetId(self, entity):
- return entity.agency_id
-
- def MergeDataSets(self):
- self._MergeSameId()
- return True
-
-
-class StopMerger(DataSetMerger):
- """A DataSetMerger for stops.
-
- Attributes:
- largest_stop_distance: The largest distance allowed between stops that
- will be merged in metres.
- """
-
- ENTITY_TYPE_NAME = 'stop'
- FILE_NAME = 'stops.txt'
- DATASET_NAME = 'Stops'
-
- largest_stop_distance = 10.0
-
- def __init__(self, feed_merger):
- DataSetMerger.__init__(self, feed_merger)
- self._merged = []
- self._a_not_merged = []
- self._b_not_merged = []
-
- def SetLargestStopDistance(self, distance):
- """Sets largest_stop_distance."""
- self.largest_stop_distance = distance
-
- def _GetIter(self, schedule):
- return schedule.GetStopList()
-
- def _GetById(self, schedule, stop_id):
- return schedule.GetStop(stop_id)
-
- def _MergeEntities(self, a, b):
- """Merges two stops.
-
- For the stops to be merged, they must have:
- - the same stop_id
- - the same stop_name (case insensitive)
- - the same zone_id
- - locations less than largest_stop_distance apart
-    The other attributes can have arbitrary changes. The merged attributes are
-    taken from the new stop.
-
- Args:
- a: The first stop.
- b: The second stop.
-
- Returns:
- The merged stop.
-
- Raises:
- MergeError: The stops could not be merged.
- """
- distance = transitfeed.ApproximateDistanceBetweenStops(a, b)
- if distance > self.largest_stop_distance:
- raise MergeError("Stops are too far apart: %.1fm "
- "(largest_stop_distance is %.1fm)." %
- (distance, self.largest_stop_distance))
- scheme = {'stop_id': self._MergeIdentical,
- 'stop_name': self._MergeIdenticalCaseInsensitive,
- 'zone_id': self._MergeIdentical,
- 'location_type': self._MergeIdentical}
- return self._SchemedMerge(scheme, a, b)
-
- def _Migrate(self, entity, schedule, newid):
- migrated_stop = transitfeed.Stop(field_dict=entity)
- if newid:
- migrated_stop.stop_id = self.feed_merger.GenerateId(entity.stop_id)
- return migrated_stop
-
- def _Add(self, a, b, migrated_stop):
- self.feed_merger.Register(a, b, migrated_stop)
-
- # The migrated_stop will be added to feed_merger.merged_schedule later
- # since adding must be done after the zone_ids have been finalized.
- if a and b:
- self._merged.append((a, b, migrated_stop))
- elif a:
- self._a_not_merged.append((a, migrated_stop))
- elif b:
- self._b_not_merged.append((b, migrated_stop))
-
- def _GetId(self, entity):
- return entity.stop_id
-
- def MergeDataSets(self):
- num_merged = self._MergeSameId()
- fm = self.feed_merger
-
- # now we do all the zone_id and parent_station mapping
-
- # the zone_ids for merged stops can be preserved
- for (a, b, merged_stop) in self._merged:
- assert a.zone_id == b.zone_id
- fm.a_zone_map[a.zone_id] = a.zone_id
- fm.b_zone_map[b.zone_id] = b.zone_id
- merged_stop.zone_id = a.zone_id
- if merged_stop.parent_station:
- # Merged stop has a parent. Update it to be the parent it had in b.
- parent_in_b = fm.b_schedule.GetStop(b.parent_station)
- merged_stop.parent_station = fm.b_merge_map[parent_in_b].stop_id
- fm.merged_schedule.AddStopObject(merged_stop)
-
- self._UpdateAndMigrateUnmerged(self._a_not_merged, fm.a_zone_map,
- fm.a_merge_map, fm.a_schedule)
- self._UpdateAndMigrateUnmerged(self._b_not_merged, fm.b_zone_map,
- fm.b_merge_map, fm.b_schedule)
-
- print 'Stops merged: %d of %d, %d' % (
- num_merged,
- len(fm.a_schedule.GetStopList()),
- len(fm.b_schedule.GetStopList()))
- return True
-
- def _UpdateAndMigrateUnmerged(self, not_merged_stops, zone_map, merge_map,
- schedule):
- """Correct references in migrated unmerged stops and add to merged_schedule.
-
- For stops migrated from one of the input feeds to the output feed update the
- parent_station and zone_id references to point to objects in the output
- feed. Then add the migrated stop to the new schedule.
-
- Args:
- not_merged_stops: list of stops from one input feed that have not been
- merged
- zone_map: map from zone_id in the input feed to zone_id in the output feed
- merge_map: map from Stop objects in the input feed to Stop objects in
- the output feed
- schedule: the input Schedule object
- """
- # for the unmerged stops, we use an already mapped zone_id if possible
- # if not, we generate a new one and add it to the map
- for stop, migrated_stop in not_merged_stops:
- if stop.zone_id in zone_map:
- migrated_stop.zone_id = zone_map[stop.zone_id]
- else:
- migrated_stop.zone_id = self.feed_merger.GenerateId(stop.zone_id)
- zone_map[stop.zone_id] = migrated_stop.zone_id
- if stop.parent_station:
- parent_original = schedule.GetStop(stop.parent_station)
- migrated_stop.parent_station = merge_map[parent_original].stop_id
- self.feed_merger.merged_schedule.AddStopObject(migrated_stop)
-
-
-class RouteMerger(DataSetMerger):
- """A DataSetMerger for routes."""
-
- ENTITY_TYPE_NAME = 'route'
- FILE_NAME = 'routes.txt'
- DATASET_NAME = 'Routes'
-
- def _GetIter(self, schedule):
- return schedule.GetRouteList()
-
- def _GetById(self, schedule, route_id):
- return schedule.GetRoute(route_id)
-
- def _MergeEntities(self, a, b):
- scheme = {'route_short_name': self._MergeIdentical,
- 'route_long_name': self._MergeIdentical,
- 'agency_id': self._MergeSameAgency,
- 'route_type': self._MergeIdentical,
- 'route_id': self._MergeIdentical,
- 'route_url': self._MergeOptional,
- 'route_color': self._MergeOptional,
- 'route_text_color': self._MergeOptional}
- return self._SchemedMerge(scheme, a, b)
-
- def _Migrate(self, entity, schedule, newid):
- migrated_route = transitfeed.Route(field_dict=entity)
- if newid:
- migrated_route.route_id = self.feed_merger.GenerateId(entity.route_id)
- if entity.agency_id:
- original_agency = schedule.GetAgency(entity.agency_id)
- else:
- original_agency = schedule.GetDefaultAgency()
-
- migrated_agency = self.feed_merger.GetMergedObject(original_agency)
- migrated_route.agency_id = migrated_agency.agency_id
- return migrated_route
-
- def _Add(self, a, b, migrated_route):
- self.feed_merger.Register(a, b, migrated_route)
- self.feed_merger.merged_schedule.AddRouteObject(migrated_route)
-
- def _GetId(self, entity):
- return entity.route_id
-
- def MergeDataSets(self):
- self._MergeSameId()
- return True
-
-
-class ServicePeriodMerger(DataSetMerger):
- """A DataSetMerger for service periods.
-
- Attributes:
- require_disjoint_calendars: A boolean specifying whether to require
- disjoint calendars when merging (True) or not (False).
- """
-
- ENTITY_TYPE_NAME = 'service period'
- FILE_NAME = 'calendar.txt/calendar_dates.txt'
- DATASET_NAME = 'Service Periods'
-
- def __init__(self, feed_merger):
- DataSetMerger.__init__(self, feed_merger)
- self.require_disjoint_calendars = True
-
- def _ReportSameIdButNotMerged(self, entity_id, reason):
- pass
-
- def _GetIter(self, schedule):
- return schedule.GetServicePeriodList()
-
- def _GetById(self, schedule, service_id):
- return schedule.GetServicePeriod(service_id)
-
- def _MergeEntities(self, a, b):
- """Tries to merge two service periods.
-
- Note: Currently this just raises a MergeError since service periods cannot
- be merged.
-
- Args:
- a: The first service period.
- b: The second service period.
-
- Returns:
- The merged service period.
-
- Raises:
- MergeError: When the service periods could not be merged.
- """
- raise MergeError('Cannot merge service periods')
-
- def _Migrate(self, original_service_period, schedule, newid):
- migrated_service_period = transitfeed.ServicePeriod()
- migrated_service_period.day_of_week = list(
- original_service_period.day_of_week)
- migrated_service_period.start_date = original_service_period.start_date
- migrated_service_period.end_date = original_service_period.end_date
- migrated_service_period.date_exceptions = dict(
- original_service_period.date_exceptions)
- if newid:
- migrated_service_period.service_id = self.feed_merger.GenerateId(
- original_service_period.service_id)
- else:
- migrated_service_period.service_id = original_service_period.service_id
- return migrated_service_period
-
- def _Add(self, a, b, migrated_service_period):
- self.feed_merger.Register(a, b, migrated_service_period)
- self.feed_merger.merged_schedule.AddServicePeriodObject(
- migrated_service_period)
-
- def _GetId(self, entity):
- return entity.service_id
-
- def MergeDataSets(self):
- if self.require_disjoint_calendars and not self.CheckDisjointCalendars():
- self.feed_merger.problem_reporter.CalendarsNotDisjoint(self)
- return False
- self._MergeSameId()
- self.feed_merger.problem_reporter.MergeNotImplemented(self)
- return True
-
- def DisjoinCalendars(self, cutoff):
- """Forces the old and new calendars to be disjoint about a cutoff date.
-
- This truncates the service periods of the old schedule so that service
- stops one day before the given cutoff date and truncates the new schedule
- so that service only begins on the cutoff date.
-
- Args:
- cutoff: The cutoff date as a string in YYYYMMDD format. The timezone
- is the same as used in the calendar.txt file.
- """
-
- def TruncatePeriod(service_period, start, end):
- """Truncate the service period to into the range [start, end].
-
- Args:
- service_period: The service period to truncate.
- start: The start date as a string in YYYYMMDD format.
- end: The end date as a string in YYYYMMDD format.
- """
- service_period.start_date = max(service_period.start_date, start)
- service_period.end_date = min(service_period.end_date, end)
- dates_to_delete = []
- for k in service_period.date_exceptions:
- if (k < start) or (k > end):
- dates_to_delete.append(k)
- for k in dates_to_delete:
- del service_period.date_exceptions[k]
-
- # find the date one day before cutoff
- year = int(cutoff[:4])
- month = int(cutoff[4:6])
- day = int(cutoff[6:8])
- cutoff_date = datetime.date(year, month, day)
- one_day_delta = datetime.timedelta(days=1)
- before = (cutoff_date - one_day_delta).strftime('%Y%m%d')
-
- for a in self.feed_merger.a_schedule.GetServicePeriodList():
- TruncatePeriod(a, 0, before)
- for b in self.feed_merger.b_schedule.GetServicePeriodList():
- TruncatePeriod(b, cutoff, '9'*8)
-
- def CheckDisjointCalendars(self):
- """Check whether any old service periods intersect with any new ones.
-
-    This is a rather coarse check based on
-    transitfeed.ServicePeriod.GetDateRange.
-
- Returns:
- True if the calendars are disjoint or False if not.
- """
- # TODO: Do an exact check here.
-
- a_service_periods = self.feed_merger.a_schedule.GetServicePeriodList()
- b_service_periods = self.feed_merger.b_schedule.GetServicePeriodList()
-
- for a_service_period in a_service_periods:
- a_start, a_end = a_service_period.GetDateRange()
- for b_service_period in b_service_periods:
- b_start, b_end = b_service_period.GetDateRange()
- overlap_start = max(a_start, b_start)
- overlap_end = min(a_end, b_end)
- if overlap_end >= overlap_start:
- return False
- return True
-
- def GetMergeStats(self):
- return None
-
-
-class FareMerger(DataSetMerger):
- """A DataSetMerger for fares."""
-
- ENTITY_TYPE_NAME = 'fare'
- FILE_NAME = 'fare_attributes.txt'
- DATASET_NAME = 'Fares'
-
- def _GetIter(self, schedule):
- return schedule.GetFareList()
-
- def _GetById(self, schedule, fare_id):
- return schedule.GetFare(fare_id)
-
- def _MergeEntities(self, a, b):
- """Merges the fares if all the attributes are the same."""
- scheme = {'price': self._MergeIdentical,
- 'currency_type': self._MergeIdentical,
- 'payment_method': self._MergeIdentical,
- 'transfers': self._MergeIdentical,
- 'transfer_duration': self._MergeIdentical}
- return self._SchemedMerge(scheme, a, b)
-
- def _Migrate(self, original_fare, schedule, newid):
- migrated_fare = transitfeed.Fare(
- field_list=original_fare.GetFieldValuesTuple())
- if newid:
- migrated_fare.fare_id = self.feed_merger.GenerateId(
- original_fare.fare_id)
- return migrated_fare
-
- def _Add(self, a, b, migrated_fare):
- self.feed_merger.Register(a, b, migrated_fare)
- self.feed_merger.merged_schedule.AddFareObject(migrated_fare)
-
- def _GetId(self, fare):
- return fare.fare_id
-
- def MergeDataSets(self):
- num_merged = self._MergeSameId()
- print 'Fares merged: %d of %d, %d' % (
- num_merged,
- len(self.feed_merger.a_schedule.GetFareList()),
- len(self.feed_merger.b_schedule.GetFareList()))
- return True
-
-
-class ShapeMerger(DataSetMerger):
- """A DataSetMerger for shapes.
-
- In this implementation, merging shapes means just taking the new shape.
- The only conditions for a merge are that the shape_ids are the same and
- the endpoints of the old and new shapes are no further than
- largest_shape_distance apart.
-
- Attributes:
- largest_shape_distance: The largest distance between the endpoints of two
- shapes allowed for them to be merged in metres.
- """
-
- ENTITY_TYPE_NAME = 'shape'
- FILE_NAME = 'shapes.txt'
- DATASET_NAME = 'Shapes'
-
- largest_shape_distance = 10.0
-
- def SetLargestShapeDistance(self, distance):
- """Sets largest_shape_distance."""
- self.largest_shape_distance = distance
-
- def _GetIter(self, schedule):
- return schedule.GetShapeList()
-
- def _GetById(self, schedule, shape_id):
- return schedule.GetShape(shape_id)
-
- def _MergeEntities(self, a, b):
- """Merges the shapes by taking the new shape.
-
- Args:
- a: The first transitfeed.Shape instance.
- b: The second transitfeed.Shape instance.
-
- Returns:
- The merged shape.
-
- Raises:
- MergeError: If the ids are different or if the endpoints are further
- than largest_shape_distance apart.
- """
- if a.shape_id != b.shape_id:
- raise MergeError('shape_id must be the same')
-
- distance = max(ApproximateDistanceBetweenPoints(a.points[0][:2],
- b.points[0][:2]),
- ApproximateDistanceBetweenPoints(a.points[-1][:2],
- b.points[-1][:2]))
- if distance > self.largest_shape_distance:
- raise MergeError('The shape endpoints are too far away: %.1fm '
- '(largest_shape_distance is %.1fm)' %
- (distance, self.largest_shape_distance))
-
- return self._Migrate(b, self.feed_merger.b_schedule, False)
-
- def _Migrate(self, original_shape, schedule, newid):
- migrated_shape = transitfeed.Shape(original_shape.shape_id)
- if newid:
- migrated_shape.shape_id = self.feed_merger.GenerateId(
- original_shape.shape_id)
- for (lat, lon, dist) in original_shape.points:
- migrated_shape.AddPoint(lat=lat, lon=lon, distance=dist)
- return migrated_shape
-
- def _Add(self, a, b, migrated_shape):
- self.feed_merger.Register(a, b, migrated_shape)
- self.feed_merger.merged_schedule.AddShapeObject(migrated_shape)
-
- def _GetId(self, shape):
- return shape.shape_id
-
- def MergeDataSets(self):
- self._MergeSameId()
- return True
-
-
-class TripMerger(DataSetMerger):
- """A DataSetMerger for trips.
-
- This implementation makes no attempt to merge trips, it simply migrates
- them all to the merged feed.
- """
-
- ENTITY_TYPE_NAME = 'trip'
- FILE_NAME = 'trips.txt'
- DATASET_NAME = 'Trips'
-
- def _ReportSameIdButNotMerged(self, trip_id, reason):
- pass
-
- def _GetIter(self, schedule):
- return schedule.GetTripList()
-
- def _GetById(self, schedule, trip_id):
- return schedule.GetTrip(trip_id)
-
- def _MergeEntities(self, a, b):
- """Raises a MergeError because currently trips cannot be merged."""
- raise MergeError('Cannot merge trips')
-
- def _Migrate(self, original_trip, schedule, newid):
- migrated_trip = transitfeed.Trip(field_dict=original_trip)
- # Make new trip_id first. AddTripObject reports a problem if it conflicts
- # with an existing id.
- if newid:
- migrated_trip.trip_id = self.feed_merger.GenerateId(
- original_trip.trip_id)
- # Need to add trip to schedule before copying stoptimes
- self.feed_merger.merged_schedule.AddTripObject(migrated_trip,
- validate=False)
-
- if schedule == self.feed_merger.a_schedule:
- merge_map = self.feed_merger.a_merge_map
- else:
- merge_map = self.feed_merger.b_merge_map
-
- original_route = schedule.GetRoute(original_trip.route_id)
- migrated_trip.route_id = merge_map[original_route].route_id
-
- original_service_period = schedule.GetServicePeriod(
- original_trip.service_id)
- migrated_trip.service_id = merge_map[original_service_period].service_id
-
- if original_trip.block_id:
- migrated_trip.block_id = '%s_%s' % (
- self.feed_merger.GetScheduleName(schedule),
- original_trip.block_id)
-
- if original_trip.shape_id:
- original_shape = schedule.GetShape(original_trip.shape_id)
- migrated_trip.shape_id = merge_map[original_shape].shape_id
-
- for original_stop_time in original_trip.GetStopTimes():
- migrated_stop_time = transitfeed.StopTime(
- None,
- merge_map[original_stop_time.stop],
- original_stop_time.arrival_time,
- original_stop_time.departure_time,
- original_stop_time.stop_headsign,
- original_stop_time.pickup_type,
- original_stop_time.drop_off_type,
- original_stop_time.shape_dist_traveled,
- original_stop_time.arrival_secs,
- original_stop_time.departure_secs)
- migrated_trip.AddStopTimeObject(migrated_stop_time)
-
- for headway_period in original_trip.GetHeadwayPeriodTuples():
- migrated_trip.AddHeadwayPeriod(*headway_period)
-
- return migrated_trip
-
- def _Add(self, a, b, migrated_trip):
- # Validate now, since it wasn't done in _Migrate
- migrated_trip.Validate(self.feed_merger.merged_schedule.problem_reporter)
- self.feed_merger.Register(a, b, migrated_trip)
-
- def _GetId(self, trip):
- return trip.trip_id
-
- def MergeDataSets(self):
- self._MergeSameId()
- self.feed_merger.problem_reporter.MergeNotImplemented(self)
- return True
-
- def GetMergeStats(self):
- return None
-
-
-class FareRuleMerger(DataSetMerger):
- """A DataSetMerger for fare rules."""
-
- ENTITY_TYPE_NAME = 'fare rule'
- FILE_NAME = 'fare_rules.txt'
- DATASET_NAME = 'Fare Rules'
-
- def MergeDataSets(self):
- """Merge the fare rule datasets.
-
- The fare rules are first migrated. Merging is done by removing any
- duplicate rules.
-
- Returns:
- True since fare rules can always be merged.
- """
- rules = set()
- for (schedule, merge_map, zone_map) in ([self.feed_merger.a_schedule,
- self.feed_merger.a_merge_map,
- self.feed_merger.a_zone_map],
- [self.feed_merger.b_schedule,
- self.feed_merger.b_merge_map,
- self.feed_merger.b_zone_map]):
- for fare in schedule.GetFareList():
- for fare_rule in fare.GetFareRuleList():
- fare_id = merge_map[schedule.GetFare(fare_rule.fare_id)].fare_id
- route_id = (fare_rule.route_id and
- merge_map[schedule.GetRoute(fare_rule.route_id)].route_id)
- origin_id = (fare_rule.origin_id and
- zone_map[fare_rule.origin_id])
- destination_id = (fare_rule.destination_id and
- zone_map[fare_rule.destination_id])
- contains_id = (fare_rule.contains_id and
- zone_map[fare_rule.contains_id])
- rules.add((fare_id, route_id, origin_id, destination_id,
- contains_id))
- for fare_rule_tuple in rules:
- migrated_fare_rule = transitfeed.FareRule(*fare_rule_tuple)
- self.feed_merger.merged_schedule.AddFareRuleObject(migrated_fare_rule)
-
- if rules:
- self.feed_merger.problem_reporter.FareRulesBroken(self)
- print 'Fare Rules: union has %d fare rules' % len(rules)
- return True
-
- def GetMergeStats(self):
- return None
-
-
-class FeedMerger(object):
- """A class for merging two whole feeds.
-
- This class takes two instances of transitfeed.Schedule and uses
- DataSetMerger instances to merge the feeds and produce the resultant
- merged feed.
-
- Attributes:
- a_schedule: The old transitfeed.Schedule instance.
- b_schedule: The new transitfeed.Schedule instance.
- problem_reporter: The merge problem reporter.
- merged_schedule: The merged transitfeed.Schedule instance.
- a_merge_map: A map from old entities to merged entities.
- b_merge_map: A map from new entities to merged entities.
- a_zone_map: A map from old zone ids to merged zone ids.
- b_zone_map: A map from new zone ids to merged zone ids.
- """
-
- def __init__(self, a_schedule, b_schedule, merged_schedule,
- problem_reporter=None):
- """Initialise the merger.
-
- Once this initialiser has been called, a_schedule and b_schedule should
- not be modified.
-
- Args:
- a_schedule: The old schedule, an instance of transitfeed.Schedule.
- b_schedule: The new schedule, an instance of transitfeed.Schedule.
- problem_reporter: The problem reporter, an instance of
- transitfeed.ProblemReporterBase. This can be None in
- which case the ExceptionProblemReporter is used.
- """
- self.a_schedule = a_schedule
- self.b_schedule = b_schedule
- self.merged_schedule = merged_schedule
- self.a_merge_map = {}
- self.b_merge_map = {}
- self.a_zone_map = {}
- self.b_zone_map = {}
- self._mergers = []
- self._idnum = max(self._FindLargestIdPostfixNumber(self.a_schedule),
- self._FindLargestIdPostfixNumber(self.b_schedule))
-
- if problem_reporter is not None:
- self.problem_reporter = problem_reporter
- else:
- self.problem_reporter = ExceptionProblemReporter()
-
- def _FindLargestIdPostfixNumber(self, schedule):
- """Finds the largest integer used as the ending of an id in the schedule.
-
- Args:
- schedule: The schedule to check.
-
- Returns:
- The maximum integer used as an ending for an id.
- """
- postfix_number_re = re.compile('(\d+)$')
-
- def ExtractPostfixNumber(entity_id):
- """Try to extract an integer from the end of entity_id.
-
- If entity_id is None or if there is no integer ending the id, zero is
- returned.
-
- Args:
- entity_id: An id string or None.
-
- Returns:
- An integer ending the entity_id or zero.
- """
- if entity_id is None:
- return 0
- match = postfix_number_re.search(entity_id)
- if match is not None:
- return int(match.group(1))
- else:
- return 0
-
- id_data_sets = {'agency_id': schedule.GetAgencyList(),
- 'stop_id': schedule.GetStopList(),
- 'route_id': schedule.GetRouteList(),
- 'trip_id': schedule.GetTripList(),
- 'service_id': schedule.GetServicePeriodList(),
- 'fare_id': schedule.GetFareList(),
- 'shape_id': schedule.GetShapeList()}
-
- max_postfix_number = 0
- for id_name, entity_list in id_data_sets.items():
- for entity in entity_list:
- entity_id = getattr(entity, id_name)
- postfix_number = ExtractPostfixNumber(entity_id)
- max_postfix_number = max(max_postfix_number, postfix_number)
- return max_postfix_number
-
- def GetScheduleName(self, schedule):
- """Returns a single letter identifier for the schedule.
-
- This only works for the old and new schedules which return 'a' and 'b'
- respectively. The purpose of such identifiers is for generating ids.
-
- Args:
- schedule: The transitfeed.Schedule instance.
-
- Returns:
- The schedule identifier.
-
- Raises:
- KeyError: schedule is not the old or new schedule.
- """
- return {self.a_schedule: 'a', self.b_schedule: 'b'}[schedule]
-
- def GenerateId(self, entity_id=None):
- """Generate a unique id based on the given id.
-
- This is done by appending a counter which is then incremented. The
- counter is initialised at the maximum number used as an ending for
- any id in the old and new schedules.
-
- Args:
- entity_id: The base id string. This is allowed to be None.
-
- Returns:
- The generated id.
- """
- self._idnum += 1
- if entity_id:
- return '%s_merged_%d' % (entity_id, self._idnum)
- else:
- return 'merged_%d' % self._idnum
-
- def Register(self, a, b, migrated_entity):
- """Registers a merge mapping.
-
- If a and b are both not None, this means that entities a and b were merged
- to produce migrated_entity. If one of a or b are not None, then it means
- it was not merged but simply migrated.
-
- The effect of a call to register is to update a_merge_map and b_merge_map
- according to the merge.
-
- Args:
- a: The entity from the old feed or None.
- b: The entity from the new feed or None.
- migrated_entity: The migrated entity.
- """
- if a is not None: self.a_merge_map[a] = migrated_entity
- if b is not None: self.b_merge_map[b] = migrated_entity
-
- def AddMerger(self, merger):
- """Add a DataSetMerger to be run by Merge().
-
- Args:
- merger: The DataSetMerger instance.
- """
- self._mergers.append(merger)
-
- def AddDefaultMergers(self):
- """Adds the default DataSetMergers defined in this module."""
- self.AddMerger(AgencyMerger(self))
- self.AddMerger(StopMerger(self))
- self.AddMerger(RouteMerger(self))
- self.AddMerger(ServicePeriodMerger(self))
- self.AddMerger(FareMerger(self))
- self.AddMerger(ShapeMerger(self))
- self.AddMerger(TripMerger(self))
- self.AddMerger(FareRuleMerger(self))
-
- def GetMerger(self, cls):
- """Looks for an added DataSetMerger derived from the given class.
-
- Args:
- cls: A class derived from DataSetMerger.
-
- Returns:
- The matching DataSetMerger instance.
-
- Raises:
- LookupError: No matching DataSetMerger has been added.
- """
- for merger in self._mergers:
- if isinstance(merger, cls):
- return merger
- raise LookupError('No matching DataSetMerger found')
-
- def GetMergerList(self):
- """Returns the list of DataSetMerger instances that have been added."""
- return self._mergers
-
- def MergeSchedules(self):
- """Merge the schedules.
-
- This is done by running the DataSetMergers that have been added with
- AddMerger() in the order that they were added.
-
- Returns:
- True if the merge was successful.
- """
- for merger in self._mergers:
- if not merger.MergeDataSets():
- return False
- return True
-
- def GetMergedSchedule(self):
- """Returns the merged schedule.
-
- This will be empty before MergeSchedules() is called.
-
- Returns:
- The merged schedule.
- """
- return self.merged_schedule
-
- def GetMergedObject(self, original):
- """Returns an object that represents original in the merged schedule."""
- # TODO: I think this would be better implemented by adding a private
- # attribute to the objects in the original feeds
- merged = (self.a_merge_map.get(original) or
- self.b_merge_map.get(original))
- if merged:
- return merged
- else:
- raise KeyError()
-
-
-def main():
- """Run the merge driver program."""
- usage = \
-"""%prog [options] <input GTFS a.zip> <input GTFS b.zip> <output GTFS.zip>
-
-Merges <input GTFS a.zip> and <input GTFS b.zip> into a new GTFS file
-<output GTFS.zip>.
-"""
-
- parser = util.OptionParserLongError(
- usage=usage, version='%prog '+transitfeed.__version__)
- parser.add_option('--cutoff_date',
- dest='cutoff_date',
- default=None,
- help='a transition date from the old feed to the new '
- 'feed in the format YYYYMMDD')
- parser.add_option('--largest_stop_distance',
- dest='largest_stop_distance',
- default=StopMerger.largest_stop_distance,
- help='the furthest distance two stops can be apart and '
- 'still be merged, in metres')
- parser.add_option('--largest_shape_distance',
- dest='largest_shape_distance',
- default=ShapeMerger.largest_shape_distance,
- help='the furthest distance the endpoints of two shapes '
- 'can be apart and the shape still be merged, in metres')
- parser.add_option('--html_output_path',
- dest='html_output_path',
- default='merge-results.html',
- help='write the html output to this file')
- parser.add_option('--no_browser',
- dest='no_browser',
- action='store_true',
- help='prevents the merge results from being opened in a '
- 'browser')
- parser.add_option('-m', '--memory_db', dest='memory_db', action='store_true',
- help='Use in-memory sqlite db instead of a temporary file. '
- 'It is faster but uses more RAM.')
- parser.set_defaults(memory_db=False)
- (options, args) = parser.parse_args()
-
- if len(args) != 3:
- parser.error('You did not provide all required command line arguments.')
-
- old_feed_path = os.path.abspath(args[0])
- new_feed_path = os.path.abspath(args[1])
- merged_feed_path = os.path.abspath(args[2])
-
- if old_feed_path.find("IWantMyCrash") != -1:
- # See test/testmerge.py
- raise Exception('For testing the merge crash handler.')
-
- a_schedule = LoadWithoutErrors(old_feed_path, options.memory_db)
- b_schedule = LoadWithoutErrors(new_feed_path, options.memory_db)
- merged_schedule = transitfeed.Schedule(memory_db=options.memory_db)
- problem_reporter = HTMLProblemReporter()
- feed_merger = FeedMerger(a_schedule, b_schedule, merged_schedule,
- problem_reporter)
- feed_merger.AddDefaultMergers()
-
- feed_merger.GetMerger(StopMerger).SetLargestStopDistance(float(
- options.largest_stop_distance))
- feed_merger.GetMerger(ShapeMerger).SetLargestShapeDistance(float(
- options.largest_shape_distance))
-
- if options.cutoff_date is not None:
- service_period_merger = feed_merger.GetMerger(ServicePeriodMerger)
- service_period_merger.DisjoinCalendars(options.cutoff_date)
-
- if feed_merger.MergeSchedules():
- feed_merger.GetMergedSchedule().WriteGoogleTransitFeed(merged_feed_path)
- else:
- merged_feed_path = None
-
- output_file = file(options.html_output_path, 'w')
- problem_reporter.WriteOutput(output_file, feed_merger,
- old_feed_path, new_feed_path, merged_feed_path)
- output_file.close()
-
- if not options.no_browser:
- webbrowser.open('file://%s' % os.path.abspath(options.html_output_path))
-
-
-if __name__ == '__main__':
- util.RunWithCrashHandler(main)
-
--- a/origin-src/transitfeed-1.2.5/build/scripts-2.6/schedule_viewer.py
+++ /dev/null
@@ -1,524 +1,1 @@
-#!/usr/bin/python
-# Copyright (C) 2007 Google Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-An example application that uses the transitfeed module.
-
-You must provide a Google Maps API key.
-"""
-
-
-import BaseHTTPServer, sys, urlparse
-import bisect
-from gtfsscheduleviewer.marey_graph import MareyGraph
-import gtfsscheduleviewer
-import mimetypes
-import os.path
-import re
-import signal
-import simplejson
-import socket
-import time
-import transitfeed
-from transitfeed import util
-import urllib
-
-
-# By default Windows kills Python with Ctrl+Break. Instead make Ctrl+Break
-# raise a KeyboardInterrupt.
-if hasattr(signal, 'SIGBREAK'):
- signal.signal(signal.SIGBREAK, signal.default_int_handler)
-
-
-mimetypes.add_type('text/plain', '.vbs')
-
-
-class ResultEncoder(simplejson.JSONEncoder):
- def default(self, obj):
- try:
- iterable = iter(obj)
- except TypeError:
- pass
- else:
- return list(iterable)
- return simplejson.JSONEncoder.default(self, obj)
-
-# Code taken from
-# http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/425210/index_txt
-# An alternate approach is shown at
-# http://mail.python.org/pipermail/python-list/2003-July/212751.html
-# but it requires multiple threads. A sqlite object can only be used from one
-# thread.
-class StoppableHTTPServer(BaseHTTPServer.HTTPServer):
- def server_bind(self):
- BaseHTTPServer.HTTPServer.server_bind(self)
- self.socket.settimeout(1)
- self._run = True
-
- def get_request(self):
- while self._run:
- try:
- sock, addr = self.socket.accept()
- sock.settimeout(None)
- return (sock, addr)
- except socket.timeout:
- pass
-
- def stop(self):
- self._run = False
-
- def serve(self):
- while self._run:
- self.handle_request()
-
-
-def StopToTuple(stop):
- """Return tuple as expected by javascript function addStopMarkerFromList"""
- return (stop.stop_id, stop.stop_name, float(stop.stop_lat),
- float(stop.stop_lon), stop.location_type)
-
-
-class ScheduleRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler):
- def do_GET(self):
- scheme, host, path, x, params, fragment = urlparse.urlparse(self.path)
- parsed_params = {}
- for k in params.split('&'):
- k = urllib.unquote(k)
- if '=' in k:
- k, v = k.split('=', 1)
- parsed_params[k] = unicode(v, 'utf8')
- else:
- parsed_params[k] = ''
-
- if path == '/':
- return self.handle_GET_home()
-
- m = re.match(r'/json/([a-z]{1,64})', path)
- if m:
- handler_name = 'handle_json_GET_%s' % m.group(1)
- handler = getattr(self, handler_name, None)
- if callable(handler):
- return self.handle_json_wrapper_GET(handler, parsed_params)
-
- # Restrict allowable file names to prevent relative path attacks etc
- m = re.match(r'/file/([a-z0-9_-]{1,64}\.?[a-z0-9_-]{1,64})$', path)
- if m and m.group(1):
- try:
- f, mime_type = self.OpenFile(m.group(1))
- return self.handle_static_file_GET(f, mime_type)
- except IOError, e:
- print "Error: unable to open %s" % m.group(1)
- # Ignore and treat as 404
-
- m = re.match(r'/([a-z]{1,64})', path)
- if m:
- handler_name = 'handle_GET_%s' % m.group(1)
- handler = getattr(self, handler_name, None)
- if callable(handler):
- return handler(parsed_params)
-
- return self.handle_GET_default(parsed_params, path)
-
- def OpenFile(self, filename):
- """Try to open filename in the static files directory of this server.
- Return a tuple (file object, string mime_type) or raise an exception."""
- (mime_type, encoding) = mimetypes.guess_type(filename)
- assert mime_type
- # A crude guess of when we should use binary mode. Without it non-unix
- # platforms may corrupt binary files.
- if mime_type.startswith('text/'):
- mode = 'r'
- else:
- mode = 'rb'
- return open(os.path.join(self.server.file_dir, filename), mode), mime_type
-
- def handle_GET_default(self, parsed_params, path):
- self.send_error(404)
-
- def handle_static_file_GET(self, fh, mime_type):
- content = fh.read()
- self.send_response(200)
- self.send_header('Content-Type', mime_type)
- self.send_header('Content-Length', str(len(content)))
- self.end_headers()
- self.wfile.write(content)
-
- def AllowEditMode(self):
- return False
-
- def handle_GET_home(self):
- schedule = self.server.schedule
- (min_lat, min_lon, max_lat, max_lon) = schedule.GetStopBoundingBox()
- forbid_editing = ('true', 'false')[self.AllowEditMode()]
-
- agency = ', '.join(a.agency_name for a in schedule.GetAgencyList()).encode('utf-8')
-
- key = self.server.key
- host = self.server.host
-
- # A very simple template system. For a fixed set of values replace [xxx]
- # with the value of local variable xxx
- f, _ = self.OpenFile('index.html')
- content = f.read()
- for v in ('agency', 'min_lat', 'min_lon', 'max_lat', 'max_lon', 'key',
- 'host', 'forbid_editing'):
- content = content.replace('[%s]' % v, str(locals()[v]))
-
- self.send_response(200)
- self.send_header('Content-Type', 'text/html')
- self.send_header('Content-Length', str(len(content)))
- self.end_headers()
- self.wfile.write(content)
-
- def handle_json_GET_routepatterns(self, params):
- """Given a route_id generate a list of patterns of the route. For each
- pattern include some basic information and a few sample trips."""
- schedule = self.server.schedule
- route = schedule.GetRoute(params.get('route', None))
- if not route:
- self.send_error(404)
- return
- time = int(params.get('time', 0))
- sample_size = 3 # For each pattern return the start time for this many trips
-
- pattern_id_trip_dict = route.GetPatternIdTripDict()
- patterns = []
-
- for pattern_id, trips in pattern_id_trip_dict.items():
- time_stops = trips[0].GetTimeStops()
- if not time_stops:
- continue
- has_non_zero_trip_type = False
- for trip in trips:
- if trip['trip_type'] and trip['trip_type'] != '0':
- has_non_zero_trip_type = True
- name = u'%s to %s, %d stops' % (time_stops[0][2].stop_name, time_stops[-1][2].stop_name, len(time_stops))
- transitfeed.SortListOfTripByTime(trips)
-
- num_trips = len(trips)
- if num_trips <= sample_size:
- start_sample_index = 0
- num_after_sample = 0
- else:
- # Will return sample_size trips that start after the 'time' param.
-
- # Linear search because I couldn't find a built-in way to do a binary
- # search with a custom key.
- start_sample_index = len(trips)
- for i, trip in enumerate(trips):
- if trip.GetStartTime() >= time:
- start_sample_index = i
- break
-
- num_after_sample = num_trips - (start_sample_index + sample_size)
- if num_after_sample < 0:
- # Less than sample_size trips start after 'time' so return all the
- # last sample_size trips.
- num_after_sample = 0
- start_sample_index = num_trips - sample_size
-
- sample = []
- for t in trips[start_sample_index:start_sample_index + sample_size]:
- sample.append( (t.GetStartTime(), t.trip_id) )
-
- patterns.append((name, pattern_id, start_sample_index, sample,
- num_after_sample, (0,1)[has_non_zero_trip_type]))
-
- patterns.sort()
- return patterns
-
- def handle_json_wrapper_GET(self, handler, parsed_params):
- """Call handler and output the return value in JSON."""
- schedule = self.server.schedule
- result = handler(parsed_params)
- content = ResultEncoder().encode(result)
- self.send_response(200)
- self.send_header('Content-Type', 'text/plain')
- self.send_header('Content-Length', str(len(content)))
- self.end_headers()
- self.wfile.write(content)
-
- def handle_json_GET_routes(self, params):
- """Return a list of all routes."""
- schedule = self.server.schedule
- result = []
- for r in schedule.GetRouteList():
- result.append( (r.route_id, r.route_short_name, r.route_long_name) )
- result.sort(key = lambda x: x[1:3])
- return result
-
- def handle_json_GET_routerow(self, params):
- schedule = self.server.schedule
- route = schedule.GetRoute(params.get('route', None))
- return [transitfeed.Route._FIELD_NAMES, route.GetFieldValuesTuple()]
-
- def handle_json_GET_triprows(self, params):
- """Return a list of rows from the feed file that are related to this
- trip."""
- schedule = self.server.schedule
- try:
- trip = schedule.GetTrip(params.get('trip', None))
- except KeyError:
- # if a non-existent trip is searched for, then return nothing
- return
- route = schedule.GetRoute(trip.route_id)
- trip_row = dict(trip.iteritems())
- route_row = dict(route.iteritems())
- return [['trips.txt', trip_row], ['routes.txt', route_row]]
-
- def handle_json_GET_tripstoptimes(self, params):
- schedule = self.server.schedule
- try:
- trip = schedule.GetTrip(params.get('trip'))
- except KeyError:
- # if a non-existent trip is searched for, then return nothing
- return
- time_stops = trip.GetTimeStops()
- stops = []
- times = []
- for arr,dep,stop in time_stops:
- stops.append(StopToTuple(stop))
- times.append(arr)
- return [stops, times]
-
- def handle_json_GET_tripshape(self, params):
- schedule = self.server.schedule
- try:
- trip = schedule.GetTrip(params.get('trip'))
- except KeyError:
- # if a non-existent trip is searched for, then return nothing
- return
- points = []
- if trip.shape_id:
- shape = schedule.GetShape(trip.shape_id)
- for (lat, lon, dist) in shape.points:
- points.append((lat, lon))
- else:
- time_stops = trip.GetTimeStops()
- for arr,dep,stop in time_stops:
- points.append((stop.stop_lat, stop.stop_lon))
- return points
-
- def handle_json_GET_neareststops(self, params):
- """Return a list of the nearest 'limit' stops to 'lat', 'lon'"""
- schedule = self.server.schedule
- lat = float(params.get('lat'))
- lon = float(params.get('lon'))
- limit = int(params.get('limit'))
- stops = schedule.GetNearestStops(lat=lat, lon=lon, n=limit)
- return [StopToTuple(s) for s in stops]
-
- def handle_json_GET_boundboxstops(self, params):
- """Return a list of up to 'limit' stops within bounding box with 'n','e'
- and 's','w' in the NE and SW corners. Does not handle boxes crossing
- longitude line 180."""
- schedule = self.server.schedule
- n = float(params.get('n'))
- e = float(params.get('e'))
- s = float(params.get('s'))
- w = float(params.get('w'))
- limit = int(params.get('limit'))
- stops = schedule.GetStopsInBoundingBox(north=n, east=e, south=s, west=w, n=limit)
- return [StopToTuple(s) for s in stops]
-
- def handle_json_GET_stopsearch(self, params):
- schedule = self.server.schedule
- query = params.get('q', None).lower()
- matches = []
- for s in schedule.GetStopList():
- if s.stop_id.lower().find(query) != -1 or s.stop_name.lower().find(query) != -1:
- matches.append(StopToTuple(s))
- return matches
-
- def handle_json_GET_stoptrips(self, params):
- """Given a stop_id and time in seconds since midnight return the next
- trips to visit the stop."""
- schedule = self.server.schedule
- stop = schedule.GetStop(params.get('stop', None))
- time = int(params.get('time', 0))
- time_trips = stop.GetStopTimeTrips(schedule)
- time_trips.sort() # OPT: use bisect.insort to make this O(N*ln(N)) -> O(N)
- # Keep the first 5 after param 'time'.
- # Need to make a tuple to find the correct bisect point
- time_trips = time_trips[bisect.bisect_left(time_trips, (time, 0)):]
- time_trips = time_trips[:5]
- # TODO: combine times for a route to show next 2 departure times
- result = []
- for time, (trip, index), tp in time_trips:
- headsign = None
- # Find the most recent headsign from the StopTime objects
- for stoptime in trip.GetStopTimes()[index::-1]:
- if stoptime.stop_headsign:
- headsign = stoptime.stop_headsign
- break
- # If stop_headsign isn't found, look for a trip_headsign
- if not headsign:
- headsign = trip.trip_headsign
- route = schedule.GetRoute(trip.route_id)
- trip_name = ''
- if route.route_short_name:
- trip_name += route.route_short_name
- if route.route_long_name:
- if len(trip_name):
- trip_name += " - "
- trip_name += route.route_long_name
- if headsign:
- trip_name += " (Direction: %s)" % headsign
-
- result.append((time, (trip.trip_id, trip_name, trip.service_id), tp))
- return result
-
- def handle_GET_ttablegraph(self,params):
- """Draw a Marey graph in SVG for a pattern (collection of trips in a route
- that visit the same sequence of stops)."""
- schedule = self.server.schedule
- marey = MareyGraph()
- trip = schedule.GetTrip(params.get('trip', None))
- route = schedule.GetRoute(trip.route_id)
- height = int(params.get('height', 300))
-
- if not route:
- print 'no such route'
- self.send_error(404)
- return
-
- pattern_id_trip_dict = route.GetPatternIdTripDict()
- pattern_id = trip.pattern_id
- if pattern_id not in pattern_id_trip_dict:
- print 'no pattern %s found in %s' % (pattern_id, pattern_id_trip_dict.keys())
- self.send_error(404)
- return
- triplist = pattern_id_trip_dict[pattern_id]
-
- pattern_start_time = min((t.GetStartTime() for t in triplist))
- pattern_end_time = max((t.GetEndTime() for t in triplist))
-
- marey.SetSpan(pattern_start_time, pattern_end_time)
- marey.Draw(triplist[0].GetPattern(), triplist, height)
-
- content = marey.Draw()
-
- self.send_response(200)
- self.send_header('Content-Type', 'image/svg+xml')
- self.send_header('Content-Length', str(len(content)))
- self.end_headers()
- self.wfile.write(content)
-
-
-def FindPy2ExeBase():
- """If this is running in py2exe return the install directory else return
- None"""
- # py2exe puts gtfsscheduleviewer in library.zip. For py2exe setup.py is
- # configured to put the data next to library.zip.
- windows_ending = gtfsscheduleviewer.__file__.find('\\library.zip\\')
- if windows_ending != -1:
- return transitfeed.__file__[:windows_ending]
- else:
- return None
-
-
-def FindDefaultFileDir():
- """Return the path of the directory containing the static files. By default
- the directory is called 'files'. The location depends on where setup.py put
- it."""
- base = FindPy2ExeBase()
- if base:
- return os.path.join(base, 'schedule_viewer_files')
- else:
- # For all other distributions 'files' is in the gtfsscheduleviewer
- # directory.
- base = os.path.dirname(gtfsscheduleviewer.__file__) # Strip __init__.py
- return os.path.join(base, 'files')
-
-
-def GetDefaultKeyFilePath():
- """In py2exe return absolute path of file in the base directory and in all
- other distributions return relative path 'key.txt'"""
- windows_base = FindPy2ExeBase()
- if windows_base:
- return os.path.join(windows_base, 'key.txt')
- else:
- return 'key.txt'
-
-
-def main(RequestHandlerClass = ScheduleRequestHandler):
- usage = \
-'''%prog [options] [<input GTFS.zip>]
-
-Runs a webserver that lets you explore a <input GTFS.zip> in your browser.
-
-If <input GTFS.zip> is omitted the filename is read from the console. Dragging
-a file into the console may enter the filename.
-'''
- parser = util.OptionParserLongError(
- usage=usage, version='%prog '+transitfeed.__version__)
- parser.add_option('--feed_filename', '--feed', dest='feed_filename',
- help='file name of feed to load')
- parser.add_option('--key', dest='key',
- help='Google Maps API key or the name '
- 'of a text file that contains an API key')
- parser.add_option('--host', dest='host', help='Host name of Google Maps')
- parser.add_option('--port', dest='port', type='int',
- help='port on which to listen')
- parser.add_option('--file_dir', dest='file_dir',
- help='directory containing static files')
- parser.add_option('-n', '--noprompt', action='store_false',
- dest='manual_entry',
- help='disable interactive prompts')
- parser.set_defaults(port=8765,
- host='maps.google.com',
- file_dir=FindDefaultFileDir(),
- manual_entry=True)
- (options, args) = parser.parse_args()
-
- if not os.path.isfile(os.path.join(options.file_dir, 'index.html')):
- print "Can't find index.html with --file_dir=%s" % options.file_dir
- exit(1)
-
- if not options.feed_filename and len(args) == 1:
- options.feed_filename = args[0]
-
- if not options.feed_filename and options.manual_entry:
- options.feed_filename = raw_input('Enter Feed Location: ').strip('"')
-
- default_key_file = GetDefaultKeyFilePath()
- if not options.key and os.path.isfile(default_key_file):
- options.key = open(default_key_file).read().strip()
-
- if options.key and os.path.isfile(options.key):
- options.key = open(options.key).read().strip()
-
- schedule = transitfeed.Schedule(problem_reporter=transitfeed.ProblemReporter())
- print 'Loading data from feed "%s"...' % options.feed_filename
- print '(this may take a few minutes for larger cities)'
- schedule.Load(options.feed_filename)
-
- server = StoppableHTTPServer(server_address=('', options.port),
- RequestHandlerClass=RequestHandlerClass)
- server.key = options.key
- server.schedule = schedule
- server.file_dir = options.file_dir
- server.host = options.host
- server.feed_path = options.feed_filename
-
- print ("To view, point your browser at http://localhost:%d/" %
- (server.server_port))
- server.serve_forever()
-
-
-if __name__ == '__main__':
- main()
-
--- a/origin-src/transitfeed-1.2.5/build/scripts-2.6/shape_importer.py
+++ /dev/null
@@ -1,291 +1,1 @@
-#!/usr/bin/python
-#
-# Copyright 2007 Google Inc. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""A utility program to help add shapes to an existing GTFS feed.
-
-Requires the ogr python package.
-"""
-
-__author__ = 'chris.harrelson.code@gmail.com (Chris Harrelson)'
-
-import csv
-import glob
-import ogr
-import os
-import shutil
-import sys
-import tempfile
-import transitfeed
-from transitfeed import shapelib
-from transitfeed import util
-import zipfile
-
-
-class ShapeImporterError(Exception):
- pass
-
-
-def PrintColumns(shapefile):
- """
- Print the columns of layer 0 of the shapefile to the screen.
- """
- ds = ogr.Open(shapefile)
- layer = ds.GetLayer(0)
- if len(layer) == 0:
- raise ShapeImporterError("Layer 0 has no elements!")
-
- feature = layer.GetFeature(0)
- print "%d fields" % feature.GetFieldCount()
- for j in range(0, feature.GetFieldCount()):
- print '--' + feature.GetFieldDefnRef(j).GetName() + \
- ': ' + feature.GetFieldAsString(j)
-
-
-def AddShapefile(shapefile, graph, key_cols):
- """
- Adds shapes found in the given shape filename to the given polyline
- graph object.
- """
- ds = ogr.Open(shapefile)
- layer = ds.GetLayer(0)
-
- for i in range(0, len(layer)):
- feature = layer.GetFeature(i)
-
- geometry = feature.GetGeometryRef()
-
- if key_cols:
- key_list = []
- for col in key_cols:
- key_list.append(str(feature.GetField(col)))
- shape_id = '-'.join(key_list)
- else:
- shape_id = '%s-%d' % (shapefile, i)
-
- poly = shapelib.Poly(name=shape_id)
- for j in range(0, geometry.GetPointCount()):
- (lat, lng) = (round(geometry.GetY(j), 15), round(geometry.GetX(j), 15))
- poly.AddPoint(shapelib.Point.FromLatLng(lat, lng))
- graph.AddPoly(poly)
-
- return graph
-
-
-def GetMatchingShape(pattern_poly, trip, matches, max_distance, verbosity=0):
- """
- Tries to find a matching shape for the given pattern Poly object,
- trip, and set of possibly matching Polys from which to choose a match.
- """
- if len(matches) == 0:
- print ('No matching shape found within max-distance %d for trip %s '
- % (max_distance, trip.trip_id))
- return None
-
- if verbosity >= 1:
- for match in matches:
- print "match: size %d" % match.GetNumPoints()
- scores = [(pattern_poly.GreedyPolyMatchDist(match), match)
- for match in matches]
-
- scores.sort()
-
- if scores[0][0] > max_distance:
- print ('No matching shape found within max-distance %d for trip %s '
- '(min score was %f)'
- % (max_distance, trip.trip_id, scores[0][0]))
- return None
-
- return scores[0][1]
-
-def AddExtraShapes(extra_shapes_txt, graph):
- """
- Add extra shapes into our input set by parsing them out of a GTFS-formatted
- shapes.txt file. Useful for manually adding lines to a shape file, since it's
- a pain to edit .shp files.
- """
-
- print "Adding extra shapes from %s" % extra_shapes_txt
- tmpdir = tempfile.mkdtemp()
- try:
- shutil.copy(extra_shapes_txt, os.path.join(tmpdir, 'shapes.txt'))
- loader = transitfeed.ShapeLoader(tmpdir)
- schedule = loader.Load()
- for shape in schedule.GetShapeList():
- print "Adding extra shape: %s" % shape.shape_id
- graph.AddPoly(ShapeToPoly(shape))
- finally:
- if tmpdir:
- shutil.rmtree(tmpdir)
-
-
-# Note: this method lives here to avoid cross-dependencies between
-# shapelib and transitfeed.
-def ShapeToPoly(shape):
- poly = shapelib.Poly(name=shape.shape_id)
- for lat, lng, distance in shape.points:
- point = shapelib.Point.FromLatLng(round(lat, 15), round(lng, 15))
- poly.AddPoint(point)
- return poly
-
-
-def ValidateArgs(options_parser, options, args):
- if not (args and options.source_gtfs and options.dest_gtfs):
- options_parser.error("You must specify a source and dest GTFS file, "
- "and at least one source shapefile")
-
-
-def DefineOptions():
- usage = \
-"""%prog [options] --source_gtfs=<input GTFS.zip> --dest_gtfs=<output GTFS.zip>\
- <input.shp> [<input.shp>...]
-
-Try to match shapes in one or more SHP files to trips in a GTFS file."""
- options_parser = util.OptionParserLongError(
- usage=usage, version='%prog '+transitfeed.__version__)
- options_parser.add_option("--print_columns",
- action="store_true",
- default=False,
- dest="print_columns",
- help="Print column names in shapefile DBF and exit")
- options_parser.add_option("--keycols",
- default="",
- dest="keycols",
- help="Comma-separated list of the column names used "
- "to index shape ids")
- options_parser.add_option("--max_distance",
- type="int",
- default=150,
- dest="max_distance",
- help="Max distance from a shape to which to match")
- options_parser.add_option("--source_gtfs",
- default="",
- dest="source_gtfs",
- metavar="FILE",
- help="Read input GTFS from FILE")
- options_parser.add_option("--dest_gtfs",
- default="",
- dest="dest_gtfs",
- metavar="FILE",
- help="Write output GTFS with shapes to FILE")
- options_parser.add_option("--extra_shapes",
- default="",
- dest="extra_shapes",
- metavar="FILE",
- help="Extra shapes.txt (CSV) formatted file")
- options_parser.add_option("--verbosity",
- type="int",
- default=0,
- dest="verbosity",
- help="Verbosity level. Higher is more verbose")
- return options_parser
-
-
-def main(key_cols):
- print 'Parsing shapefile(s)...'
- graph = shapelib.PolyGraph()
- for arg in args:
- print ' ' + arg
- AddShapefile(arg, graph, key_cols)
-
- if options.extra_shapes:
- AddExtraShapes(options.extra_shapes, graph)
-
- print 'Loading GTFS from %s...' % options.source_gtfs
- schedule = transitfeed.Loader(options.source_gtfs).Load()
- shape_count = 0
- pattern_count = 0
-
- verbosity = options.verbosity
-
- print 'Matching shapes to trips...'
- for route in schedule.GetRouteList():
- print 'Processing route', route.route_short_name
- patterns = route.GetPatternIdTripDict()
- for pattern_id, trips in patterns.iteritems():
- pattern_count += 1
- pattern = trips[0].GetPattern()
-
- poly_points = [shapelib.Point.FromLatLng(p.stop_lat, p.stop_lon)
- for p in pattern]
- if verbosity >= 2:
- print "\npattern %d, %d points:" % (pattern_id, len(poly_points))
- for i, (stop, point) in enumerate(zip(pattern, poly_points)):
- print "Stop %d '%s': %s" % (i + 1, stop.stop_name, point.ToLatLng())
-
- # First, try to find polys that run all the way from
- # the start of the trip to the end.
- matches = graph.FindMatchingPolys(poly_points[0], poly_points[-1],
- options.max_distance)
- if not matches:
- # Try to find a path through the graph, joining
- # multiple edges to find a path that covers all the
- # points in the trip. Some shape files are structured
- # this way, with a polyline for each segment between
- # stations instead of a polyline covering an entire line.
- shortest_path = graph.FindShortestMultiPointPath(poly_points,
- options.max_distance,
- verbosity=verbosity)
- if shortest_path:
- matches = [shortest_path]
- else:
- matches = []
-
- pattern_poly = shapelib.Poly(poly_points)
- shape_match = GetMatchingShape(pattern_poly, trips[0],
- matches, options.max_distance,
- verbosity=verbosity)
- if shape_match:
- shape_count += 1
- # Rename shape for readability.
- shape_match = shapelib.Poly(points=shape_match.GetPoints(),
- name="shape_%d" % shape_count)
- for trip in trips:
- try:
- shape = schedule.GetShape(shape_match.GetName())
- except KeyError:
- shape = transitfeed.Shape(shape_match.GetName())
- for point in shape_match.GetPoints():
- (lat, lng) = point.ToLatLng()
- shape.AddPoint(lat, lng)
- schedule.AddShapeObject(shape)
- trip.shape_id = shape.shape_id
-
- print "Matched %d shapes out of %d patterns" % (shape_count, pattern_count)
- schedule.WriteGoogleTransitFeed(options.dest_gtfs)
-
-
-if __name__ == '__main__':
- # Import psyco if available for better performance.
- try:
- import psyco
- psyco.full()
- except ImportError:
- pass
-
- options_parser = DefineOptions()
- (options, args) = options_parser.parse_args()
-
- ValidateArgs(options_parser, options, args)
-
- if options.print_columns:
- for arg in args:
- PrintColumns(arg)
- sys.exit(0)
-
- key_cols = options.keycols.split(',')
-
- main(key_cols)
-
--- a/origin-src/transitfeed-1.2.5/build/scripts-2.6/unusual_trip_filter.py
+++ /dev/null
@@ -1,157 +1,1 @@
-#!/usr/bin/python
-# Copyright (C) 2007 Google Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-Filters out trips which are not on the default routes and
- sets their trip_type attribute accordingly.
-
-For usage information run unusual_trip_filter.py --help
-"""
-
-__author__ = 'Jiri Semecky <jiri.semecky@gmail.com>'
-
-import codecs
-import os
-import os.path
-import sys
-import time
-import transitfeed
-from transitfeed import util
-
-
-class UnusualTripFilter(object):
- """Class filtering trips going on unusual paths.
-
- Those are usually trips going to/from depot or changing to another route
- in the middle. Sets the 'trip_type' attribute of the trips.txt dataset
- so that non-standard trips are marked as special (value 1)
- instead of regular (default value 0).
- """
-
- def __init__ (self, threshold=0.1, force=False, quiet=False, route_type=None):
- self._threshold = threshold
- self._quiet = quiet
- self._force = force
- if route_type in transitfeed.Route._ROUTE_TYPE_NAMES:
- self._route_type = transitfeed.Route._ROUTE_TYPE_NAMES[route_type]
- elif route_type is None:
- self._route_type = None
- else:
- self._route_type = int(route_type)
-
- def filter_line(self, route):
- """Mark unusual trips for the given route."""
- if self._route_type is not None and self._route_type != route.route_type:
- self.info('Skipping route %s due to different route_type value (%s)' %
- (route['route_id'], route['route_type']))
- return
- self.info('Filtering infrequent trips for route %s.' % route.route_id)
- trip_count = len(route.trips)
- for pattern_id, pattern in route.GetPatternIdTripDict().items():
- ratio = float(1.0 * len(pattern) / trip_count)
- if not self._force:
- if (ratio < self._threshold):
- self.info("\t%d trips on route %s with headsign '%s' recognized "
- "as unusual (ratio %f)" %
- (len(pattern),
- route['route_short_name'],
- pattern[0]['trip_headsign'],
- ratio))
- for trip in pattern:
- trip.trip_type = 1 # special
- self.info("\t\tsetting trip_type of trip %s as special" %
- trip.trip_id)
- else:
- self.info("\t%d trips on route %s with headsign '%s' recognized "
- "as %s (ratio %f)" %
- (len(pattern),
- route['route_short_name'],
- pattern[0]['trip_headsign'],
- ('regular', 'unusual')[ratio < self._threshold],
- ratio))
- for trip in pattern:
- trip.trip_type = ('0','1')[ratio < self._threshold]
- self.info("\t\tsetting trip_type of trip %s as %s" %
- (trip.trip_id,
- ('regular', 'unusual')[ratio < self._threshold]))
-
- def filter(self, dataset):
- """Mark unusual trips for all the routes in the dataset."""
- self.info('Going to filter infrequent routes in the dataset')
- for route in dataset.routes.values():
- self.filter_line(route)
-
- def info(self, text):
- if not self._quiet:
- print text.encode("utf-8")
-
-
-def main():
- usage = \
-'''%prog [options] <GTFS.zip>
-
-Filters out trips which do not follow the most common stop sequences and
-sets their trip_type attribute accordingly. <GTFS.zip> is overwritten with
-the modified GTFS file unless the --output option is used.
-'''
- parser = util.OptionParserLongError(
- usage=usage, version='%prog '+transitfeed.__version__)
- parser.add_option('-o', '--output', dest='output', metavar='FILE',
- help='Name of the output GTFS file (writing to input feed if omitted).')
- parser.add_option('-m', '--memory_db', dest='memory_db', action='store_true',
- help='Force use of in-memory sqlite db.')
- parser.add_option('-t', '--threshold', default=0.1,
- dest='threshold', type='float',
- help='Frequency threshold for considering pattern as non-regular.')
- parser.add_option('-r', '--route_type', default=None,
- dest='route_type', type='string',
- help='Filter only selected route type (specified by number '
- 'or one of the following names: ' + \
- ', '.join(transitfeed.Route._ROUTE_TYPE_NAMES) + ').')
- parser.add_option('-f', '--override_trip_type', default=False,
- dest='override_trip_type', action='store_true',
- help='Forces overwrite of current trip_type values.')
- parser.add_option('-q', '--quiet', dest='quiet',
- default=False, action='store_true',
- help='Suppress information output.')
-
- (options, args) = parser.parse_args()
- if len(args) != 1:
- parser.error('You must provide the path of a single feed.')
-
- filter = UnusualTripFilter(float(options.threshold),
- force=options.override_trip_type,
- quiet=options.quiet,
- route_type=options.route_type)
- feed_name = args[0]
- feed_name = feed_name.strip()
- filter.info('Loading %s' % feed_name)
- loader = transitfeed.Loader(feed_name, extra_validation=True,
- memory_db=options.memory_db)
- data = loader.Load()
- filter.filter(data)
- print 'Saving data'
-
- # Write the result
- if options.output is None:
- data.WriteGoogleTransitFeed(feed_name)
- else:
- data.WriteGoogleTransitFeed(options.output)
-
-
-if __name__ == '__main__':
- util.RunWithCrashHandler(main)
-
--- a/origin-src/transitfeed-1.2.5/examples/filter_unused_stops.py
+++ /dev/null
@@ -1,63 +1,1 @@
-#!/usr/bin/python2.5
-# Copyright (C) 2007 Google Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-"""Filter the unused stops out of a transit feed file."""
-
-import optparse
-import sys
-import transitfeed
-
-
-def main():
- parser = optparse.OptionParser(
- usage="usage: %prog [options] input_feed output_feed",
- version="%prog "+transitfeed.__version__)
- parser.add_option("-l", "--list_removed", dest="list_removed",
- default=False,
- action="store_true",
- help="Print removed stops to stdout")
- (options, args) = parser.parse_args()
- if len(args) != 2:
- print >>sys.stderr, parser.format_help()
- print >>sys.stderr, "\n\nYou must provide input_feed and output_feed\n\n"
- sys.exit(2)
- input_path = args[0]
- output_path = args[1]
-
- loader = transitfeed.Loader(input_path)
- schedule = loader.Load()
-
- print "Removing unused stops..."
- removed = 0
- for stop_id, stop in schedule.stops.items():
- if not stop.GetTrips(schedule):
- removed += 1
- del schedule.stops[stop_id]
- if options.list_removed:
- print "Removing %s (%s)" % (stop_id, stop.stop_name)
- if removed == 0:
- print "No unused stops."
- elif removed == 1:
- print "Removed 1 stop"
- else:
- print "Removed %d stops" % removed
-
- schedule.WriteGoogleTransitFeed(output_path)
-
-if __name__ == "__main__":
- main()
-
--- a/origin-src/transitfeed-1.2.5/examples/google_random_queries.py
+++ /dev/null
@@ -1,225 +1,1 @@
-#!/usr/bin/python2.5
-# Copyright (C) 2007 Google Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-"""Output Google Transit URLs for queries near stops.
-
-The output can be used to speed up manual testing. Load the output from this
-file and then open many of the links in new tabs. In each result check that the
-polyline looks okay (no unnecessary loops, no jumps to a far away location) and
-look at the time of each leg. Also check that the route names and headsigns are
-formatted correctly and not redundant.
-"""
-
-from datetime import datetime
-from datetime import timedelta
-import math
-import optparse
-import os.path
-import random
-import sys
-import transitfeed
-import urllib
-import urlparse
-
-
-def Distance(lat0, lng0, lat1, lng1):
- """
- Compute the geodesic distance in meters between two points on the
- surface of the Earth. The latitude and longitude angles are in
- degrees.
-
- Approximate geodesic distance function (Haversine Formula) assuming
- a perfect sphere of radius 6367 km (see "What are some algorithms
- for calculating the distance between 2 points?" in the GIS Faq at
- http://www.census.gov/geo/www/faq-index.html). The approximate
- radius is adequate for our needs here, but a more sophisticated
- geodesic function should be used if greater accuracy is required
- (see "When is it NOT okay to assume the Earth is a sphere?" in the
- same faq).
- """
- deg2rad = math.pi / 180.0
- lat0 = lat0 * deg2rad
- lng0 = lng0 * deg2rad
- lat1 = lat1 * deg2rad
- lng1 = lng1 * deg2rad
- dlng = lng1 - lng0
- dlat = lat1 - lat0
- a = math.sin(dlat*0.5)
- b = math.sin(dlng*0.5)
- a = a * a + math.cos(lat0) * math.cos(lat1) * b * b
- c = 2.0 * math.atan2(math.sqrt(a), math.sqrt(1.0 - a))
- return 6367000.0 * c
-
-
-def AddNoiseToLatLng(lat, lng):
- """Add up to 500m of error to each coordinate of lat, lng."""
- m_per_tenth_lat = Distance(lat, lng, lat + 0.1, lng)
- m_per_tenth_lng = Distance(lat, lng, lat, lng + 0.1)
- lat_per_100m = 1 / m_per_tenth_lat * 10
- lng_per_100m = 1 / m_per_tenth_lng * 10
- return (lat + (lat_per_100m * 5 * (random.random() * 2 - 1)),
- lng + (lng_per_100m * 5 * (random.random() * 2 - 1)))
-
-
-def GetRandomLocationsNearStops(schedule):
- """Return a list of (lat, lng) tuples."""
- locations = []
- for s in schedule.GetStopList():
- locations.append(AddNoiseToLatLng(s.stop_lat, s.stop_lon))
- return locations
-
-
-def GetRandomDatetime():
- """Return a datetime in the next week."""
- seconds_offset = random.randint(0, 60 * 60 * 24 * 7)
- dt = datetime.today() + timedelta(seconds=seconds_offset)
- return dt.replace(second=0, microsecond=0)
-
-
-def FormatLatLng(lat_lng):
- """Format a (lat, lng) tuple into a string for maps.google.com."""
- return "%0.6f,%0.6f" % lat_lng
-
-
-def LatLngsToGoogleUrl(source, destination, dt):
- """Return a URL for routing between two (lat, lng) at a datetime."""
- params = {"saddr": FormatLatLng(source),
- "daddr": FormatLatLng(destination),
- "time": dt.strftime("%I:%M%p"),
- "date": dt.strftime("%Y-%m-%d"),
- "dirflg": "r",
- "ie": "UTF8",
- "oe": "UTF8"}
- url = urlparse.urlunsplit(("http", "maps.google.com", "/maps",
- urllib.urlencode(params), ""))
- return url
-
-
-def LatLngsToGoogleLink(source, destination):
- """Return a string "<a ..." for a trip at a random time."""
- dt = GetRandomDatetime()
- return "<a href='%s'>from:%s to:%s on %s</a>" % (
- LatLngsToGoogleUrl(source, destination, dt),
- FormatLatLng(source), FormatLatLng(destination),
- dt.ctime())
-
-
-def WriteOutput(title, locations, limit, f):
- """Write html to f for up to limit trips between locations.
-
- Args:
- title: String used in html title
- locations: list of (lat, lng) tuples
- limit: maximum number of queries in the html
- f: a file object
- """
- output_prefix = """
-<html>
-<head>
-<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
-<title>%(title)s</title>
-</head>
-<body>
-Random queries for %(title)s<p>
-This list of random queries should speed up important manual testing. Here are
-some things to check when looking at the results of a query.
-<ul>
- <li> Check the agency attribution under the trip results:
- <ul>
- <li> has correct name and spelling of the agency
- <li> opens a page with general information about the service
- </ul>
- <li> For each alternate trip check that each of these is reasonable:
- <ul>
- <li> the total time of the trip
- <li> the time for each leg. Bad data frequently results in a leg going a long
- way in a few minutes.
- <li> the icons and mode names (Tram, Bus, etc) are correct for each leg
- <li> the route names and headsigns are correctly formatted and not
- redundant.
- For a good example see <a
- href="http://code.google.com/transit/spec/transit_feed_specification.html#transitScreenshots">the
- screenshots in the Google Transit Feed Specification</a>.
- <li> the shape line on the map looks correct. Make sure the polyline does
- not zig-zag, loop, skip stops or jump far away unless the trip does the
- same thing.
- <li> the route is active on the day the trip planner returns
- </ul>
-</ul>
-If you find a problem be sure to save the URL. This file is generated randomly.
-<ol>
-""" % locals()
-
- output_suffix = """
-</ol>
-</body>
-</html>
-""" % locals()
-
- f.write(transitfeed.EncodeUnicode(output_prefix))
- for source, destination in zip(locations[0:limit], locations[1:limit + 1]):
- f.write(transitfeed.EncodeUnicode("<li>%s\n" %
- LatLngsToGoogleLink(source, destination)))
- f.write(transitfeed.EncodeUnicode(output_suffix))
-
-
-def ParentAndBaseName(path):
- """Given a path return only the parent name and file name as a string."""
- dirname, basename = os.path.split(path)
- dirname = dirname.rstrip(os.path.sep)
- if os.path.altsep:
- dirname = dirname.rstrip(os.path.altsep)
- _, parentname = os.path.split(dirname)
- return os.path.join(parentname, basename)
-
-
-def main():
- parser = optparse.OptionParser(
- usage="usage: %prog [options] feed_filename output_filename",
- version="%prog "+transitfeed.__version__)
- parser.add_option("-l", "--limit", dest="limit", type="int",
- help="Maximum number of URLs to generate")
- parser.add_option('-o', '--output', dest='output', metavar='FILE',
- help='write html output to FILE')
- parser.set_defaults(output="google_random_queries.html", limit=50)
- (options, args) = parser.parse_args()
- if len(args) != 1:
- print >>sys.stderr, parser.format_help()
- print >>sys.stderr, "\n\nYou must provide the path of a single feed\n\n"
- sys.exit(2)
- feed_path = args[0]
-
- # ProblemReporter prints problems on console.
- loader = transitfeed.Loader(feed_path, problems=transitfeed.ProblemReporter(),
- load_stop_times=False)
- schedule = loader.Load()
- locations = GetRandomLocationsNearStops(schedule)
- random.shuffle(locations)
- agencies = ", ".join([a.agency_name for a in schedule.GetAgencyList()])
- title = "%s (%s)" % (agencies, ParentAndBaseName(feed_path))
-
- WriteOutput(title,
- locations,
- options.limit,
- open(options.output, "w"))
- print ("Load %s in your web browser. It contains more instructions." %
- options.output)
-
-
-if __name__ == "__main__":
- main()
-
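A note for anyone porting the removed Python 2.5 examples: the `Distance` haversine helper above translates almost line-for-line to Python 3. A minimal sketch, keeping the 6367 km spherical radius the original assumes (the function name here is just illustrative):

```python
import math

def haversine_m(lat0, lng0, lat1, lng1):
    """Approximate great-circle distance in meters between two points
    given in degrees, on a sphere of radius 6367 km (as in the example)."""
    lat0, lng0, lat1, lng1 = map(math.radians, (lat0, lng0, lat1, lng1))
    # Haversine formula: a is the squared half-chord length between the points.
    a = (math.sin((lat1 - lat0) / 2.0) ** 2
         + math.cos(lat0) * math.cos(lat1) * math.sin((lng1 - lng0) / 2.0) ** 2)
    return 6367000.0 * 2.0 * math.atan2(math.sqrt(a), math.sqrt(1.0 - a))
```

One degree of longitude at the equator comes out near 111 km, matching the original function's output.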
--- a/origin-src/transitfeed-1.2.5/examples/shuttle_from_xmlfeed.py
+++ /dev/null
@@ -1,134 +1,1 @@
-#!/usr/bin/python2.5
-# Copyright (C) 2007 Google Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Google has a homegrown database for managing the company shuttle. The
-database dumps its contents in XML. This script converts the proprietary XML
-format into a Google Transit Feed Specification file.
-"""
-
-import datetime
-from optparse import OptionParser
-import os.path
-import re
-import transitfeed
-import urllib
-
-try:
- import xml.etree.ElementTree as ET # python 2.5
-except ImportError, e:
- import elementtree.ElementTree as ET # older pythons
-
-
-class NoUnusedStopExceptionProblemReporter(
- transitfeed.ExceptionProblemReporter):
- """The company shuttle database has a few unused stops for reasons unrelated
- to this script. Ignore them.
- """
- def UnusedStop(self, stop_id, stop_name):
- pass
-
-def SaveFeed(input, output):
- tree = ET.parse(urllib.urlopen(input))
-
- schedule = transitfeed.Schedule()
- service_period = schedule.GetDefaultServicePeriod()
- service_period.SetWeekdayService()
- service_period.SetStartDate('20070314')
- service_period.SetEndDate('20071231')
- # Holidays for 2007
- service_period.SetDateHasService('20070528', has_service=False)
- service_period.SetDateHasService('20070704', has_service=False)
- service_period.SetDateHasService('20070903', has_service=False)
- service_period.SetDateHasService('20071122', has_service=False)
- service_period.SetDateHasService('20071123', has_service=False)
- service_period.SetDateHasService('20071224', has_service=False)
- service_period.SetDateHasService('20071225', has_service=False)
- service_period.SetDateHasService('20071226', has_service=False)
- service_period.SetDateHasService('20071231', has_service=False)
-
- stops = {} # Map from xml stop id to python Stop object
- agency = schedule.NewDefaultAgency(name='GBus', url='http://shuttle/',
- timezone='America/Los_Angeles')
-
- for xml_stop in tree.getiterator('stop'):
- stop = schedule.AddStop(lat=float(xml_stop.attrib['lat']),
- lng=float(xml_stop.attrib['lng']),
- name=xml_stop.attrib['name'])
- stops[xml_stop.attrib['id']] = stop
-
- for xml_shuttleGroup in tree.getiterator('shuttleGroup'):
- if xml_shuttleGroup.attrib['name'] == 'Test':
- continue
- r = schedule.AddRoute(short_name="",
- long_name=xml_shuttleGroup.attrib['name'], route_type='Bus')
- for xml_route in xml_shuttleGroup.getiterator('route'):
- t = r.AddTrip(schedule=schedule, headsign=xml_route.attrib['name'],
- trip_id=xml_route.attrib['id'])
- trip_stops = [] # Build a list of (time, Stop) tuples
- for xml_schedule in xml_route.getiterator('schedule'):
- trip_stops.append( (int(xml_schedule.attrib['time']) / 1000,
- stops[xml_schedule.attrib['stopId']]) )
- trip_stops.sort() # Sort by time
- for (time, stop) in trip_stops:
- t.AddStopTime(stop=stop, arrival_secs=time, departure_secs=time)
-
- schedule.Validate(problems=NoUnusedStopExceptionProblemReporter())
- schedule.WriteGoogleTransitFeed(output)
-
-
-def main():
- parser = OptionParser()
- parser.add_option('--input', dest='input',
- help='Path or URL of input')
- parser.add_option('--output', dest='output',
- help='Path of output file. Should end in .zip and if it '
- 'contains the substring YYYYMMDD it will be replaced with '
- 'today\'s date. It is impossible to include the literal '
- 'string YYYYMMDD in the path of the output file.')
- parser.add_option('--execute', dest='execute',
- help='Commands to run to copy the output. %(path)s is '
- 'replaced with full path of the output and %(name)s is '
- 'replaced with name part of the path. Try '
- 'scp %(path)s myhost:www/%(name)s',
- action='append')
- parser.set_defaults(input=None, output=None, execute=[])
- (options, args) = parser.parse_args()
-
- today = datetime.date.today().strftime('%Y%m%d')
- options.output = re.sub(r'YYYYMMDD', today, options.output)
- (_, name) = os.path.split(options.output)
- path = options.output
-
- SaveFeed(options.input, options.output)
-
- for command in options.execute:
- import subprocess
- def check_call(cmd):
- """Convenience function that is in the docs for subprocess but not
- installed on my system."""
- retcode = subprocess.call(cmd, shell=True)
- if retcode < 0:
- raise Exception("Child '%s' was terminated by signal %d" % (cmd,
- -retcode))
- elif retcode != 0:
- raise Exception("Child '%s' returned %d" % (cmd, retcode))
-
- # path and name (defined above) are substituted into the command via locals()
- check_call(command % locals())
-
-if __name__ == '__main__':
- main()
-
--- a/origin-src/transitfeed-1.2.5/examples/shuttle_from_xmlfeed.xml
+++ /dev/null
@@ -1,30 +1,1 @@
-<shuttle><office id="us-nye" name="US Nye County">
-<stops>
-<stop id="1" name="Stagecoach Hotel and Casino" shortName="Stagecoach" lat="36.915682" lng="-116.751677" />
-<stop id="2" name="North Ave / N A Ave" shortName="N Ave / A Ave N" lat="36.914944" lng="-116.761472" />
-<stop id="3" name="North Ave / D Ave N" shortName="N Ave / D Ave N" lat="36.914893" lng="-116.76821" />
-<stop id="4" name="Doing Ave / D Ave N" shortName="Doing / D Ave N" lat="36.909489" lng="-116.768242" />
-<stop id="5" name="E Main St / S Irving St" shortName="E Main / S Irving" lat="36.905697" lng="-116.76218" />
-</stops>
-<shuttleGroups>
-<shuttleGroup id="4" name="Bar Circle Loop" >
-<routes>
-<route id="1" name="Outbound">
-<schedules>
-<schedule id="164" stopId="1" time="60300000"/>
-<schedule id="165" stopId="2" time="60600000"/>
-<schedule id="166" stopId="3" time="60720000"/>
-<schedule id="167" stopId="4" time="60780000"/>
-<schedule id="168" stopId="5" time="60900000"/>
-</schedules><meta></meta></route>
-<route id="2" name="Inbound">
-<schedules>
-<schedule id="260" stopId="5" time="30000000"/>
-<schedule id="261" stopId="4" time="30120000"/>
-<schedule id="262" stopId="3" time="30180000"/>
-<schedule id="263" stopId="2" time="30300000"/>
-<schedule id="264" stopId="1" time="30600000"/>
-</schedules><meta></meta></route></routes>
-</shuttleGroup>
-</shuttleGroups></office></shuttle>
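The shuttle script reads this XML with `getiterator()`, which Python 3's ElementTree removed in favor of `iter()`. A hedged sketch of the stop-loading step, run against a trimmed copy of the sample above (same element and attribute names):

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the sample shuttle XML above (same attribute names).
SAMPLE = """<shuttle><office id="us-nye" name="US Nye County">
<stops>
<stop id="1" name="Stagecoach Hotel and Casino" lat="36.915682" lng="-116.751677"/>
<stop id="2" name="North Ave / N A Ave" lat="36.914944" lng="-116.761472"/>
</stops>
</office></shuttle>"""

root = ET.fromstring(SAMPLE)
# Python 3: Element.iter() replaces the deprecated getiterator() call.
stops = {s.attrib["id"]: (float(s.attrib["lat"]), float(s.attrib["lng"]))
         for s in root.iter("stop")}
```

The resulting dict maps the XML stop id to a (lat, lng) pair, mirroring the `stops` map the script builds before adding each stop to the schedule.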
--- a/origin-src/transitfeed-1.2.5/examples/small_builder.py
+++ /dev/null
@@ -1,40 +1,1 @@
-#!/usr/bin/python2.5
-# A really simple example of using transitfeed to build a Google Transit
-# Feed Specification file.
-
-import transitfeed
-from optparse import OptionParser
-
-
-parser = OptionParser()
-parser.add_option('--output', dest='output',
- help='Path of output file. Should end in .zip')
-parser.set_defaults(output='google_transit.zip')
-(options, args) = parser.parse_args()
-
-schedule = transitfeed.Schedule()
-schedule.AddAgency("Fly Agency", "http://iflyagency.com",
- "America/Los_Angeles")
-
-service_period = schedule.GetDefaultServicePeriod()
-service_period.SetWeekdayService(True)
-service_period.SetDateHasService('20070704')
-
-stop1 = schedule.AddStop(lng=-122, lat=37.2, name="Suburbia")
-stop2 = schedule.AddStop(lng=-122.001, lat=37.201, name="Civic Center")
-
-route = schedule.AddRoute(short_name="22", long_name="Civic Center Express",
- route_type="Bus")
-
-trip = route.AddTrip(schedule, headsign="To Downtown")
-trip.AddStopTime(stop1, stop_time='09:00:00')
-trip.AddStopTime(stop2, stop_time='09:15:00')
-
-trip = route.AddTrip(schedule, headsign="To Suburbia")
-trip.AddStopTime(stop1, stop_time='17:30:00')
-trip.AddStopTime(stop2, stop_time='17:45:00')
-
-schedule.Validate()
-schedule.WriteGoogleTransitFeed(options.output)
-
--- a/origin-src/transitfeed-1.2.5/examples/table.py
+++ /dev/null
@@ -1,177 +1,1 @@
-#!/usr/bin/python2.5
-# Copyright (C) 2007 Google Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# An example script that demonstrates converting a proprietary format to a
-# Google Transit Feed Specification file.
-#
-# You can load table.txt, the example input, in Excel. It contains three
-# sections:
-# 1) A list of global options, starting with a line containing the word
-# 'options'. Each option has a name in the first column and most options
-# have a value in the second column.
-# 2) A table of stops, starting with a line containing the word 'stops'. Each
-# row of the table has 3 columns: name, latitude, longitude
-# 3) A list of routes. There is an empty row between each route. The first row
-# for a route lists the short_name and long_name. After the first row the
-# left-most column lists the stop names visited by the route. Each column
-# contains the times a single trip visits the stops.
-#
-# This is a very simple example that you could use as a base for your own
-# transit feed builder.
-
-import transitfeed
-from optparse import OptionParser
-import re
-
-stops = {}
-
-# table is a list of lists in this form
-# [ ['Short Name', 'Long Name'],
-# ['Stop 1', 'Stop 2', ...]
-# [time_at_1, time_at_2, ...] # times for trip 1
-# [time_at_1, time_at_2, ...] # times for trip 2
-# ... ]
-def AddRouteToSchedule(schedule, table):
- if len(table) >= 2:
- r = schedule.AddRoute(short_name=table[0][0], long_name=table[0][1], route_type='Bus')
- for trip in table[2:]:
- if len(trip) > len(table[1]):
- print "ignoring %s" % trip[len(table[1]):]
- trip = trip[0:len(table[1])]
- t = r.AddTrip(schedule, headsign='My headsign')
- trip_stops = [] # Build a list of (time, stopname) tuples
- for i in range(0, len(trip)):
- if re.search(r'\S', trip[i]):
- trip_stops.append( (transitfeed.TimeToSecondsSinceMidnight(trip[i]), table[1][i]) )
- trip_stops.sort() # Sort by time
- for (time, stopname) in trip_stops:
- t.AddStopTime(stop=stops[stopname.lower()], arrival_secs=time,
- departure_secs=time)
-
-def TransposeTable(table):
- """Transpose a list of lists, using None to extend all input lists to the
- same length.
-
- For example:
- >>> TransposeTable(
- [ [11, 12, 13],
- [21, 22],
- [31, 32, 33, 34]])
-
- [ [11, 21, 31],
- [12, 22, 32],
- [13, None, 33],
- [None, None, 34]]
- """
- transposed = []
- rows = len(table)
- cols = max(len(row) for row in table)
- for x in range(cols):
- transposed.append([])
- for y in range(rows):
- if x < len(table[y]):
- transposed[x].append(table[y][x])
- else:
- transposed[x].append(None)
- return transposed
-
-def ProcessOptions(schedule, table):
- service_period = schedule.GetDefaultServicePeriod()
- agency_name, agency_url, agency_timezone = (None, None, None)
-
- for row in table[1:]:
- command = row[0].lower()
- if command == 'weekday':
- service_period.SetWeekdayService()
- elif command == 'start_date':
- service_period.SetStartDate(row[1])
- elif command == 'end_date':
- service_period.SetEndDate(row[1])
- elif command == 'add_date':
- service_period.SetDateHasService(date=row[1])
- elif command == 'remove_date':
- service_period.SetDateHasService(date=row[1], has_service=False)
- elif command == 'agency_name':
- agency_name = row[1]
- elif command == 'agency_url':
- agency_url = row[1]
- elif command == 'agency_timezone':
- agency_timezone = row[1]
-
- if not (agency_name and agency_url and agency_timezone):
- print "You must provide agency information"
-
- schedule.NewDefaultAgency(agency_name=agency_name, agency_url=agency_url,
- agency_timezone=agency_timezone)
-
-
-def AddStops(schedule, table):
- for name, lat_str, lng_str in table[1:]:
- stop = schedule.AddStop(lat=float(lat_str), lng=float(lng_str), name=name)
- stops[name.lower()] = stop
-
-
-def ProcessTable(schedule, table):
- if table[0][0].lower() == 'options':
- ProcessOptions(schedule, table)
- elif table[0][0].lower() == 'stops':
- AddStops(schedule, table)
- else:
- transposed = [table[0]] # Keep route_short_name and route_long_name on first row
-
- # Transpose rest of table. Input contains the stop names in table[x][0], x
- # >= 1 with trips found in columns, so we need to transpose table[1:].
- # As a diagram Transpose from
- # [['stop 1', '10:00', '11:00', '12:00'],
- # ['stop 2', '10:10', '11:10', '12:10'],
- # ['stop 3', '10:20', '11:20', '12:20']]
- # to
- # [['stop 1', 'stop 2', 'stop 3'],
- # ['10:00', '10:10', '10:20'],
- # ['11:00', '11:10', '11:20'],
- # ['12:00', '12:10', '12:20']]
- transposed.extend(TransposeTable(table[1:]))
- AddRouteToSchedule(schedule, transposed)
-
-
-def main():
- parser = OptionParser()
- parser.add_option('--input', dest='input',
- help='Path of input file')
- parser.add_option('--output', dest='output',
- help='Path of output file, should end in .zip')
- parser.set_defaults(output='feed.zip')
- (options, args) = parser.parse_args()
-
- schedule = transitfeed.Schedule()
-
- table = []
- for line in open(options.input):
- line = line.rstrip()
- if not line:
- ProcessTable(schedule, table)
- table = []
- else:
- table.append(line.split('\t'))
-
- ProcessTable(schedule, table)
-
- schedule.WriteGoogleTransitFeed(options.output)
-
-
-if __name__ == '__main__':
- main()
-
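The `TransposeTable` helper above pads ragged rows with `None`; in Python 3 the same behavior fits in a comprehension. A sketch, not a drop-in replacement (the name is changed to PEP 8 style):

```python
def transpose_table(table):
    """Transpose a list of rows, padding short rows with None so every
    output row has one entry per input row."""
    cols = max(len(row) for row in table)
    # For column x, collect table[y][x] from each row y, or None if row y
    # is too short to have a column x.
    return [[row[x] if x < len(row) else None for row in table]
            for x in range(cols)]
```

Fed the docstring's example input, it produces exactly the padded output shown there.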
--- a/origin-src/transitfeed-1.2.5/examples/table.txt
+++ /dev/null
@@ -1,30 +1,1 @@
-options
-weekday
-start_date 20070315
-end_date 20071215
-remove_date 20070704
-agency_name Gbus
-agency_url http://shuttle/
-agency_timezone America/Los_Angeles
-stops
-Stagecoach 36.915682 -116.751677
-N Ave / A Ave N 36.914944 -116.761472
-N Ave / D Ave N 36.914893 -116.76821
-Doing / D Ave N 36.909489 -116.768242
-E Main / S Irving 36.905697 -116.76218
-
-O in Bar Circle Inbound
-Stagecoach 9:00:00 9:30:00 10:00:00 12:00:00
-N Ave / A Ave N 9:05:00 9:35:00 10:05:00 12:05:00
-N Ave / D Ave N 9:07:00 9:37:00 10:07:00 12:07:00
-Doing / D Ave N 9:09:00 9:39:00 10:09:00 12:09:00
-E Main / S Irving 9:11:00 9:41:00 10:11:00 12:11:00
-
-O out Bar Circle Outbound
-E Main / S Irving 15:00:00 15:30:00 16:00:00 18:00:00
-Doing / D Ave N 15:05:00 15:35:00 16:05:00 18:05:00
-N Ave / D Ave N 15:07:00 15:37:00 16:07:00 18:07:00
-N Ave / A Ave N 15:09:00 15:39:00 16:09:00 18:09:00
-Stagecoach 15:11:00 15:41:00 16:11:00 18:11:00
-
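table.py converts the `H:MM:SS` cells above with `transitfeed.TimeToSecondsSinceMidnight`. The core arithmetic is simple; a sketch under the assumption that input is well-formed (the real library also validates the format, which this omits):

```python
def time_to_seconds_since_midnight(text):
    """Convert an 'H:MM:SS' or 'HH:MM:SS' cell to seconds since midnight.
    Hours may exceed 23 for GTFS trips that run past midnight."""
    hours, minutes, seconds = (int(part) for part in text.split(":"))
    return hours * 3600 + minutes * 60 + seconds
```

So the `9:05:00` departure in the first trip column becomes 32700 seconds, the value passed to `AddStopTime` as `arrival_secs` and `departure_secs`.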
--- a/origin-src/transitfeed-1.2.5/feedvalidator.py
+++ /dev/null
@@ -1,723 +1,1 @@
-#!/usr/bin/python2.5
-# Copyright (C) 2007 Google Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-"""Validates a GTFS file.
-
-For usage information run feedvalidator.py --help
-"""
-
-import bisect
-import codecs
-import datetime
-from transitfeed.util import defaultdict
-import optparse
-import os
-import os.path
-import re
-import socket
-import sys
-import time
-import transitfeed
-from transitfeed import TYPE_ERROR, TYPE_WARNING
-from urllib2 import Request, urlopen, HTTPError, URLError
-from transitfeed import util
-import webbrowser
-
-SVN_TAG_URL = 'http://googletransitdatafeed.googlecode.com/svn/tags/'
-
-
-def MaybePluralizeWord(count, word):
- if count == 1:
- return word
- else:
- return word + 's'
-
-
-def PrettyNumberWord(count, word):
- return '%d %s' % (count, MaybePluralizeWord(count, word))
-
-
-def UnCamelCase(camel):
- return re.sub(r'([a-z])([A-Z])', r'\1 \2', camel)
-
-
-def ProblemCountText(error_count, warning_count):
- results = []
- if error_count:
- results.append(PrettyNumberWord(error_count, 'error'))
- if warning_count:
- results.append(PrettyNumberWord(warning_count, 'warning'))
-
- return ' and '.join(results)
-
-
-def CalendarSummary(schedule):
- today = datetime.date.today()
- summary_end_date = today + datetime.timedelta(days=60)
- start_date, end_date = schedule.GetDateRange()
-
- if not start_date or not end_date:
- return {}
-
- try:
- start_date_object = transitfeed.DateStringToDateObject(start_date)
- end_date_object = transitfeed.DateStringToDateObject(end_date)
- except ValueError:
- return {}
-
- # Get the list of trips only during the period the feed is active.
- # As such we have to check if it starts in the future and/or
- # if it ends in less than 60 days.
- date_trips_departures = schedule.GenerateDateTripsDeparturesList(
- max(today, start_date_object),
- min(summary_end_date, end_date_object))
-
- if not date_trips_departures:
- return {}
-
- # Check that the dates which will be shown in summary agree with these
- # calculations. Failure implies a bug which should be fixed: users should not
- # see assertion failures, but failing loudly means the bug is likely fixed.
- assert start_date <= date_trips_departures[0][0].strftime("%Y%m%d")
- assert end_date >= date_trips_departures[-1][0].strftime("%Y%m%d")
-
- # Generate a map from int number of trips in a day to a list of date objects
- # with that many trips. The list of dates is sorted.
- trips_dates = defaultdict(lambda: [])
- trips = 0
- for date, day_trips, day_departures in date_trips_departures:
- trips += day_trips
- trips_dates[day_trips].append(date)
- mean_trips = trips / len(date_trips_departures)
- max_trips = max(trips_dates.keys())
- min_trips = min(trips_dates.keys())
-
- calendar_summary = {}
- calendar_summary['mean_trips'] = mean_trips
- calendar_summary['max_trips'] = max_trips
- calendar_summary['max_trips_dates'] = FormatDateList(trips_dates[max_trips])
- calendar_summary['min_trips'] = min_trips
- calendar_summary['min_trips_dates'] = FormatDateList(trips_dates[min_trips])
- calendar_summary['date_trips_departures'] = date_trips_departures
- calendar_summary['date_summary_range'] = "%s to %s" % (
- date_trips_departures[0][0].strftime("%a %b %d"),
- date_trips_departures[-1][0].strftime("%a %b %d"))
-
- return calendar_summary
-
-
-def FormatDateList(dates):
- if not dates:
- return "0 service dates"
-
- formatted = [d.strftime("%a %b %d") for d in dates[0:3]]
- if len(dates) > 3:
- formatted.append("...")
- return "%s (%s)" % (PrettyNumberWord(len(dates), "service date"),
- ", ".join(formatted))
-
-
-def MaxVersion(versions):
- versions = filter(None, versions)
- versions.sort(lambda x,y: -cmp([int(item) for item in x.split('.')],
- [int(item) for item in y.split('.')]))
- if len(versions) > 0:
- return versions[0]
-
-
-class CountingConsoleProblemReporter(transitfeed.ProblemReporter):
- def __init__(self):
- transitfeed.ProblemReporter.__init__(self)
- self._error_count = 0
- self._warning_count = 0
-
- def _Report(self, e):
- transitfeed.ProblemReporter._Report(self, e)
- if e.IsError():
- self._error_count += 1
- else:
- self._warning_count += 1
-
- def ErrorCount(self):
- return self._error_count
-
- def WarningCount(self):
- return self._warning_count
-
- def FormatCount(self):
- return ProblemCountText(self.ErrorCount(), self.WarningCount())
-
- def HasIssues(self):
- return self.ErrorCount() or self.WarningCount()
-
-
-class BoundedProblemList(object):
- """A list of one type of ExceptionWithContext objects with bounded size."""
- def __init__(self, size_bound):
- self._count = 0
- self._exceptions = []
- self._size_bound = size_bound
-
- def Add(self, e):
- self._count += 1
- try:
- bisect.insort(self._exceptions, e)
- except TypeError:
- # The base class ExceptionWithContext raises this exception in __cmp__
- # to signal that an object is not comparable. Instead of keeping the most
- # significant issue, keep the first reported.
- if self._count <= self._size_bound:
- self._exceptions.append(e)
- else:
- # self._exceptions is in order. Drop the least significant if the list is
- # now too long.
- if self._count > self._size_bound:
- del self._exceptions[-1]
-
- def _GetDroppedCount(self):
- return self._count - len(self._exceptions)
-
- def __repr__(self):
- return "<BoundedProblemList %s>" % repr(self._exceptions)
-
- count = property(lambda s: s._count)
- dropped_count = property(_GetDroppedCount)
- problems = property(lambda s: s._exceptions)
-
-
-class LimitPerTypeProblemReporter(transitfeed.ProblemReporter):
- def __init__(self, limit_per_type):
- transitfeed.ProblemReporter.__init__(self)
-
- # {TYPE_WARNING: {"ClassName": BoundedProblemList()}}
- self._type_to_name_to_problist = {
- TYPE_WARNING: defaultdict(lambda: BoundedProblemList(limit_per_type)),
- TYPE_ERROR: defaultdict(lambda: BoundedProblemList(limit_per_type))
- }
-
- def HasIssues(self):
- return (self._type_to_name_to_problist[TYPE_ERROR] or
- self._type_to_name_to_problist[TYPE_WARNING])
-
- def _Report(self, e):
- self._type_to_name_to_problist[e.GetType()][e.__class__.__name__].Add(e)
-
- def ErrorCount(self):
- error_sets = self._type_to_name_to_problist[TYPE_ERROR].values()
- return sum(map(lambda v: v.count, error_sets))
-
- def WarningCount(self):
- warning_sets = self._type_to_name_to_problist[TYPE_WARNING].values()
- return sum(map(lambda v: v.count, warning_sets))
-
- def ProblemList(self, problem_type, class_name):
- """Return the BoundedProblemList object for given type and class."""
- return self._type_to_name_to_problist[problem_type][class_name]
-
- def ProblemListMap(self, problem_type):
- """Return the map from class name to BoundedProblemList object."""
- return self._type_to_name_to_problist[problem_type]
-
-
-class HTMLCountingProblemReporter(LimitPerTypeProblemReporter):
- def FormatType(self, f, level_name, class_problist):
- """Write the HTML dumping all problems of one type.
-
- Args:
- f: file object open for writing
- level_name: string such as "Error" or "Warning"
- class_problist: sequence of tuples (class name,
- BoundedProblemList object)
- """
- class_problist.sort()
- output = []
- for classname, problist in class_problist:
- output.append('<h4 class="issueHeader"><a name="%s%s">%s</a></h4><ul>\n' %
- (level_name, classname, UnCamelCase(classname)))
- for e in problist.problems:
- self.FormatException(e, output)
- if problist.dropped_count:
- output.append('<li>and %d more of this type.' %
- (problist.dropped_count))
- output.append('</ul>\n')
- f.write(''.join(output))
-
- def FormatTypeSummaryTable(self, level_name, name_to_problist):
- """Return an HTML table listing the number of problems by class name.
-
- Args:
- level_name: string such as "Error" or "Warning"
- name_to_problist: dict mapping class name to a BoundedProblemList object
-
- Returns:
- HTML in a string
- """
- output = []
- output.append('<table>')
- for classname in sorted(name_to_problist.keys()):
- problist = name_to_problist[classname]
- human_name = MaybePluralizeWord(problist.count, UnCamelCase(classname))
- output.append('<tr><td>%d</td><td><a href="#%s%s">%s</a></td></tr>\n' %
- (problist.count, level_name, classname, human_name))
- output.append('</table>\n')
- return ''.join(output)
-
- def FormatException(self, e, output):
- """Append HTML version of e to list output."""
- d = e.GetDictToFormat()
- for k in ('file_name', 'feedname', 'column_name'):
- if k in d.keys():
- d[k] = '<code>%s</code>' % d[k]
- problem_text = e.FormatProblem(d).replace('\n', '<br>')
- output.append('<li>')
- output.append('<div class="problem">%s</div>' %
- transitfeed.EncodeUnicode(problem_text))
- try:
- if hasattr(e, 'row_num'):
- line_str = 'line %d of ' % e.row_num
- else:
- line_str = ''
- output.append('in %s<code>%s</code><br>\n' %
- (line_str, e.file_name))
- row = e.row
- headers = e.headers
- column_name = e.column_name
- table_header = '' # HTML
- table_data = '' # HTML
- for header, value in zip(headers, row):
- attributes = ''
- if header == column_name:
- attributes = ' class="problem"'
- table_header += '<th%s>%s</th>' % (attributes, header)
- table_data += '<td%s>%s</td>' % (attributes, value)
- # Make sure output is encoded into UTF-8
- output.append('<table class="dump"><tr>%s</tr>\n' %
- transitfeed.EncodeUnicode(table_header))
- output.append('<tr>%s</tr></table>\n' %
- transitfeed.EncodeUnicode(table_data))
- except AttributeError, e:
- pass # Hope this was getting an attribute from e ;-)
- output.append('<br></li>\n')
-
- def FormatCount(self):
- return ProblemCountText(self.ErrorCount(), self.WarningCount())
-
- def CountTable(self):
- output = []
- output.append('<table class="count_outside">\n')
- output.append('<tr>')
- if self.ProblemListMap(TYPE_ERROR):
- output.append('<td><span class="fail">%s</span></td>' %
- PrettyNumberWord(self.ErrorCount(), "error"))
- if self.ProblemListMap(TYPE_WARNING):
- output.append('<td><span class="fail">%s</span></td>' %
- PrettyNumberWord(self.WarningCount(), "warning"))
- output.append('</tr>\n<tr>')
- if self.ProblemListMap(TYPE_ERROR):
- output.append('<td>\n')
- output.append(self.FormatTypeSummaryTable("Error",
- self.ProblemListMap(TYPE_ERROR)))
- output.append('</td>\n')
- if self.ProblemListMap(TYPE_WARNING):
- output.append('<td>\n')
- output.append(self.FormatTypeSummaryTable("Warning",
- self.ProblemListMap(TYPE_WARNING)))
- output.append('</td>\n')
- output.append('</table>')
- return ''.join(output)
-
- def WriteOutput(self, feed_location, f, schedule, other_problems):
- """Write the html output to f."""
- if self.HasIssues():
- if self.ErrorCount() + self.WarningCount() == 1:
- summary = ('<span class="fail">Found this problem:</span>\n%s' %
- self.CountTable())
- else:
- summary = ('<span class="fail">Found these problems:</span>\n%s' %
- self.CountTable())
- else:
- summary = '<span class="pass">feed validated successfully</span>'
- if other_problems is not None:
- summary = ('<span class="fail">\n%s</span><br><br>' %
- other_problems) + summary
-
- basename = os.path.basename(feed_location)
- feed_path = (feed_location[:feed_location.rfind(basename)], basename)
-
- agencies = ', '.join(['<a href="%s">%s</a>' % (a.agency_url, a.agency_name)
- for a in schedule.GetAgencyList()])
- if not agencies:
- agencies = '?'
-
- dates = "No valid service dates found"
- (start, end) = schedule.GetDateRange()
- if start and end:
- def FormatDate(yyyymmdd):
- src_format = "%Y%m%d"
- dst_format = "%B %d, %Y"
- try:
- return time.strftime(dst_format,
- time.strptime(yyyymmdd, src_format))
- except ValueError:
- return yyyymmdd
-
- formatted_start = FormatDate(start)
- formatted_end = FormatDate(end)
- dates = "%s to %s" % (formatted_start, formatted_end)
-
- calendar_summary = CalendarSummary(schedule)
- if calendar_summary:
- calendar_summary_html = """<br>
-During the upcoming service dates %(date_summary_range)s:
-<table>
-<tr><th class="header">Average trips per date:</th><td class="header">%(mean_trips)s</td></tr>
-<tr><th class="header">Most trips on a date:</th><td class="header">%(max_trips)s, on %(max_trips_dates)s</td></tr>
-<tr><th class="header">Least trips on a date:</th><td class="header">%(min_trips)s, on %(min_trips_dates)s</td></tr>
-</table>""" % calendar_summary
- else:
- calendar_summary_html = ""
-
- output_prefix = """
-<html>
-<head>
-<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
-<title>FeedValidator: %(feed_file)s</title>
-<style>
-body {font-family: Georgia, serif; background-color: white}
-.path {color: gray}
-div.problem {max-width: 500px}
-table.dump td,th {background-color: khaki; padding: 2px; font-family:monospace}
-table.dump td.problem,th.problem {background-color: #dc143c; color: white; padding: 2px; font-family:monospace}
-table.count_outside td {vertical-align: top}
-table.count_outside {border-spacing: 0px; }
-table {border-spacing: 5px 0px; margin-top: 3px}
-h3.issueHeader {padding-left: 0.5em}
-h4.issueHeader {padding-left: 1em}
-.pass {background-color: lightgreen}
-.fail {background-color: yellow}
-.pass, .fail {font-size: 16pt}
-.header {background-color: white; font-family: Georgia, serif; padding: 0px}
-th.header {text-align: right; font-weight: normal; color: gray}
-.footer {font-size: 10pt}
-</style>
-</head>
-<body>
-GTFS validation results for feed:<br>
-<code><span class="path">%(feed_dir)s</span><b>%(feed_file)s</b></code>
-<br><br>
-<table>
-<tr><th class="header">Agencies:</th><td class="header">%(agencies)s</td></tr>
-<tr><th class="header">Routes:</th><td class="header">%(routes)s</td></tr>
-<tr><th class="header">Stops:</th><td class="header">%(stops)s</td></tr>
-<tr><th class="header">Trips:</th><td class="header">%(trips)s</td></tr>
-<tr><th class="header">Shapes:</th><td class="header">%(shapes)s</td></tr>
-<tr><th class="header">Effective:</th><td class="header">%(dates)s</td></tr>
-</table>
-%(calendar_summary)s
-<br>
-%(problem_summary)s
-<br><br>
-""" % { "feed_file": feed_path[1],
- "feed_dir": feed_path[0],
- "agencies": agencies,
- "routes": len(schedule.GetRouteList()),
- "stops": len(schedule.GetStopList()),
- "trips": len(schedule.GetTripList()),
- "shapes": len(schedule.GetShapeList()),
- "dates": dates,
- "problem_summary": summary,
- "calendar_summary": calendar_summary_html}
-
-# In the output_suffix string below,
-# time.strftime() returns a regular local time string (not a Unicode one) in
-# the default system encoding, and decode() converts this time string back
-# into a Unicode string. We use decode() here so that the operating system's
-# encoding does not mangle the string (which can happen if it contains
-# non-English characters). Therefore we decode it back to its original
-# Unicode code points.
-
- time_unicode = (time.strftime('%B %d, %Y at %I:%M %p %Z').
- decode(sys.getfilesystemencoding()))
- output_suffix = """
-<div class="footer">
-Generated by <a href="http://code.google.com/p/googletransitdatafeed/wiki/FeedValidator">
-FeedValidator</a> version %s on %s.
-</div>
-</body>
-</html>""" % (transitfeed.__version__, time_unicode)
-
- f.write(transitfeed.EncodeUnicode(output_prefix))
- if self.ProblemListMap(TYPE_ERROR):
- f.write('<h3 class="issueHeader">Errors:</h3>')
- self.FormatType(f, "Error",
- self.ProblemListMap(TYPE_ERROR).items())
- if self.ProblemListMap(TYPE_WARNING):
- f.write('<h3 class="issueHeader">Warnings:</h3>')
- self.FormatType(f, "Warning",
- self.ProblemListMap(TYPE_WARNING).items())
- f.write(transitfeed.EncodeUnicode(output_suffix))
-
-
-def RunValidationOutputFromOptions(feed, options):
- """Validate feed, output results per options and return an exit code."""
- if options.output.upper() == "CONSOLE":
- return RunValidationOutputToConsole(feed, options)
- else:
- return RunValidationOutputToFilename(feed, options, options.output)
-
-
-def RunValidationOutputToFilename(feed, options, output_filename):
- """Validate feed, save HTML at output_filename and return an exit code."""
- try:
- output_file = open(output_filename, 'w')
- exit_code = RunValidationOutputToFile(feed, options, output_file)
- output_file.close()
- except IOError, e:
- print 'Error while writing %s: %s' % (output_filename, e)
- output_filename = None
- exit_code = 2
-
- if options.manual_entry and output_filename:
- webbrowser.open('file://%s' % os.path.abspath(output_filename))
-
- return exit_code
-
-
-def RunValidationOutputToFile(feed, options, output_file):
- """Validate feed, write HTML to output_file and return an exit code."""
- problems = HTMLCountingProblemReporter(options.limit_per_type)
- schedule, exit_code, other_problems_string = RunValidation(feed, options,
- problems)
- if isinstance(feed, basestring):
- feed_location = feed
- else:
- feed_location = getattr(feed, 'name', repr(feed))
- problems.WriteOutput(feed_location, output_file, schedule,
- other_problems_string)
- return exit_code
-
-
-def RunValidationOutputToConsole(feed, options):
- """Validate feed, print reports and return an exit code."""
- problems = CountingConsoleProblemReporter()
- _, exit_code, _ = RunValidation(feed, options, problems)
- return exit_code
-
-
-def RunValidation(feed, options, problems):
- """Validate feed, returning the loaded Schedule and exit code.
-
- Args:
- feed: GTFS file, either path of the file as a string or a file object
- options: options object returned by optparse
- problems: transitfeed.ProblemReporter instance
-
- Returns:
- a transitfeed.Schedule object, exit code and plain text string of other
- problems
- Exit code is 1 if problems are found and 0 if the Schedule is problem free.
- plain text string is '' if no other problems are found.
- """
- other_problems_string = CheckVersion(latest_version=options.latest_version)
- print 'validating %s' % feed
- loader = transitfeed.Loader(feed, problems=problems, extra_validation=False,
- memory_db=options.memory_db,
- check_duplicate_trips=\
- options.check_duplicate_trips)
- schedule = loader.Load()
- schedule.Validate(service_gap_interval=options.service_gap_interval)
-
- if feed == 'IWantMyvalidation-crash.txt':
- # See test/testfeedvalidator.py
- raise Exception('For testing the feed validator crash handler.')
-
- if other_problems_string:
- print other_problems_string
-
- if problems.HasIssues():
- print 'ERROR: %s found' % problems.FormatCount()
- return schedule, 1, other_problems_string
- else:
- print 'feed validated successfully'
- return schedule, 0, other_problems_string
-
-
-def CheckVersion(latest_version=''):
- """
- Check whether a newer version of this project is available.
-
- Code is based on http://www.voidspace.org.uk/python/articles/urllib2.shtml
- and is used with permission from the copyright holder.
- """
- current_version = transitfeed.__version__
- if not latest_version:
- timeout = 20
- socket.setdefaulttimeout(timeout)
- request = Request(SVN_TAG_URL)
-
- try:
- response = urlopen(request)
- content = response.read()
- versions = re.findall(r'>transitfeed-([\d\.]+)\/<\/a>', content)
- latest_version = MaxVersion(versions)
-
- except HTTPError, e:
- return('The server couldn\'t fulfill the request. Error code: %s.'
- % e.code)
- except URLError, e:
- return('We failed to reach transitfeed server. Reason: %s.' % e.reason)
-
- if not latest_version:
- return('We had trouble parsing the contents of %s.' % SVN_TAG_URL)
-
- newest_version = MaxVersion([latest_version, current_version])
- if current_version != newest_version:
- return('A new version %s of transitfeed is available. Please visit '
- 'http://code.google.com/p/googletransitdatafeed and download.'
- % newest_version)
-
-
-def main():
- usage = \
-'''%prog [options] [<input GTFS.zip>]
-
-Validates the GTFS file (or directory) <input GTFS.zip> and writes an HTML
-report of the results to validation-results.html.
-
-If <input GTFS.zip> is omitted the filename is read from the console. Dragging
-a file onto the console window may enter the filename.
-
-For more information see
-http://code.google.com/p/googletransitdatafeed/wiki/FeedValidator
-'''
-
- parser = util.OptionParserLongError(
- usage=usage, version='%prog '+transitfeed.__version__)
- parser.add_option('-n', '--noprompt', action='store_false',
- dest='manual_entry',
- help='do not prompt for feed location or load output in '
- 'browser')
- parser.add_option('-o', '--output', dest='output', metavar='FILE',
- help='write html output to FILE or --output=CONSOLE to '
- 'print all errors and warnings to the command console')
- parser.add_option('-p', '--performance', action='store_true',
- dest='performance',
- help='output memory and time performance (Availability: '
- 'Unix)')
- parser.add_option('-m', '--memory_db', dest='memory_db', action='store_true',
- help='Use in-memory sqlite db instead of a temporary file. '
- 'It is faster but uses more RAM.')
- parser.add_option('-d', '--duplicate_trip_check',
- dest='check_duplicate_trips', action='store_true',
- help='Check for duplicate trips which go through the same '
- 'stops with same service and start times')
- parser.add_option('-l', '--limit_per_type',
- dest='limit_per_type', action='store', type='int',
- help='Maximum number of errors and warnings to keep of '
- 'each type')
- parser.add_option('--latest_version', dest='latest_version',
- action='store',
- help='a version number such as 1.2.1 or None to get the '
- 'latest version from code.google.com. Output a warning if '
- 'transitfeed.py is older than this version.')
- parser.add_option('--service_gap_interval',
- dest='service_gap_interval',
- action='store',
- type='int',
- help='the number of consecutive days to search for with no '
- 'scheduled service. For each interval with no service '
- 'having this number of days or more a warning will be '
- 'issued')
-
- parser.set_defaults(manual_entry=True, output='validation-results.html',
- memory_db=False, check_duplicate_trips=False,
- limit_per_type=5, latest_version='',
- service_gap_interval=13)
- (options, args) = parser.parse_args()
-
- if len(args) != 1:
- if options.manual_entry:
- feed = raw_input('Enter Feed Location: ')
- else:
- parser.error('You must provide the path of a single feed')
- else:
- feed = args[0]
-
- feed = feed.strip('"')
-
- if options.performance:
- return ProfileRunValidationOutputFromOptions(feed, options)
- else:
- return RunValidationOutputFromOptions(feed, options)
-
-
-def ProfileRunValidationOutputFromOptions(feed, options):
- """Run RunValidationOutputFromOptions, print profile and return exit code."""
- import cProfile
- import pstats
- # runctx will modify a dict, but not locals(). We need a way to get rv back.
- locals_for_exec = locals()
- cProfile.runctx('rv = RunValidationOutputFromOptions(feed, options)',
- globals(), locals_for_exec, 'validate-stats')
-
- # Only available on Unix, http://docs.python.org/lib/module-resource.html
- import resource
- print "Time: %d seconds" % (
- resource.getrusage(resource.RUSAGE_SELF).ru_utime +
- resource.getrusage(resource.RUSAGE_SELF).ru_stime)
-
- # http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/286222
- # http://aspn.activestate.com/ASPN/Cookbook/ "The recipes are freely
- # available for review and use."
- def _VmB(VmKey):
- """Return size from proc status in bytes."""
- _proc_status = '/proc/%d/status' % os.getpid()
- _scale = {'kB': 1024.0, 'mB': 1024.0*1024.0,
- 'KB': 1024.0, 'MB': 1024.0*1024.0}
-
- # get pseudo file /proc/<pid>/status
- try:
- t = open(_proc_status)
- v = t.read()
- t.close()
- except IOError:
- return 0 # no proc file; probably not Linux
- # get VmKey line e.g. 'VmRSS: 9999 kB\n ...'
- i = v.index(VmKey)
- v = v[i:].split(None, 3) # whitespace
- if len(v) < 3:
- return 0 # unexpected format
- # convert Vm value to bytes
- return int(float(v[1]) * _scale[v[2]])
-
- # I ran this on over a hundred GTFS files, comparing VmSize to VmRSS
- # (resident set size). The difference was always under 2% or 3MB.
- print "Virtual Memory Size: %d bytes" % _VmB('VmSize:')
-
- # Output report of where CPU time was spent.
- p = pstats.Stats('validate-stats')
- p.strip_dirs()
- p.sort_stats('cumulative').print_stats(30)
- p.sort_stats('cumulative').print_callers(30)
- return locals_for_exec['rv']
-
-
-if __name__ == '__main__':
- util.RunWithCrashHandler(main)
-
--- a/origin-src/transitfeed-1.2.5/gtfsscheduleviewer/__init__.py
+++ /dev/null
@@ -1,9 +1,1 @@
-__doc__ = """
-Package holding files for Google Transit Feed Specification Schedule Viewer.
-"""
-# This package contains the data files for schedule_viewer.py, a script that
-# comes with the transitfeed distribution. According to the thread
-# "[Distutils] distutils data_files and setuptools.pkg_resources are driving
-# me crazy" this is the easiest way to include data files. My experience
-# agrees. - Tom 2007-05-29
Binary files a/origin-src/transitfeed-1.2.5/gtfsscheduleviewer/__init__.pyc and /dev/null differ
--- a/origin-src/transitfeed-1.2.5/gtfsscheduleviewer/files/index.html
+++ /dev/null
@@ -1,706 +1,1 @@
-<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
- "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
-<html xmlns="http://www.w3.org/1999/xhtml" xmlns:v="urn:schemas-microsoft-com:vml">
- <head>
- <meta http-equiv="content-type" content="text/html; charset=utf-8"/>
- <title>[agency]</title>
- <link href="file/style.css" rel="stylesheet" type="text/css" />
- <style type="text/css">
- v\:* {
- behavior:url(#default#VML);
- }
- </style>
- <script src="http://[host]/maps?file=api&v=2&key=[key]" type="text/javascript"></script>
- <script src="/file/labeled_marker.js" type="text/javascript"></script>
- <script language="VBScript" src="/file/svgcheck.vbs"></script>
- <script type="text/javascript">
- //<![CDATA[
- var map;
- // Set to true while debugging to log HTTP requests.
- var log = false;
- var twelveHourTime = false; // set to true to see AM/PM
- var selectedRoute = null;
- var forbid_editing = [forbid_editing];
- function load() {
- if (GBrowserIsCompatible()) {
- sizeRouteList();
- var map_dom = document.getElementById("map");
- map = new GMap2(map_dom);
- map.addControl(new GLargeMapControl());
- map.addControl(new GMapTypeControl());
- map.addControl(new GOverviewMapControl());
- map.enableScrollWheelZoom();
- var bb = new GLatLngBounds(new GLatLng([min_lat], [min_lon]),new GLatLng([max_lat], [max_lon]));
- map.setCenter(bb.getCenter(), map.getBoundsZoomLevel(bb));
- map.enableDoubleClickZoom();
- initIcons();
- GEvent.addListener(map, "moveend", callbackMoveEnd);
- GEvent.addListener(map, "zoomend", callbackZoomEnd);
- callbackMoveEnd(); // Pretend we just moved to current center
- fetchRoutes();
- }
- }
-
- function callbackZoomEnd() {
- }
-
- function callbackMoveEnd() {
- // Map moved, search for stops near the center
- fetchStopsInBounds(map.getBounds());
- }
-
- /**
- * Fetch a sample of stops in the bounding box.
- */
- function fetchStopsInBounds(bounds) {
- url = "/json/boundboxstops?n=" + bounds.getNorthEast().lat()
- + "&e=" + bounds.getNorthEast().lng()
- + "&s=" + bounds.getSouthWest().lat()
- + "&w=" + bounds.getSouthWest().lng()
- + "&limit=50";
- if (log)
- GLog.writeUrl(url);
- GDownloadUrl(url, callbackDisplayStopsBackground);
- }
-
- /**
- * Displays stops returned by the server on the map. Expected to be called
- * when GDownloadUrl finishes.
- *
- * @param {String} data JSON encoded list of lists, each
- * containing a row of stops.txt
- * @param {Number} responseCode Response code from server
- */
- function callbackDisplayStops(data, responseCode) {
- if (responseCode != 200) {
- return;
- }
- clearMap();
- var stops = eval(data);
- if (stops.length == 1) {
- var marker = addStopMarkerFromList(stops[0], true);
- fetchStopInfoWindow(marker);
- } else {
- for (var i=0; i<stops.length; ++i) {
- addStopMarkerFromList(stops[i], true);
- }
- }
- }
-
- function stopTextSearchSubmit() {
- var text = document.getElementById("stopTextSearchInput").value;
- var url = "/json/stopsearch?q=" + encodeURIComponent(text);
- if (log)
- GLog.writeUrl(url);
- GDownloadUrl(url, callbackDisplayStops);
- }
-
- function tripTextSearchSubmit() {
- var text = document.getElementById("tripTextSearchInput").value;
- selectTrip(text);
- }
-
- /**
- * Add stops markers to the map and remove stops no longer in the
- * background.
- */
- function callbackDisplayStopsBackground(data, responseCode) {
- if (responseCode != 200) {
- return;
- }
- var stops = eval(data);
- // Make a list of all background markers
- var oldStopMarkers = {};
- for (var stopId in stopMarkersBackground) {
- oldStopMarkers[stopId] = 1;
- }
- // Add new markers to the map and remove from oldStopMarkers
- for (var i=0; i<stops.length; ++i) {
- var marker = addStopMarkerFromList(stops[i], false);
- if (oldStopMarkers[marker.stopId]) {
- delete oldStopMarkers[marker.stopId];
- }
- }
- // Delete all markers that remain in oldStopMarkers
- for (var stopId in oldStopMarkers) {
- GEvent.removeListener(stopMarkersBackground[stopId].clickListener);
- map.removeOverlay(stopMarkersBackground[stopId]);
- delete stopMarkersBackground[stopId];
- }
- }
-
- /**
- * Remove all overlays from the map
- */
- function clearMap() {
- boundsOfPolyLine = null;
- for (var stopId in stopMarkersSelected) {
- GEvent.removeListener(stopMarkersSelected[stopId].clickListener);
- }
- for (var stopId in stopMarkersBackground) {
- GEvent.removeListener(stopMarkersBackground[stopId].clickListener);
- }
- stopMarkersSelected = {};
- stopMarkersBackground = {};
- map.clearOverlays();
- }
-
- /**
- * Return a new GIcon used for stops
- */
- function makeStopIcon() {
- var icon = new GIcon();
- icon.iconSize = new GSize(12, 20);
- icon.shadowSize = new GSize(22, 20);
- icon.iconAnchor = new GPoint(6, 20);
- icon.infoWindowAnchor = new GPoint(5, 1);
- return icon;
- }
-
- /**
- * Initialize icons. Call once during load.
- */
- function initIcons() {
- iconSelected = makeStopIcon();
- iconSelected.image = "/file/mm_20_yellow.png";
- iconSelected.shadow = "/file/mm_20_shadow.png";
- iconBackground = makeStopIcon();
- iconBackground.image = "/file/mm_20_blue_trans.png";
- iconBackground.shadow = "/file/mm_20_shadow_trans.png";
- iconBackgroundStation = makeStopIcon();
- iconBackgroundStation.image = "/file/mm_20_red_trans.png";
- iconBackgroundStation.shadow = "/file/mm_20_shadow_trans.png";
- }
-
- var iconSelected;
- var iconBackground;
- var iconBackgroundStation;
- // Map from stopId to GMarker object for stops selected because they are
- // part of a trip, etc
- var stopMarkersSelected = {};
- // Map from stopId to GMarker object for stops found by the background
- // passive search
- var stopMarkersBackground = {};
- /**
- * Add a stop to the map, given a row from stops.txt.
- */
- function addStopMarkerFromList(list, selected, text) {
- return addStopMarker(list[0], list[1], list[2], list[3], list[4], selected, text);
- }
-
- /**
- * Add a stop to the map, returning the new marker
- */
- function addStopMarker(stopId, stopName, stopLat, stopLon, locationType, selected, text) {
- if (stopMarkersSelected[stopId]) {
- // stop was selected
- var marker = stopMarkersSelected[stopId];
- if (text) {
- oldText = marker.getText();
- if (oldText) {
- oldText = oldText + "<br>";
- }
- marker.setText(oldText + text);
- }
- return marker;
- }
- if (stopMarkersBackground[stopId]) {
- // Stop was in the background. Either delete it from the background or
- // leave it where it is.
- if (selected) {
- map.removeOverlay(stopMarkersBackground[stopId]);
- delete stopMarkersBackground[stopId];
- } else {
- return stopMarkersBackground[stopId];
- }
- }
-
- var icon;
- if (selected) {
- icon = iconSelected;
- } else if (locationType == 1) {
- icon = iconBackgroundStation;
- } else {
- icon = iconBackground;
- }
- var ll = new GLatLng(stopLat,stopLon);
- var marker;
- if (selected || text) {
- if (!text) {
- text = ""; // Make sure every selected icon has a text box, even if empty
- }
- var markerOpts = new Object();
- markerOpts.icon = icon;
- markerOpts.labelText = text;
- markerOpts.labelClass = "tooltip";
- markerOpts.labelOffset = new GSize(6, -20);
- marker = new LabeledMarker(ll, markerOpts);
- } else {
- marker = new GMarker(ll, {icon: icon, draggable: !forbid_editing});
- }
- marker.stopName = stopName;
- marker.stopId = stopId;
- if (selected) {
- stopMarkersSelected[stopId] = marker;
- } else {
- stopMarkersBackground[stopId] = marker;
- }
- map.addOverlay(marker);
- marker.clickListener = GEvent.addListener(marker, "click", function() {fetchStopInfoWindow(marker);});
- GEvent.addListener(marker, "dragend", function() {
- document.getElementById("edit").style.visibility = "visible";
- document.getElementById("edit_status").innerHTML = "updating...";
- changeStopLocation(marker);
- });
- return marker;
- }
-
- /**
- * Sends new location of a stop to server.
- */
- function changeStopLocation(marker) {
- var url = "/json/setstoplocation?id=" +
- encodeURIComponent(marker.stopId) +
- "&lat=" + encodeURIComponent(marker.getLatLng().lat()) +
- "&lng=" + encodeURIComponent(marker.getLatLng().lng());
- GDownloadUrl(url, function(data, responseCode) {
- document.getElementById("edit_status").innerHTML = unescape(data);
- } );
- if (log)
- GLog.writeUrl(url);
- }
-
- /**
- * Saves the current state of the data file opened at server side to file.
- */
- function saveData() {
- var url = "/json/savedata";
- GDownloadUrl(url, function(data, responseCode) {
- document.getElementById("edit_status").innerHTML = data;} );
- if (log)
- GLog.writeUrl(url);
- }
-
- /**
- * Fetch the next departing trips from the stop for display in an info
- * window.
- */
- function fetchStopInfoWindow(marker) {
- var url = "/json/stoptrips?stop=" + encodeURIComponent(marker.stopId) + "&time=" + parseTimeInput();
- GDownloadUrl(url, function(data, responseCode) {
- callbackDisplayStopInfoWindow(marker, data, responseCode); } );
- if (log)
- GLog.writeUrl(url);
- }
-
- function callbackDisplayStopInfoWindow(marker, data, responseCode) {
- if (responseCode != 200) {
- return;
- }
- var timeTrips = eval(data);
- var html = "<b>" + marker.stopName + "</b> (" + marker.stopId + ")<br>";
- var latLng = marker.getLatLng();
- html = html + "(" + latLng.lat() + ", " + latLng.lng() + ")<br>";
- html = html + "<table><tr><th>service_id<th>time<th>name</tr>";
- for (var i=0; i < timeTrips.length; ++i) {
- var time = timeTrips[i][0];
- var tripid = timeTrips[i][1][0];
- var tripname = timeTrips[i][1][1];
- var service_id = timeTrips[i][1][2];
- var timepoint = timeTrips[i][2];
- html = html + "<tr onClick='map.closeInfoWindow();selectTrip(\"" +
- tripid + "\")'>" +
- "<td>" + service_id +
- "<td align='right'>" + (timepoint ? "" : "~") +
- formatTime(time) + "<td>" + tripname + "</tr>";
- }
- html = html + "</table>";
- marker.openInfoWindowHtml(html);
- }
-
- function leadingZero(digit) {
- if (digit < 10)
- return "0" + digit;
- else
- return "" + digit;
- }
-
- function formatTime(secSinceMidnight) {
- var hours = Math.floor(secSinceMidnight / 3600);
- var suffix = "";
-
- if (twelveHourTime) {
- suffix = (hours >= 12) ? "p" : "a";
- suffix += (hours >= 24) ? " next day" : "";
- hours = hours % 12;
- if (hours == 0)
- hours = 12;
- }
- var minutes = Math.floor(secSinceMidnight / 60) % 60;
- var seconds = secSinceMidnight % 60;
- if (seconds == 0) {
- return hours + ":" + leadingZero(minutes) + suffix;
- } else {
- return hours + ":" + leadingZero(minutes) + ":" + leadingZero(seconds) + suffix;
- }
- }
-
- function parseTimeInput() {
- var text = document.getElementById("timeInput").value;
- var m = text.match(/([012]?\d):([012345]?\d)(:([012345]?\d))?/);
- if (m) {
- var seconds = parseInt(m[1], 10) * 3600;
- seconds += parseInt(m[2], 10) * 60;
- if (m[4]) {
- seconds += parseInt(m[4], 10);
- }
- return seconds;
- } else {
- if (log)
- GLog.write("Couldn't match " + text);
- }
- }
-
- /**
- * Create a string of dots that gets longer with the log of count.
- */
- function countToRepeatedDots(count) {
- // Compute log2(count) + 1; 0.693148 is approximately ln(2)
- var logCount = Math.ceil(Math.log(count) / 0.693148) + 1;
- return new Array(logCount + 1).join(".");
- }
-
- function fetchRoutes() {
- url = "/json/routes";
- if (log)
- GLog.writeUrl(url);
- GDownloadUrl(url, callbackDisplayRoutes);
- }
-
- function callbackDisplayRoutes(data, responseCode) {
- if (responseCode != 200) {
- return;
- }
- var routes = eval(data);
- var routesList = document.getElementById("routeList");
- while (routesList.hasChildNodes()) {
- routesList.removeChild(routesList.firstChild);
- }
- for (i = 0; i < routes.length; ++i) {
- var routeId = routes[i][0];
- var shortName = document.createElement("span");
- shortName.className = "shortName";
- shortName.appendChild(document.createTextNode(routes[i][1] + " "));
- var routeName = routes[i][2];
- var elem = document.createElement("div");
- elem.appendChild(shortName);
- elem.appendChild(document.createTextNode(routeName));
- elem.id = "route_" + routeId;
- elem.className = "routeChoice";
- elem.title = routeName;
- GEvent.addDomListener(elem, "click", makeClosure(selectRoute, routeId));
-
- var routeContainer = document.createElement("div");
- routeContainer.id = "route_container_" + routeId;
- routeContainer.className = "routeContainer";
- routeContainer.appendChild(elem);
- routesList.appendChild(routeContainer);
- }
- }
-
- function selectRoute(routeId) {
- var routesList = document.getElementById("routeList");
- routeSpans = routesList.getElementsByTagName("div");
- for (var i = 0; i < routeSpans.length; ++i) {
- if (routeSpans[i].className == "routeChoiceSelected") {
- routeSpans[i].className = "routeChoice";
- }
- }
-
- // remove any previously-expanded route
- var tripInfo = document.getElementById("tripInfo");
- if (tripInfo)
- tripInfo.parentNode.removeChild(tripInfo);
-
- selectedRoute = routeId;
- var span = document.getElementById("route_" + routeId);
- span.className = "routeChoiceSelected";
- fetchPatterns(routeId);
- }
-
- function fetchPatterns(routeId) {
- url = "/json/routepatterns?route=" + encodeURIComponent(routeId) + "&time=" + parseTimeInput();
- if (log)
- GLog.writeUrl(url);
- GDownloadUrl(url, callbackDisplayPatterns);
- }
-
- function callbackDisplayPatterns(data, responseCode) {
- if (responseCode != 200) {
- return;
- }
- var div = document.createElement("div");
- div.className = "tripSection";
- div.id = "tripInfo";
- var firstTrip = null;
- var patterns = eval(data);
- clearMap();
- for (i = 0; i < patterns.length; ++i) {
- patternDiv = document.createElement("div");
- patternDiv.className = 'patternSection';
- div.appendChild(patternDiv);
- var pat = patterns[i]; // [patName, patId, len(early trips), trips, len(later trips), has_non_zero_trip_type]
- if (pat[5] == '1') {
- patternDiv.className += " unusualPattern";
- }
- patternDiv.appendChild(document.createTextNode(pat[0]));
- patternDiv.appendChild(document.createTextNode(", " + (pat[2] + pat[3].length + pat[4]) + " trips: "));
- if (pat[2] > 0) {
- patternDiv.appendChild(document.createTextNode(countToRepeatedDots(pat[2]) + " "));
- }
- for (j = 0; j < pat[3].length; ++j) {
- var trip = pat[3][j];
- var tripId = trip[1];
- if ((i == 0) && (j == 0))
- firstTrip = tripId;
- patternDiv.appendChild(document.createTextNode(" "));
- var span = document.createElement("span");
- span.appendChild(document.createTextNode(formatTime(trip[0])));
- span.id = "trip_" + tripId;
- GEvent.addDomListener(span, "click", makeClosure(selectTrip, tripId));
- patternDiv.appendChild(span);
- span.className = "tripChoice";
- }
- if (pat[4] > 0) {
- patternDiv.appendChild(document.createTextNode(" " + countToRepeatedDots(pat[4])));
- }
- patternDiv.appendChild(document.createElement("br"));
- }
- var route = document.getElementById("route_container_" + selectedRoute);
- route.appendChild(div);
- if (firstTrip != null)
- selectTrip(firstTrip);
- }
-
- // Needed to get around limitation in javascript scope rules.
- // See http://calculist.blogspot.com/2005/12/gotcha-gotcha.html
- function makeClosure(f, a, b, c) {
- return function() { f(a, b, c); };
- }
- function make1ArgClosure(f, a, b, c) {
- return function(x) { f(x, a, b, c); };
- }
- function make2ArgClosure(f, a, b, c) {
- return function(x, y) { f(x, y, a, b, c); };
- }
-
- function selectTrip(tripId) {
- var tripInfo = document.getElementById("tripInfo");
- if (tripInfo) {
- tripSpans = tripInfo.getElementsByTagName('span');
- for (var i = 0; i < tripSpans.length; ++i) {
- tripSpans[i].className = 'tripChoice';
- }
- }
- var span = document.getElementById("trip_" + tripId);
- // Won't find the span if a different route is selected
- if (span) {
- span.className = 'tripChoiceSelected';
- }
- clearMap();
- url = "/json/tripstoptimes?trip=" + encodeURIComponent(tripId);
- if (log)
- GLog.writeUrl(url);
- GDownloadUrl(url, callbackDisplayTripStopTimes);
- fetchTripPolyLine(tripId);
- fetchTripRows(tripId);
- }
-
- function callbackDisplayTripStopTimes(data, responseCode) {
- if (responseCode != 200) {
- return;
- }
- var stopsTimes = eval(data);
- if (!stopsTimes) return;
- displayTripStopTimes(stopsTimes[0], stopsTimes[1]);
- }
-
- function fetchTripPolyLine(tripId) {
- url = "/json/tripshape?trip=" + encodeURIComponent(tripId);
- if (log)
- GLog.writeUrl(url);
- GDownloadUrl(url, callbackDisplayTripPolyLine);
- }
-
- function callbackDisplayTripPolyLine(data, responseCode) {
- if (responseCode != 200) {
- return;
- }
- var points = eval(data);
- if (!points) return;
- displayPolyLine(points);
- }
-
- var boundsOfPolyLine = null;
- function expandBoundingBox(latLng) {
- if (boundsOfPolyLine == null) {
- boundsOfPolyLine = new GLatLngBounds(latLng, latLng);
- } else {
- boundsOfPolyLine.extend(latLng);
- }
- }
-
- /**
- * Display a line given a list of points
- *
- * @param {Array} List of lat,lng pairs
- */
- function displayPolyLine(points) {
- var linePoints = Array();
- for (i = 0; i < points.length; ++i) {
- var ll = new GLatLng(points[i][0], points[i][1]);
- expandBoundingBox(ll);
- linePoints[linePoints.length] = ll;
- }
- var polyline = new GPolyline(linePoints, "#FF0000", 4);
- map.addOverlay(polyline);
- map.setCenter(boundsOfPolyLine.getCenter(), map.getBoundsZoomLevel(boundsOfPolyLine));
- }
-
- function displayTripStopTimes(stops, times) {
- for (i = 0; i < stops.length; ++i) {
- var marker;
- if (times && times[i] != null) {
- marker = addStopMarkerFromList(stops[i], true, formatTime(times[i]));
- } else {
- marker = addStopMarkerFromList(stops[i], true);
- }
- expandBoundingBox(marker.getPoint());
- }
- map.setCenter(boundsOfPolyLine.getCenter(), map.getBoundsZoomLevel(boundsOfPolyLine));
- }
-
- function fetchTripRows(tripId) {
- url = "/json/triprows?trip=" + encodeURIComponent(tripId);
- if (log)
- GLog.writeUrl(url);
- GDownloadUrl(url, make2ArgClosure(callbackDisplayTripRows, tripId));
- }
-
- function callbackDisplayTripRows(data, responseCode, tripId) {
- if (responseCode != 200) {
- return;
- }
- var rows = eval(data);
- if (!rows) return;
- var html = "";
- for (var i = 0; i < rows.length; ++i) {
- var filename = rows[i][0];
- var row = rows[i][1];
- html += "<b>" + filename + "</b>: " + formatDictionary(row) + "<br>";
- }
- html += svgTag("/ttablegraph?height=100&trip=" + tripId, "height='115' width='100%'");
- var bottombarDiv = document.getElementById("bottombar");
- bottombarDiv.style.display = "block";
- bottombarDiv.style.height = "175px";
- bottombarDiv.innerHTML = html;
- sizeRouteList();
- }
-
- /**
- * Return HTML to embed a SVG object in this page. src is the location of
- * the SVG and attributes is inserted directly into the object or embed
- * tag.
- */
- function svgTag(src, attributes) {
- if (navigator.userAgent.toLowerCase().indexOf("msie") != -1) {
- if (isSVGControlInstalled()) {
- return "<embed pluginspage='http://www.adobe.com/svg/viewer/install/' src='" + src + "' " + attributes +"></embed>";
- } else {
- return "<p>Please install the <a href='http://www.adobe.com/svg/viewer/install/'>Adobe SVG Viewer</a> to get SVG support in IE</p>";
- }
- } else {
- return "<object data='" + src + "' type='image/svg+xml' " + attributes + "><p>No SVG support in your browser. Try Firefox 1.5 or newer or install the <a href='http://www.adobe.com/svg/viewer/install/'>Adobe SVG Viewer</a></p></object>";
- }
- }
-
- /**
- * Format an Array object containing key-value pairs into a human readable
- * string.
- */
- function formatDictionary(d) {
- var output = "";
- var first = 1;
- for (var k in d) {
- if (first) {
- first = 0;
- } else {
- output += " ";
- }
- output += "<b>" + k + "</b>=" + d[k];
- }
- return output;
- }
-
-
- function windowHeight() {
- // Standard browsers (Mozilla, Safari, etc.)
- if (self.innerHeight)
- return self.innerHeight;
- // IE 6
- if (document.documentElement && document.documentElement.clientHeight)
- return document.documentElement.clientHeight;
- // IE 5
- if (document.body)
- return document.body.clientHeight;
- // Just in case.
- return 0;
- }
-
- function sizeRouteList() {
- var bottombarHeight = 0;
- var bottombarDiv = document.getElementById('bottombar');
- if (bottombarDiv.style.display != 'none') {
- bottombarHeight = document.getElementById('bottombar').offsetHeight
- + document.getElementById('bottombar').style.marginTop;
- }
- var height = windowHeight() - document.getElementById('topbar').offsetHeight - 15 - bottombarHeight;
- document.getElementById('content').style.height = height + 'px';
- if (map) {
- // Without this displayPolyLine does not use the correct map size
- map.checkResize();
- }
- }
-
- //]]>
- </script>
- </head>
-
-<body class='sidebar-left' onload="load();" onunload="GUnload()" onresize="sizeRouteList()">
-<div id='topbar'>
-<div id="edit">
- <span id="edit_status">...</span>
- <form onSubmit="saveData(); return false;"><input value="Save" type="submit">
-</div>
-<div id="agencyHeader">[agency]</div>
-</div>
-<div id='content'>
- <div id='sidebar-wrapper'><div id='sidebar'>
- Time: <input type="text" value="8:00" width="9" id="timeInput"><br>
- <form onSubmit="stopTextSearchSubmit(); return false;">
- Find Station: <input type="text" id="stopTextSearchInput"><input value="Search" type="submit"></form><br>
- <form onSubmit="tripTextSearchSubmit(); return false;">
- Find Trip ID: <input type="text" id="tripTextSearchInput"><input value="Search" type="submit"></form><br>
- <div id="routeList">routelist</div>
- </div></div>
-
- <div id='map-wrapper'> <div id='map'></div> </div>
-</div>
-
-<div id='bottombar'>bottom bar</div>
-
-</body>
-</html>
-
--- a/origin-src/transitfeed-1.2.5/gtfsscheduleviewer/files/labeled_marker.js
+++ /dev/null
@@ -1,186 +1,1 @@
-/*
-* LabeledMarker Class
-*
-* Copyright 2007 Mike Purvis (http://uwmike.com)
-*
-* Licensed under the Apache License, Version 2.0 (the "License");
-* you may not use this file except in compliance with the License.
-* You may obtain a copy of the License at
-*
-* http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing, software
-* distributed under the License is distributed on an "AS IS" BASIS,
-* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-* See the License for the specific language governing permissions and
-* limitations under the License.
-*
-* This class extends the Maps API's standard GMarker class with the ability
-* to support markers with textual labels. Please see articles here:
-*
-* http://googlemapsbook.com/2007/01/22/extending-gmarker/
-* http://googlemapsbook.com/2007/03/06/clickable-labeledmarker/
-*/
-/**
- * Constructor for LabeledMarker, which picks up on strings from the GMarker
- * options array, and then calls the GMarker constructor.
- *
- * @param {GLatLng} latlng
- * @param {GMarkerOptions} Named optional arguments:
- * opt_opts.labelText {String} text to place in the overlay div.
- * opt_opts.labelClass {String} class to use for the overlay div.
- * (default "markerLabel")
- * opt_opts.labelOffset {GSize} label offset, the x- and y-distance between
- * the marker's latlng and the upper-left corner of the text div.
- */
-function LabeledMarker(latlng, opt_opts){
- this.latlng_ = latlng;
- this.opts_ = opt_opts;
-
- this.initText_ = opt_opts.labelText || "";
- this.labelClass_ = opt_opts.labelClass || "markerLabel";
- this.labelOffset_ = opt_opts.labelOffset || new GSize(0, 0);
-
- this.clickable_ = opt_opts.clickable || true;
-
- if (opt_opts.draggable) {
- // This version of LabeledMarker doesn't support dragging.
- opt_opts.draggable = false;
- }
-
- GMarker.apply(this, arguments);
-}
-
-
-// It's a limitation of JavaScript inheritance that we can't conveniently
-// inherit from GMarker without having to run its constructor. In order for
-// the constructor to run, it requires some dummy GLatLng.
-LabeledMarker.prototype = new GMarker(new GLatLng(0, 0));
-
-/**
- * Is called by GMap2's addOverlay method. Creates the text div and adds it
- * to the relevant parent div.
- *
- * @param {GMap2} map the map that has had this labeledmarker added to it.
- */
-LabeledMarker.prototype.initialize = function(map) {
- // Do the GMarker constructor first.
- GMarker.prototype.initialize.apply(this, arguments);
-
- this.map_ = map;
- this.setText(this.initText_);
-}
-
-/**
- * Create a new div for this label.
- */
-LabeledMarker.prototype.makeDiv_ = function(map) {
- if (this.div_) {
- return;
- }
- this.div_ = document.createElement("div");
- this.div_.className = this.labelClass_;
- this.div_.style.position = "absolute";
- this.div_.style.cursor = "pointer";
- this.map_.getPane(G_MAP_MARKER_PANE).appendChild(this.div_);
-
- if (this.clickable_) {
- /**
- * Creates a closure for passing events through to the source marker
- * This is located in here to avoid cluttering the global namespace.
- * The downside is that the local variables from initialize() continue
- * to occupy space on the stack.
- *
- * @param {Object} object to receive event trigger.
- * @param {GEventListener} event to be triggered.
- */
- function newEventPassthru(obj, event) {
- return function() {
- GEvent.trigger(obj, event);
- };
- }
-
- // Pass through events fired on the text div to the marker.
- var eventPassthrus = ['click', 'dblclick', 'mousedown', 'mouseup', 'mouseover', 'mouseout'];
- for(var i = 0; i < eventPassthrus.length; i++) {
- var name = eventPassthrus[i];
- GEvent.addDomListener(this.div_, name, newEventPassthru(this, name));
- }
- }
-}
-
-/**
- * Return the html in the div of this label, or "" if none is set
- */
-LabeledMarker.prototype.getText = function(text) {
- if (this.div_) {
- return this.div_.innerHTML;
- } else {
- return "";
- }
-}
-
-/**
- * Set the html in the div of this label to text. If text is "" or null remove
- * the div.
- */
-LabeledMarker.prototype.setText = function(text) {
- if (this.div_) {
- if (text) {
- this.div_.innerHTML = text;
- } else {
- // remove div
- GEvent.clearInstanceListeners(this.div_);
- this.div_.parentNode.removeChild(this.div_);
- this.div_ = null;
- }
- } else {
- if (text) {
- this.makeDiv_();
- this.div_.innerHTML = text;
- this.redraw();
- }
- }
-}
-
-/**
- * Move the text div based on current projection and zoom level, call the redraw()
- * handler in GMarker.
- *
- * @param {Boolean} force will be true when pixel coordinates need to be recomputed.
- */
-LabeledMarker.prototype.redraw = function(force) {
- GMarker.prototype.redraw.apply(this, arguments);
-
- if (this.div_) {
- // Calculate the DIV coordinates of two opposite corners of our bounds to
- // get the size and position of our rectangle
- var p = this.map_.fromLatLngToDivPixel(this.latlng_);
- var z = GOverlay.getZIndex(this.latlng_.lat());
-
- // Now position our div based on the div coordinates of our bounds
- this.div_.style.left = (p.x + this.labelOffset_.width) + "px";
- this.div_.style.top = (p.y + this.labelOffset_.height) + "px";
- this.div_.style.zIndex = z; // in front of the marker
- }
-}
-
-/**
- * Remove the text div from the map pane, destroy event passthrus, and calls the
- * default remove() handler in GMarker.
- */
- LabeledMarker.prototype.remove = function() {
- this.setText(null);
- GMarker.prototype.remove.apply(this, arguments);
-}
-
-/**
- * Return a copy of this overlay, for the parent Map to duplicate itself in full. This
- * is part of the Overlay interface and is used, for example, to copy everything in the
- * main view into the mini-map.
- */
-LabeledMarker.prototype.copy = function() {
- return new LabeledMarker(this.latlng_, this.opt_opts_);
-}
-
Binary files a/origin-src/transitfeed-1.2.5/gtfsscheduleviewer/files/mm_20_blue.png and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/gtfsscheduleviewer/files/mm_20_blue_trans.png and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/gtfsscheduleviewer/files/mm_20_red_trans.png and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/gtfsscheduleviewer/files/mm_20_shadow.png and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/gtfsscheduleviewer/files/mm_20_shadow_trans.png and /dev/null differ
Binary files a/origin-src/transitfeed-1.2.5/gtfsscheduleviewer/files/mm_20_yellow.png and /dev/null differ
--- a/origin-src/transitfeed-1.2.5/gtfsscheduleviewer/files/style.css
+++ /dev/null
@@ -1,162 +1,1 @@
-html { overflow: hidden; }
-html, body {
- margin: 0;
- padding: 0;
- height: 100%;
-}
-
-body { margin: 5px; }
-
-#content {
- position: relative;
- margin-top: 5px;
-}
-
-#map-wrapper {
- position: relative;
- height: 100%;
- width: auto;
- left: 0;
- top: 0;
- z-index: 100;
-}
-
-#map {
- position: relative;
- height: 100%;
- width: auto;
- border: 1px solid #aaa;
-}
-
-#sidebar-wrapper {
- position: absolute;
- height: 100%;
- width: 220px;
- top: 0;
- border: 1px solid #aaa;
- overflow: auto;
- z-index: 300;
-}
-
-#sidebar {
- position: relative;
- width: auto;
- padding: 4px;
- overflow: hidden;
-}
-
-#topbar {
- position: relative;
- padding: 2px;
- border: 1px solid #aaa;
- margin: 0;
-}
-
-#topbar h1 {
- white-space: nowrap;
- overflow: hidden;
- font-size: 14pt;
- font-weight: bold;
- font-face:
- margin: 0;
-}
-
-
-body.sidebar-right #map-wrapper { margin-right: 229px; }
-body.sidebar-right #sidebar-wrapper { right: 0; }
-
-body.sidebar-left #map { margin-left: 229px; }
-body.sidebar-left #sidebar { left: 0; }
-
-body.nosidebar #map { margin: 0; }
-body.nosidebar #sidebar { display: none; }
-
-#bottombar {
- position: relative;
- padding: 2px;
- border: 1px solid #aaa;
- margin-top: 5px;
- display: none;
-}
-
-/* holly hack for IE to get position:bottom right
- see: http://www.positioniseverything.net/abs_relbugs.html
- \*/
-* html #topbar { height: 1px; }
-/* */
-
-body {
- font-family:helvetica,arial,sans, sans-serif;
-}
-h1 {
- margin-top: 0.5em;
- margin-bottom: 0.5em;
-}
-h2 {
- margin-top: 0.2em;
- margin-bottom: 0.2em;
-}
-h3 {
- margin-top: 0.2em;
- margin-bottom: 0.2em;
-}
-.tooltip {
- white-space: nowrap;
- padding: 2px;
- color: black;
- font-size: 12px;
- background-color: white;
- border: 1px solid black;
- cursor: pointer;
- filter:alpha(opacity=60);
- -moz-opacity: 0.6;
- opacity: 0.6;
-}
-#routeList {
- border: 1px solid black;
- overflow: auto;
-}
-.shortName {
- font-size: bigger;
- font-weight: bold;
-}
-.routeChoice,.tripChoice,.routeChoiceSelected,.tripChoiceSelected {
- white-space: nowrap;
- cursor: pointer;
- padding: 0px 2px;
- color: black;
- line-height: 1.4em;
- font-size: smaller;
- overflow: hidden;
-}
-.tripChoice {
- color: blue;
-}
-.routeChoiceSelected,.tripChoiceSelected {
- background-color: blue;
- color: white;
-}
-.tripSection {
- padding-left: 0px;
- font-size: 10pt;
- background-color: lightblue;
-}
-.patternSection {
- margin-left: 8px;
- padding-left: 2px;
- border-bottom: 1px solid grey;
-}
-.unusualPattern {
- background-color: #aaa;
- color: #444;
-}
-/* Following styles are used by location_editor.py */
-#edit {
- visibility: hidden;
- float: right;
- font-size: 80%;
-}
-#edit form {
- display: inline;
-}
--- a/origin-src/transitfeed-1.2.5/gtfsscheduleviewer/files/svgcheck.vbs
+++ /dev/null
@@ -1,8 +1,1 @@
-' Copyright 1999-2000 Adobe Systems Inc. All rights reserved. Permission to redistribute
-' granted provided that this file is not modified in any way. This file is provided with
-' absolutely no warranties of any kind.
-Function isSVGControlInstalled()
- on error resume next
- isSVGControlInstalled = IsObject(CreateObject("Adobe.SVGCtl"))
-end Function
--- a/origin-src/transitfeed-1.2.5/gtfsscheduleviewer/marey_graph.py
+++ /dev/null
@@ -1,470 +1,1 @@
-#!/usr/bin/python2.5
-#
-# Copyright (C) 2007 Google Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Output svg/xml data for a marey graph
-
-Marey graphs are a visualization form typically used for timetables. Time
-is on the x-axis and position on the y-axis. This module reads data from a
-transitfeed.Schedule and creates a marey graph in svg/xml format. The graph
-shows the speed between stops for each trip of a route.
-
-TODO: This module was taken from an internal Google tool. It works but is not
-well intergrated into transitfeed and schedule_viewer. Also, it has lots of
-ugly hacks to compensate set canvas size and so on which could be cleaned up.
-
-For a little more information see (I didn't make this URL ;-)
-http://transliteracies.english.ucsb.edu/post/research-project/research-clearinghouse-individual/research-reports/the-indexical-imagination-marey%e2%80%99s-graphic-method-and-the-technological-transformation-of-writing-in-the-nineteenth-century
-
- MareyGraph: Class, keeps cache of graph data and graph properties
- and draws marey graphs in svg/xml format on request.
-
-"""
-
-import itertools
-import transitfeed
-
-
-class MareyGraph:
- """Produces and caches marey graph from transit feed data."""
-
- _MAX_ZOOM = 5.0 # change docstring of ChangeScaleFactor if this changes
- _DUMMY_SEPARATOR = 10 #pixel
-
- def __init__(self):
- # Timetablerelated state
- self._cache = str()
- self._stoplist = []
- self._tlist = []
- self._stations = []
- self._decorators = []
-
- # TODO: Initialize default values via constructor parameters
- # or via a class constants
-
- # Graph properties
- self._tspan = 30 # number of hours to display
- self._offset = 0 # starting hour
- self._hour_grid = 60 # number of pixels for an hour
- self._min_grid = 5 # number of pixels between subhour lines
-
- # Canvas properties
- self._zoomfactor = 0.9 # svg Scaling factor
- self._xoffset = 0 # move graph horizontally
- self._yoffset = 0 # move graph veritcally
- self._bgcolor = "lightgrey"
-
- # height/width of graph canvas before transform
- self._gwidth = self._tspan * self._hour_grid
-
- def Draw(self, stoplist=None, triplist=None, height=520):
- """Main interface for drawing the marey graph.
-
- If called without arguments, the data generated in the previous call
- will be used. New decorators can be added between calls.
-
- Args:
- # Class Stop is defined in transitfeed.py
- stoplist: [Stop, Stop, ...]
- # Class Trip is defined in transitfeed.py
- triplist: [Trip, Trip, ...]
-
- Returns:
- # A string that contain a svg/xml web-page with a marey graph.
- " <svg width="1440" height="520" version="1.1" ... "
- """
- output = str()
- if not triplist:
- triplist = []
- if not stoplist:
- stoplist = []
-
- if not self._cache or triplist or stoplist:
- self._gheight = height
- self._tlist=triplist
- self._slist=stoplist
- self._decorators = []
- self._stations = self._BuildStations(stoplist)
- self._cache = "%s %s %s %s" % (self._DrawBox(),
- self._DrawHours(),
- self._DrawStations(),
- self._DrawTrips(triplist))
-
-
-
- output = "%s %s %s %s" % (self._DrawHeader(),
- self._cache,
- self._DrawDecorators(),
- self._DrawFooter())
- return output
-
- def _DrawHeader(self):
- svg_header = """
- <svg width="%s" height="%s" version="1.1"
- xmlns="http://www.w3.org/2000/svg">
- <script type="text/ecmascript"><![CDATA[
- function init(evt) {
- if ( window.svgDocument == null )
- svgDocument = evt.target.ownerDocument;
- }
- var oldLine = 0;
- var oldStroke = 0;
- var hoffset= %s; // Data from python
-
- function parseLinePoints(pointnode){
- var wordlist = pointnode.split(" ");
- var xlist = new Array();
- var h;
- var m;
- // TODO: add linebreaks as appropriate
- var xstr = " Stop Times :";
- for (i=0;i<wordlist.length;i=i+2){
- var coord = wordlist[i].split(",");
- h = Math.floor(parseInt((coord[0])-20)/60);
- m = parseInt((coord[0]-20))%%60;
- xstr = xstr +" "+ (hoffset+h) +":"+m;
- }
-
- return xstr;
- }
-
- function LineClick(tripid, x) {
- var line = document.getElementById(tripid);
- if (oldLine)
- oldLine.setAttribute("stroke",oldStroke);
- oldLine = line;
- oldStroke = line.getAttribute("stroke");
-
- line.setAttribute("stroke","#fff");
-
- var dynTxt = document.getElementById("dynamicText");
- var tripIdTxt = document.createTextNode(x);
- while (dynTxt.hasChildNodes()){
- dynTxt.removeChild(dynTxt.firstChild);
- }
- dynTxt.appendChild(tripIdTxt);
- }
- ]]> </script>
- <style type="text/css"><![CDATA[
- .T { fill:none; stroke-width:1.5 }
- .TB { fill:none; stroke:#e20; stroke-width:2 }
- .Station { fill:none; stroke-width:1 }
- .Dec { fill:none; stroke-width:1.5 }
- .FullHour { fill:none; stroke:#eee; stroke-width:1 }
- .SubHour { fill:none; stroke:#ddd; stroke-width:1 }
- .Label { fill:#aaa; font-family:Helvetica,Arial,sans;
- text-anchor:middle }
- .Info { fill:#111; font-family:Helvetica,Arial,sans;
- text-anchor:start; }
- ]]></style>
- <text class="Info" id="dynamicText" x="0" y="%d"></text>
- <g id="mcanvas" transform="translate(%s,%s)">
- <g id="zcanvas" transform="scale(%s)">
-
- """ % (self._gwidth + self._xoffset + 20, self._gheight + 15,
- self._offset, self._gheight + 10,
- self._xoffset, self._yoffset, self._zoomfactor)
-
- return svg_header
-
- def _DrawFooter(self):
- return "</g></g></svg>"
-
- def _DrawDecorators(self):
- """Used to draw fancy overlays on trip graphs."""
- return " ".join(self._decorators)
-
- def _DrawBox(self):
- tmpstr = """<rect x="%s" y="%s" width="%s" height="%s"
- fill="lightgrey" stroke="%s" stroke-width="2" />
- """ % (0, 0, self._gwidth + 20, self._gheight, self._bgcolor)
- return tmpstr
-
- def _BuildStations(self, stoplist):
- """Dispatches the best algorithm for calculating station line position.
-
- Args:
- # Class Stop is defined in transitfeed.py
- stoplist: [Stop, Stop, ...]
- # Class Trip is defined in transitfeed.py
- triplist: [Trip, Trip, ...]
-
- Returns:
- # One integer y-coordinate for each station normalized between
- # 0 and X, where X is the height of the graph in pixels
- [0, 33, 140, ... , X]
- """
- stations = []
- dists = self._EuclidianDistances(stoplist)
- stations = self._CalculateYLines(dists)
- return stations
-
- def _EuclidianDistances(self,slist):
- """Calculate euclidian distances between stops.
-
- Uses the stoplists long/lats to approximate distances
- between stations and build a list with y-coordinates for the
- horizontal lines in the graph.
-
- Args:
- # Class Stop is defined in transitfeed.py
- stoplist: [Stop, Stop, ...]
-
- Returns:
- # One integer for each pair of stations
- # indicating the approximate distance
- [0,33,140, ... ,X]
- """
- e_dists2 = [transitfeed.ApproximateDistanceBetweenStops(stop, tail) for
- (stop,tail) in itertools.izip(slist, slist[1:])]
-
- return e_dists2
-
- def _CalculateYLines(self, dists):
- """Builds a list with y-coordinates for the horizontal lines in the graph.
-
- Args:
- # One integer for each pair of stations
- # indicating the approximate distance
- dists: [0,33,140, ... ,X]
-
- Returns:
- # One integer y-coordinate for each station normalized between
- # 0 and X, where X is the height of the graph in pixels
- [0, 33, 140, ... , X]
- """
- tot_dist = sum(dists)
- if tot_dist > 0:
- pixel_dist = [float(d * (self._gheight-20))/tot_dist for d in dists]
- pixel_grid = [0]+[int(pd + sum(pixel_dist[0:i])) for i,pd in
- enumerate(pixel_dist)]
- else:
- pixel_grid = []
-
- return pixel_grid
-
- def _TravelTimes(self,triplist,index=0):
- """ Calculate distances and plot stops.
-
- Uses a timetable to approximate distances
- between stations
-
- Args:
- # Class Trip is defined in transitfeed.py
- triplist: [Trip, Trip, ...]
- # (Optional) Index of Triplist prefered for timetable Calculation
- index: 3
-
- Returns:
- # One integer for each pair of stations
- # indicating the approximate distance
- [0,33,140, ... ,X]
- """
-
- def DistanceInTravelTime(dep_secs, arr_secs):
- t_dist = arr_secs-dep_secs
- if t_dist<0:
- t_dist = self._DUMMY_SEPARATOR # min separation
- return t_dist
-
- if not triplist:
- return []
-
- if 0 < index < len(triplist):
- trip = triplist[index]
- else:
- trip = triplist[0]
-
- t_dists2 = [DistanceInTravelTime(stop[3],tail[2]) for (stop,tail)
- in itertools.izip(trip.GetTimeStops(),trip.GetTimeStops()[1:])]
- return t_dists2
-
- def _AddWarning(self, str):
- print str
-
- def _DrawTrips(self,triplist,colpar=""):
- """Generates svg polylines for each transit trip.
-
- Args:
- # Class Trip is defined in transitfeed.py
- [Trip, Trip, ...]
-
- Returns:
- # A string containing a polyline tag for each trip
- ' <polyline class="T" stroke="#336633" points="433,0 ...'
- """
-
- stations = []
- if not self._stations and triplist:
- self._stations = self._CalculateYLines(self._TravelTimes(triplist))
- if not self._stations:
- self._AddWarning("Failed to use traveltimes for graph")
- self._stations = self._CalculateYLines(self._Uniform(triplist))
- if not self._stations:
- self._AddWarning("Failed to calculate station distances")
- return
-
- stations = self._stations
- tmpstrs = []
- servlist = []
- for t in triplist:
- if not colpar:
- if t.service_id not in servlist:
- servlist.append(t.service_id)
- shade = int(servlist.index(t.service_id) * (200/len(servlist))+55)
- color = "#00%s00" % hex(shade)[2:4]
- else:
- color=colpar
-
- start_offsets = [0]
- first_stop = t.GetTimeStops()[0]
-
- for j,freq_offset in enumerate(start_offsets):
- if j>0 and not colpar:
- color="purple"
- scriptcall = 'onmouseover="LineClick(\'%s\',\'Trip %s starting %s\')"' % (t.trip_id,
- t.trip_id, transitfeed.FormatSecondsSinceMidnight(t.GetStartTime()))
- tmpstrhead = '<polyline class="T" id="%s" stroke="%s" %s points="' % \
- (str(t.trip_id),color, scriptcall)
- tmpstrs.append(tmpstrhead)
-
- for i, s in enumerate(t.GetTimeStops()):
- arr_t = s[0]
- dep_t = s[1]
- if arr_t is None or dep_t is None:
- continue
- arr_x = int(arr_t/3600.0 * self._hour_grid) - self._hour_grid * self._offset
- dep_x = int(dep_t/3600.0 * self._hour_grid) - self._hour_grid * self._offset
- tmpstrs.append("%s,%s " % (int(arr_x+20), int(stations[i]+20)))
- tmpstrs.append("%s,%s " % (int(dep_x+20), int(stations[i]+20)))
- tmpstrs.append('" />')
- return "".join(tmpstrs)
-
- def _Uniform(self, triplist):
- """Fallback to assuming uniform distance between stations"""
- # This should not be neseccary, but we are in fallback mode
- longest = max([len(t.GetTimeStops()) for t in triplist])
- return [100] * longest
-
- def _DrawStations(self, color="#aaa"):
- """Generates svg with a horizontal line for each station/stop.
-
- Args:
- # Class Stop is defined in transitfeed.py
- stations: [Stop, Stop, ...]
-
- Returns:
- # A string containing a polyline tag for each stop
- " <polyline class="Station" stroke="#336633" points="20,0 ..."
- """
- stations=self._stations
- tmpstrs = []
- for y in stations:
- tmpstrs.append(' <polyline class="Station" stroke="%s" \
- points="%s,%s, %s,%s" />' %(color,20,20+y+.5,self._gwidth+20,20+y+.5))
- return "".join(tmpstrs)
-
- def _DrawHours(self):
- """Generates svg to show a vertical hour and sub-hour grid
-
- Returns:
- # A string containing a polyline tag for each grid line
- " <polyline class="FullHour" points="20,0 ..."
- """
- tmpstrs = []
- for i in range(0, self._gwidth, self._min_grid):
- if i % self._hour_grid == 0:
- tmpstrs.append('<polyline class="FullHour" points="%d,%d, %d,%d" />' \
- % (i + .5 + 20, 20, i + .5 + 20, self._gheight))
- tmpstrs.append('<text class="Label" x="%d" y="%d">%d</text>'
- % (i + 20, 20,
- (i / self._hour_grid + self._offset) % 24))
- else:
- tmpstrs.append('<polyline class="SubHour" points="%d,%d,%d,%d" />' \
- % (i + .5 + 20, 20, i + .5 + 20, self._gheight))
- return "".join(tmpstrs)
-
- def AddStationDecoration(self, index, color="#f00"):
- """Flushes existing decorations and highlights the given station-line.
-
- Args:
- # Integer, index of stop to be highlighted.
- index: 4
- # An optional string with a html color code
- color: "#fff"
- """
- tmpstr = str()
- num_stations = len(self._stations)
- ind = int(index)
- if self._stations:
- if 0<ind<num_stations:
- y = self._stations[ind]
- tmpstr = '<polyline class="Dec" stroke="%s" points="%s,%s,%s,%s" />' \
- % (color, 20, 20+y+.5, self._gwidth+20, 20+y+.5)
- self._decorators.append(tmpstr)
-
- def AddTripDecoration(self, triplist, color="#f00"):
- """Flushes existing decorations and highlights the given trips.
-
- Args:
- # Class Trip is defined in transitfeed.py
- triplist: [Trip, Trip, ...]
- # An optional string with a html color code
- color: "#fff"
- """
- tmpstr = self._DrawTrips(triplist,color)
- self._decorators.append(tmpstr)
-
- def ChangeScaleFactor(self, newfactor):
- """Changes the zoom of the graph manually.
-
- 1.0 is the original canvas size.
-
- Args:
- # float value between 0.0 and 5.0
- newfactor: 0.7
- """
- if float(newfactor) > 0 and float(newfactor) < self._MAX_ZOOM:
- self._zoomfactor = newfactor
-
- def ScaleLarger(self):
- """Increases the zoom of the graph one step (0.1 units)."""
- newfactor = self._zoomfactor + 0.1
- if float(newfactor) > 0 and float(newfactor) < self._MAX_ZOOM:
- self._zoomfactor = newfactor
-
- def ScaleSmaller(self):
- """Decreases the zoom of the graph one step(0.1 units)."""
- newfactor = self._zoomfactor - 0.1
- if float(newfactor) > 0 and float(newfactor) < self._MAX_ZOOM:
- self._zoomfactor = newfactor
-
- def ClearDecorators(self):
- """Removes all the current decorators.
- """
- self._decorators = []
-
- def AddTextStripDecoration(self,txtstr):
- tmpstr = '<text class="Info" x="%d" y="%d">%s</text>' % (0,
- 20 + self._gheight, txtstr)
- self._decorators.append(tmpstr)
-
- def SetSpan(self, first_arr, last_arr, mint=5 ,maxt=30):
- s_hour = (first_arr / 3600) - 1
- e_hour = (last_arr / 3600) + 1
- self._offset = max(min(s_hour, 23), 0)
- self._tspan = max(min(e_hour - s_hour, maxt), mint)
- self._gwidth = self._tspan * self._hour_grid
-
Binary files a/origin-src/transitfeed-1.2.5/gtfsscheduleviewer/marey_graph.pyc and /dev/null differ
--- a/origin-src/transitfeed-1.2.5/kmlparser.py
+++ /dev/null
@@ -1,147 +1,1 @@
-#!/usr/bin/python2.5
-# Copyright (C) 2007 Google Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-This package provides implementation of a converter from a kml
-file format into Google transit feed format.
-
-The KmlParser class is the main class implementing the parser.
-
-Currently only information about stops is extracted from a kml file.
-The extractor expects the stops to be represented as placemarks with
-a single point.
-"""
-
-import re
-import string
-import sys
-import transitfeed
-from transitfeed import util
-import xml.dom.minidom as minidom
-import zipfile
-
-
-class Placemark(object):
- def __init__(self):
- self.name = ""
- self.coordinates = []
-
- def IsPoint(self):
- return len(self.coordinates) == 1
-
- def IsLine(self):
- return len(self.coordinates) > 1
-
-class KmlParser(object):
- def __init__(self, stopNameRe = '(.*)'):
- """
- Args:
- stopNameRe - a regular expression to extract a stop name from a
- placemaker name
- """
- self.stopNameRe = re.compile(stopNameRe)
-
- def Parse(self, filename, feed):
- """
- Reads the kml file, parses it and updated the Google transit feed
- object with the extracted information.
-
- Args:
- filename - kml file name
- feed - an instance of Schedule class to be updated
- """
- dom = minidom.parse(filename)
- self.ParseDom(dom, feed)
-
- def ParseDom(self, dom, feed):
- """
- Parses the given kml dom tree and updates the Google transit feed object.
-
- Args:
- dom - kml dom tree
- feed - an instance of Schedule class to be updated
- """
- shape_num = 0
- for node in dom.getElementsByTagName('Placemark'):
- p = self.ParsePlacemark(node)
- if p.IsPoint():
- (lon, lat) = p.coordinates[0]
- m = self.stopNameRe.search(p.name)
- feed.AddStop(lat, lon, m.group(1))
- elif p.IsLine():
- shape_num = shape_num + 1
- shape = transitfeed.Shape("kml_shape_" + str(shape_num))
- for (lon, lat) in p.coordinates:
- shape.AddPoint(lat, lon)
- feed.AddShapeObject(shape)
-
- def ParsePlacemark(self, node):
- ret = Placemark()
- for child in node.childNodes:
- if child.nodeName == 'name':
- ret.name = self.ExtractText(child)
- if child.nodeName == 'Point' or child.nodeName == 'LineString':
- ret.coordinates = self.ExtractCoordinates(child)
- return ret
-
- def ExtractText(self, node):
- for child in node.childNodes:
- if child.nodeType == child.TEXT_NODE:
- return child.wholeText # is a unicode string
- return ""
-
- def ExtractCoordinates(self, node):
- coordinatesText = ""
- for child in node.childNodes:
- if child.nodeName == 'coordinates':
- coordinatesText = self.ExtractText(child)
- break
- ret = []
- for point in coordinatesText.split():
- coords = point.split(',')
- ret.append((float(coords[0]), float(coords[1])))
- return ret
-
-
-def main():
- usage = \
-"""%prog <input.kml> <output GTFS.zip>
-
-Reads KML file <input.kml> and creates GTFS file <output GTFS.zip> with
-placemarks in the KML represented as stops.
-"""
-
- parser = util.OptionParserLongError(
- usage=usage, version='%prog '+transitfeed.__version__)
- (options, args) = parser.parse_args()
- if len(args) != 2:
- parser.error('You did not provide all required command line arguments.')
-
- if args[0] == 'IWantMyCrash':
- raise Exception('For testCrashHandler')
-
- parser = KmlParser()
- feed = transitfeed.Schedule()
- feed.save_all_stops = True
- parser.Parse(args[0], feed)
- feed.WriteGoogleTransitFeed(args[1])
-
- print "Done."
-
-
-if __name__ == '__main__':
- util.RunWithCrashHandler(main)
-
--- a/origin-src/transitfeed-1.2.5/kmlwriter.py
+++ /dev/null
@@ -1,648 +1,1 @@
-#!/usr/bin/python2.5
-#
-# Copyright 2008 Google Inc. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""A module for writing GTFS feeds out into Google Earth KML format.
-
-For usage information run kmlwriter.py --help
-
-If no output filename is specified, the output file will be given the same
-name as the feed file (with ".kml" appended) and will be placed in the same
-directory as the input feed.
-
-The resulting KML file has a folder hierarchy which looks like this:
-
- - Stops
- * stop1
- * stop2
- - Routes
- - route1
- - Shapes
- * shape1
- * shape2
- - Patterns
- - pattern1
- - pattern2
- - Trips
- * trip1
- * trip2
- - Shapes
- * shape1
- - Shape Points
- * shape_point1
- * shape_point2
- * shape2
- - Shape Points
- * shape_point1
- * shape_point2
-
-where the hyphens represent folders and the asterisks represent placemarks.
-
-In a trip, a vehicle visits stops in a certain sequence. Such a sequence of
-stops is called a pattern. A pattern is represented by a linestring connecting
-the stops. The "Shapes" subfolder of a route folder contains placemarks for
-each shape used by a trip in the route. The "Patterns" subfolder contains a
-placemark for each unique pattern used by a trip in the route. The "Trips"
-subfolder contains a placemark for each trip in the route.
-
-Since there can be many trips and trips for the same route are usually similar,
-they are not exported unless the --showtrips option is used. There is also
-another option --splitroutes that groups the routes by vehicle type resulting
-in a folder hierarchy which looks like this at the top level:
-
- - Stops
- - Routes - Bus
- - Routes - Tram
- - Routes - Rail
- - Shapes
-"""
-
-try:
- import xml.etree.ElementTree as ET # python 2.5
-except ImportError, e:
- import elementtree.ElementTree as ET # older pythons
-import optparse
-import os.path
-import sys
-import transitfeed
-from transitfeed import util
-
-
-class KMLWriter(object):
- """This class knows how to write out a transit feed as KML.
-
- Sample usage:
- KMLWriter().Write(<transitfeed.Schedule object>, <output filename>)
-
- Attributes:
-    show_trips: True if the individual trips should be included in the routes.
-    split_routes: True if the routes should be split by type.
-    shape_points: True if individual shape points should be plotted.
-    altitude_per_sec: If greater than 0, trip linestrings gain altitude at
-      this rate per second of trip time.
-    date_filter: If not None, only trips active on this date are included.
- """
-
- def __init__(self):
- """Initialise."""
- self.show_trips = False
- self.split_routes = False
- self.shape_points = False
- self.altitude_per_sec = 0.0
- self.date_filter = None
-
- def _SetIndentation(self, elem, level=0):
- """Indented the ElementTree DOM.
-
- This is the recommended way to cause an ElementTree DOM to be
- prettyprinted on output, as per: http://effbot.org/zone/element-lib.htm
-
- Run this on the root element before outputting the tree.
-
- Args:
- elem: The element to start indenting from, usually the document root.
- level: Current indentation level for recursion.
- """
- i = "\n" + level*" "
- if len(elem):
- if not elem.text or not elem.text.strip():
- elem.text = i + " "
- for elem in elem:
- self._SetIndentation(elem, level+1)
- if not elem.tail or not elem.tail.strip():
- elem.tail = i
- else:
- if level and (not elem.tail or not elem.tail.strip()):
- elem.tail = i
-
- def _CreateFolder(self, parent, name, visible=True, description=None):
- """Create a KML Folder element.
-
- Args:
- parent: The parent ElementTree.Element instance.
- name: The folder name as a string.
- visible: Whether the folder is initially visible or not.
- description: A description string or None.
-
- Returns:
- The folder ElementTree.Element instance.
- """
- folder = ET.SubElement(parent, 'Folder')
- name_tag = ET.SubElement(folder, 'name')
- name_tag.text = name
- if description is not None:
- desc_tag = ET.SubElement(folder, 'description')
- desc_tag.text = description
- if not visible:
- visibility = ET.SubElement(folder, 'visibility')
- visibility.text = '0'
- return folder
-
- def _CreateStyleForRoute(self, doc, route):
- """Create a KML Style element for the route.
-
- The style sets the line colour if the route colour is specified. The
- line thickness is set depending on the vehicle type.
-
- Args:
- doc: The KML Document ElementTree.Element instance.
- route: The transitfeed.Route to create the style for.
-
- Returns:
- The id of the style as a string.
- """
- style_id = 'route_%s' % route.route_id
- style = ET.SubElement(doc, 'Style', {'id': style_id})
- linestyle = ET.SubElement(style, 'LineStyle')
- width = ET.SubElement(linestyle, 'width')
- type_to_width = {0: '3', # Tram
- 1: '3', # Subway
- 2: '5', # Rail
- 3: '1'} # Bus
- width.text = type_to_width.get(route.route_type, '1')
- if route.route_color:
- color = ET.SubElement(linestyle, 'color')
- red = route.route_color[0:2].lower()
- green = route.route_color[2:4].lower()
- blue = route.route_color[4:6].lower()
- color.text = 'ff%s%s%s' % (blue, green, red)
- return style_id
-
- def _CreatePlacemark(self, parent, name, style_id=None, visible=True,
- description=None):
- """Create a KML Placemark element.
-
- Args:
- parent: The parent ElementTree.Element instance.
- name: The placemark name as a string.
- style_id: If not None, the id of a style to use for the placemark.
- visible: Whether the placemark is initially visible or not.
- description: A description string or None.
-
- Returns:
- The placemark ElementTree.Element instance.
- """
- placemark = ET.SubElement(parent, 'Placemark')
- placemark_name = ET.SubElement(placemark, 'name')
- placemark_name.text = name
- if description is not None:
- desc_tag = ET.SubElement(placemark, 'description')
- desc_tag.text = description
- if style_id is not None:
- styleurl = ET.SubElement(placemark, 'styleUrl')
- styleurl.text = '#%s' % style_id
- if not visible:
- visibility = ET.SubElement(placemark, 'visibility')
- visibility.text = '0'
- return placemark
-
- def _CreateLineString(self, parent, coordinate_list):
- """Create a KML LineString element.
-
-    The points of the string are given in coordinate_list. Every element of
-    coordinate_list should be either a (longitude, latitude) tuple or a
-    (longitude, latitude, altitude) tuple.
-
- Args:
- parent: The parent ElementTree.Element instance.
- coordinate_list: The list of coordinates.
-
- Returns:
- The LineString ElementTree.Element instance or None if coordinate_list is
- empty.
- """
- if not coordinate_list:
- return None
- linestring = ET.SubElement(parent, 'LineString')
- tessellate = ET.SubElement(linestring, 'tessellate')
- tessellate.text = '1'
- if len(coordinate_list[0]) == 3:
- altitude_mode = ET.SubElement(linestring, 'altitudeMode')
- altitude_mode.text = 'absolute'
- coordinates = ET.SubElement(linestring, 'coordinates')
- if len(coordinate_list[0]) == 3:
- coordinate_str_list = ['%f,%f,%f' % t for t in coordinate_list]
- else:
- coordinate_str_list = ['%f,%f' % t for t in coordinate_list]
- coordinates.text = ' '.join(coordinate_str_list)
- return linestring
-
- def _CreateLineStringForShape(self, parent, shape):
- """Create a KML LineString using coordinates from a shape.
-
- Args:
- parent: The parent ElementTree.Element instance.
- shape: The transitfeed.Shape instance.
-
- Returns:
-      The LineString ElementTree.Element instance or None if the shape has
-      no points.
- """
- coordinate_list = [(longitude, latitude) for
- (latitude, longitude, distance) in shape.points]
- return self._CreateLineString(parent, coordinate_list)
-
- def _CreateStopsFolder(self, schedule, doc):
- """Create a KML Folder containing placemarks for each stop in the schedule.
-
- If there are no stops in the schedule then no folder is created.
-
- Args:
- schedule: The transitfeed.Schedule instance.
- doc: The KML Document ElementTree.Element instance.
-
- Returns:
- The Folder ElementTree.Element instance or None if there are no stops.
- """
- if not schedule.GetStopList():
- return None
- stop_folder = self._CreateFolder(doc, 'Stops')
- stops = list(schedule.GetStopList())
- stops.sort(key=lambda x: x.stop_name)
- for stop in stops:
- desc_items = []
- if stop.stop_desc:
- desc_items.append(stop.stop_desc)
- if stop.stop_url:
- desc_items.append('Stop info page: <a href="%s">%s</a>' % (
- stop.stop_url, stop.stop_url))
- description = '<br/>'.join(desc_items) or None
- placemark = self._CreatePlacemark(stop_folder, stop.stop_name,
- description=description)
- point = ET.SubElement(placemark, 'Point')
- coordinates = ET.SubElement(point, 'coordinates')
- coordinates.text = '%.6f,%.6f' % (stop.stop_lon, stop.stop_lat)
- return stop_folder
-
- def _CreateRoutePatternsFolder(self, parent, route,
- style_id=None, visible=True):
- """Create a KML Folder containing placemarks for each pattern in the route.
-
- A pattern is a sequence of stops used by one of the trips in the route.
-
-    If there are no patterns for the route then no folder is created and None
- is returned.
-
- Args:
- parent: The parent ElementTree.Element instance.
- route: The transitfeed.Route instance.
- style_id: The id of a style to use if not None.
- visible: Whether the folder is initially visible or not.
-
- Returns:
- The Folder ElementTree.Element instance or None if there are no patterns.
- """
- pattern_id_to_trips = route.GetPatternIdTripDict()
- if not pattern_id_to_trips:
- return None
-
- # sort by number of trips using the pattern
- pattern_trips = pattern_id_to_trips.values()
- pattern_trips.sort(lambda a, b: cmp(len(b), len(a)))
-
- folder = self._CreateFolder(parent, 'Patterns', visible)
- for n, trips in enumerate(pattern_trips):
- trip_ids = [trip.trip_id for trip in trips]
- name = 'Pattern %d (trips: %d)' % (n+1, len(trips))
- description = 'Trips using this pattern (%d in total): %s' % (
- len(trips), ', '.join(trip_ids))
- placemark = self._CreatePlacemark(folder, name, style_id, visible,
- description)
- coordinates = [(stop.stop_lon, stop.stop_lat)
- for stop in trips[0].GetPattern()]
- self._CreateLineString(placemark, coordinates)
- return folder
-
- def _CreateRouteShapesFolder(self, schedule, parent, route,
- style_id=None, visible=True):
- """Create a KML Folder for the shapes of a route.
-
- The folder contains a placemark for each shape referenced by a trip in the
- route. If there are no such shapes, no folder is created and None is
- returned.
-
- Args:
- schedule: The transitfeed.Schedule instance.
- parent: The parent ElementTree.Element instance.
- route: The transitfeed.Route instance.
- style_id: The id of a style to use if not None.
- visible: Whether the placemark is initially visible or not.
-
- Returns:
- The Folder ElementTree.Element instance or None.
- """
- shape_id_to_trips = {}
- for trip in route.trips:
- if trip.shape_id:
- shape_id_to_trips.setdefault(trip.shape_id, []).append(trip)
- if not shape_id_to_trips:
- return None
-
- # sort by the number of trips using the shape
- shape_id_to_trips_items = shape_id_to_trips.items()
- shape_id_to_trips_items.sort(lambda a, b: cmp(len(b[1]), len(a[1])))
-
- folder = self._CreateFolder(parent, 'Shapes', visible)
- for shape_id, trips in shape_id_to_trips_items:
- trip_ids = [trip.trip_id for trip in trips]
- name = '%s (trips: %d)' % (shape_id, len(trips))
- description = 'Trips using this shape (%d in total): %s' % (
- len(trips), ', '.join(trip_ids))
- placemark = self._CreatePlacemark(folder, name, style_id, visible,
- description)
- self._CreateLineStringForShape(placemark, schedule.GetShape(shape_id))
- return folder
-
- def _CreateRouteTripsFolder(self, parent, route, style_id=None, schedule=None):
- """Create a KML Folder containing all the trips in the route.
-
- The folder contains a placemark for each of these trips. If there are no
- trips in the route, no folder is created and None is returned.
-
- Args:
- parent: The parent ElementTree.Element instance.
- route: The transitfeed.Route instance.
-      style_id: A style id string for the placemarks or None.
-      schedule: The transitfeed.Schedule instance, or None.
-
- Returns:
- The Folder ElementTree.Element instance or None.
- """
- if not route.trips:
- return None
- trips = list(route.trips)
- trips.sort(key=lambda x: x.trip_id)
- trips_folder = self._CreateFolder(parent, 'Trips', visible=False)
- for trip in trips:
- if (self.date_filter and
- not trip.service_period.IsActiveOn(self.date_filter)):
- continue
-
- if trip.trip_headsign:
- description = 'Headsign: %s' % trip.trip_headsign
- else:
- description = None
-
- coordinate_list = []
- for secs, stoptime, tp in trip.GetTimeInterpolatedStops():
- if self.altitude_per_sec > 0:
- coordinate_list.append((stoptime.stop.stop_lon, stoptime.stop.stop_lat,
- (secs - 3600 * 4) * self.altitude_per_sec))
- else:
- coordinate_list.append((stoptime.stop.stop_lon,
- stoptime.stop.stop_lat))
- placemark = self._CreatePlacemark(trips_folder,
- trip.trip_id,
- style_id=style_id,
- visible=False,
- description=description)
- self._CreateLineString(placemark, coordinate_list)
- return trips_folder
-
- def _CreateRoutesFolder(self, schedule, doc, route_type=None):
- """Create a KML Folder containing routes in a schedule.
-
- The folder contains a subfolder for each route in the schedule of type
- route_type. If route_type is None, then all routes are selected. Each
- subfolder contains a flattened graph placemark, a route shapes placemark
- and, if show_trips is True, a subfolder containing placemarks for each of
- the trips in the route.
-
- If there are no routes in the schedule then no folder is created and None
- is returned.
-
- Args:
- schedule: The transitfeed.Schedule instance.
- doc: The KML Document ElementTree.Element instance.
- route_type: The route type integer or None.
-
- Returns:
- The Folder ElementTree.Element instance or None.
- """
-
- def GetRouteName(route):
- """Return a placemark name for the route.
-
- Args:
- route: The transitfeed.Route instance.
-
- Returns:
- The name as a string.
- """
- name_parts = []
- if route.route_short_name:
- name_parts.append('<b>%s</b>' % route.route_short_name)
- if route.route_long_name:
- name_parts.append(route.route_long_name)
- return ' - '.join(name_parts) or route.route_id
-
- def GetRouteDescription(route):
- """Return a placemark description for the route.
-
- Args:
- route: The transitfeed.Route instance.
-
- Returns:
- The description as a string.
- """
- desc_items = []
- if route.route_desc:
- desc_items.append(route.route_desc)
- if route.route_url:
- desc_items.append('Route info page: <a href="%s">%s</a>' % (
- route.route_url, route.route_url))
- description = '<br/>'.join(desc_items)
- return description or None
-
- routes = [route for route in schedule.GetRouteList()
- if route_type is None or route.route_type == route_type]
- if not routes:
- return None
- routes.sort(key=lambda x: GetRouteName(x))
-
- if route_type is not None:
- route_type_names = {0: 'Tram, Streetcar or Light rail',
- 1: 'Subway or Metro',
- 2: 'Rail',
- 3: 'Bus',
- 4: 'Ferry',
- 5: 'Cable car',
- 6: 'Gondola or suspended cable car',
- 7: 'Funicular'}
- type_name = route_type_names.get(route_type, str(route_type))
- folder_name = 'Routes - %s' % type_name
- else:
- folder_name = 'Routes'
- routes_folder = self._CreateFolder(doc, folder_name, visible=False)
-
- for route in routes:
- style_id = self._CreateStyleForRoute(doc, route)
- route_folder = self._CreateFolder(routes_folder,
- GetRouteName(route),
- description=GetRouteDescription(route))
- self._CreateRouteShapesFolder(schedule, route_folder, route,
- style_id, False)
- self._CreateRoutePatternsFolder(route_folder, route, style_id, False)
- if self.show_trips:
- self._CreateRouteTripsFolder(route_folder, route, style_id, schedule)
- return routes_folder
-
- def _CreateShapesFolder(self, schedule, doc):
- """Create a KML Folder containing all the shapes in a schedule.
-
- The folder contains a placemark for each shape. If there are no shapes in
- the schedule then the folder is not created and None is returned.
-
- Args:
- schedule: The transitfeed.Schedule instance.
- doc: The KML Document ElementTree.Element instance.
-
- Returns:
- The Folder ElementTree.Element instance or None.
- """
- if not schedule.GetShapeList():
- return None
- shapes_folder = self._CreateFolder(doc, 'Shapes')
- shapes = list(schedule.GetShapeList())
- shapes.sort(key=lambda x: x.shape_id)
- for shape in shapes:
- placemark = self._CreatePlacemark(shapes_folder, shape.shape_id)
- self._CreateLineStringForShape(placemark, shape)
- if self.shape_points:
- self._CreateShapePointFolder(shapes_folder, shape)
- return shapes_folder
-
- def _CreateShapePointFolder(self, shapes_folder, shape):
- """Create a KML Folder containing all the shape points in a shape.
-
-    The folder contains a placemark for each shape point.
-
- Args:
- shapes_folder: A KML Shape Folder ElementTree.Element instance
- shape: The shape to plot.
-
- Returns:
- The Folder ElementTree.Element instance or None.
- """
-
- folder_name = shape.shape_id + ' Shape Points'
- folder = self._CreateFolder(shapes_folder, folder_name, visible=False)
- for (index, (lat, lon, dist)) in enumerate(shape.points):
- placemark = self._CreatePlacemark(folder, str(index+1))
- point = ET.SubElement(placemark, 'Point')
- coordinates = ET.SubElement(point, 'coordinates')
- coordinates.text = '%.6f,%.6f' % (lon, lat)
- return folder
-
- def Write(self, schedule, output_file):
- """Writes out a feed as KML.
-
- Args:
- schedule: A transitfeed.Schedule object containing the feed to write.
- output_file: The name of the output KML file, or file object to use.
- """
- # Generate the DOM to write
- root = ET.Element('kml')
- root.attrib['xmlns'] = 'http://earth.google.com/kml/2.1'
- doc = ET.SubElement(root, 'Document')
- open_tag = ET.SubElement(doc, 'open')
- open_tag.text = '1'
- self._CreateStopsFolder(schedule, doc)
- if self.split_routes:
- route_types = set()
- for route in schedule.GetRouteList():
- route_types.add(route.route_type)
- route_types = list(route_types)
- route_types.sort()
- for route_type in route_types:
- self._CreateRoutesFolder(schedule, doc, route_type)
- else:
- self._CreateRoutesFolder(schedule, doc)
- self._CreateShapesFolder(schedule, doc)
-
- # Make sure we pretty-print
- self._SetIndentation(root)
-
- # Now write the output
- if isinstance(output_file, file):
- output = output_file
- else:
- output = open(output_file, 'w')
- output.write("""<?xml version="1.0" encoding="UTF-8"?>\n""")
- ET.ElementTree(root).write(output, 'utf-8')
-
-
-def main():
- usage = \
-'''%prog [options] <input GTFS.zip> [<output.kml>]
-
-Reads GTFS file or directory <input GTFS.zip> and creates a KML file
-<output.kml> that contains the geographical features of the input. If
-<output.kml> is omitted a default filename is picked based on
-<input GTFS.zip>. By default the KML contains all stops and shapes.
-'''
-
- parser = util.OptionParserLongError(
- usage=usage, version='%prog '+transitfeed.__version__)
- parser.add_option('-t', '--showtrips', action='store_true',
- dest='show_trips',
- help='include the individual trips for each route')
- parser.add_option('-a', '--altitude_per_sec', action='store', type='float',
- dest='altitude_per_sec',
- help='if greater than 0 trips are drawn with time a