#167 Render percentages on the site_usage graphs

file:a/.gitignore -> file:b/.gitignore
*.py[co]
*.py~
.gitignore
ckan.log

# Packages
*.egg
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg

# Private info
credentials.json
token.dat

# Installer logs
pip-log.txt

# Unit test / coverage reports
.coverage
.tox

# Translations
*.mo

# Mr Developer
.mr.developer.cfg
   
file:a/README.rst -> file:b/README.rst
ckanext-ga-report
=================

**Status:** Development

**CKAN Version:** 1.7.1+


Overview
--------

For creating detailed reports of CKAN analytics, including totals per group.

Whereas ckanext-googleanalytics focuses on providing page view stats for a recent period and for all time (aimed at end users), ckanext-ga-report is more interested in building regular periodic reports (more for site managers to monitor).

Contents of this extension:

* Use the CLI tool to download Google Analytics data for each time period into this extension's database tables

* Users can view the data as web page reports


Installation
------------
   
1. Activate your CKAN python environment and install this extension's software::

       $ pyenv/bin/activate
       $ pip install -e git+https://github.com/datagovuk/ckanext-ga-report.git#egg=ckanext-ga-report

2. Ensure your development.ini (or similar) contains the info about your Google Analytics account and configuration::

       googleanalytics.id = UA-1010101-1
       googleanalytics.account = Account name (e.g. data.gov.uk, see top level item at https://www.google.com/analytics)
       googleanalytics.token.filepath = ~/pyenv/token.dat
       ga-report.period = monthly
       ga-report.bounce_url = /

   Note that your credentials will be readable by system administrators on your server. Rather than use sensitive account details, it is suggested you give access to the GA account to a new Google account that you create just for this purpose. The ga-report.bounce_url option specifies a particular path to record the bounce rate for; typically it is / (the home page).

3. Set up this extension's database tables using a paster command. (Ensure your CKAN pyenv is still activated, run the command from ``src/ckanext-ga-report``, and alter the ``--config`` option to point to your site config file)::

       $ paster initdb --config=../ckan/development.ini

4. Enable the extension in your CKAN config file by adding it to ``ckan.plugins``::

       ckan.plugins = ga-report


Troubleshooting
---------------

* ``(ProgrammingError) relation "ga_url" does not exist``

  This means that the ``paster initdb`` step has not been run successfully. Refer to the installation instructions for this extension.
   
   
Authorization
-------------

Before you can access the data, you need to set up the OAuth details, which you can do by following the `instructions <https://developers.google.com/analytics/resources/tutorials/hello-analytics-api>`_. The outcome is a file called credentials.json, which should look like credentials.json.template with the relevant fields completed. The steps are reproduced below for convenience:

1. Visit the `Google APIs Console <https://code.google.com/apis/console>`_

2. Sign in and create a project or use an existing project.

3. In the `Services pane <https://code.google.com/apis/console#:services>`_, activate Analytics API for your project. If prompted, read and accept the terms of service.

4. Go to the `API Access pane <https://code.google.com/apis/console/#:access>`_

5. Click Create an OAuth 2.0 client ID....

6. Fill out the Branding Information fields and click Next.

7. In Client ID Settings, set Application type to Installed application.

8. Click Create client ID.

9. The details you need below are Client ID, Client secret, and Redirect URIs.


Once you have set up your credentials.json file you can generate an OAuth token file by using the following command, which will store your OAuth token in a file called token.dat once you have finished giving permission in the browser::

    $ paster getauthtoken --config=../ckan/development.ini

Now ensure you reference the correct path to your token.dat in your CKAN config file (e.g. development.ini)::

    googleanalytics.token.filepath = ~/pyenv/token.dat
   
   
Tutorial
--------

Download some GA data and store it in CKAN's database. (Ensure your CKAN pyenv is still activated, run the command from ``src/ckanext-ga-report``, and alter the ``--config`` option to point to your site config file)::

    $ paster loadanalytics latest --config=../ckan/development.ini

The value after ``loadanalytics`` is how much data you want to retrieve; it can be

* **all** - data for all time (since 2010)

* **latest** - (default) just the 'latest' data

* **YYYY-MM** - just data for the specific month
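
For example, to load the data for a single month (the month shown is illustrative)::

    $ paster loadanalytics 2012-10 --config=../ckan/development.ini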
   
   
   
Software Licence
================

This software is developed by the Cabinet Office. It is Crown Copyright and opened up under the Open Government Licence (OGL) (which is compatible with the Creative Commons Attribution License).

OGL terms: http://www.nationalarchives.gov.uk/doc/open-government-licence/
   
import logging
import datetime
import os

from pylons import config

from ckan.lib.cli import CkanCommand
# No other CKAN imports allowed until _load_config is run,
# or logging is disabled


class InitDB(CkanCommand):
    """Initialise the extension's database tables
    """
    summary = __doc__.split('\n')[0]
    usage = __doc__
    max_args = 0
    min_args = 0

    def command(self):
        self._load_config()

        import ckan.model as model
        model.Session.remove()
        model.Session.configure(bind=model.meta.engine)
        log = logging.getLogger('ckanext.ga-report')

        import ga_model
        ga_model.init_tables()
        log.info("DB tables are setup")


class GetAuthToken(CkanCommand):
    """Gets the Google auth token

    Usage: paster getauthtoken <credentials_file>

    Where <credentials_file> is the file name containing the details
    for the service (obtained from https://code.google.com/apis/console).
    By default this is set to credentials.json
    """
    summary = __doc__.split('\n')[0]
    usage = __doc__
    max_args = 0
    min_args = 0

    def command(self):
        """
        In this case we don't want a valid service, but rather just to
        force the user through the auth flow. We allow this to complete to
        act as a form of verification instead of just getting the token and
        assuming it is correct.
        """
        from ga_auth import init_service
        init_service('token.dat',
                     self.args[0] if self.args
                     else 'credentials.json')
   
class FixTimePeriods(CkanCommand):
    """
    Fixes the 'All' records for GA_Urls

    It is possible that older urls that haven't recently been visited
    do not have All records. This command will traverse through those
    records and generate valid All records for them.
    """
    summary = __doc__.split('\n')[0]
    usage = __doc__
    max_args = 0
    min_args = 0

    def __init__(self, name):
        super(FixTimePeriods, self).__init__(name)

    def command(self):
        import ckan.model as model
        from ga_model import post_update_url_stats
        self._load_config()
        model.Session.remove()
        model.Session.configure(bind=model.meta.engine)

        log = logging.getLogger('ckanext.ga_report')

        log.info("Updating 'All' records for old URLs")
        post_update_url_stats()
        log.info("Processing complete")
   
   
   
class LoadAnalytics(CkanCommand):
    """Get data from the Google Analytics API and save it
    in the ga_model

    Usage: paster loadanalytics <time-period>

    Where <time-period> is:
        all     - data for all time
        latest  - (default) just the 'latest' data
        YYYY-MM - just data for the specific month
    """
    summary = __doc__.split('\n')[0]
    usage = __doc__
    max_args = 1
    min_args = 0

    def __init__(self, name):
        super(LoadAnalytics, self).__init__(name)
        self.parser.add_option('-d', '--delete-first',
                               action='store_true',
                               default=False,
                               dest='delete_first',
                               help='Delete data for the period first')
        self.parser.add_option('-s', '--skip_url_stats',
                               action='store_true',
                               default=False,
                               dest='skip_url_stats',
                               help='Skip the download of URL data - just do site-wide stats')

    def command(self):
        self._load_config()

        from download_analytics import DownloadAnalytics
        from ga_auth import (init_service, get_profile_id)

        ga_token_filepath = os.path.expanduser(config.get('googleanalytics.token.filepath', ''))
        if not ga_token_filepath:
            print 'ERROR: In the CKAN config you need to specify the filepath of the ' \
                  'Google Analytics token file under key: googleanalytics.token.filepath'
            return

        try:
            svc = init_service(ga_token_filepath, None)
        except TypeError:
            print ('Have you correctly run the getauthtoken task and '
                   'specified the correct token file in the CKAN config under '
                   '"googleanalytics.token.filepath"?')
            return

        downloader = DownloadAnalytics(svc, profile_id=get_profile_id(svc),
                                       delete_first=self.options.delete_first,
                                       skip_url_stats=self.options.skip_url_stats)

        time_period = self.args[0] if self.args else 'latest'
        if time_period == 'all':
            downloader.all_()
        elif time_period == 'latest':
            downloader.latest()
        else:
            # The month to use
            for_date = datetime.datetime.strptime(time_period, '%Y-%m')
            downloader.specific_month(for_date)
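
# Example invocations (the ini path is illustrative):
#
#   paster loadanalytics latest --config=../ckan/development.ini
#   paster loadanalytics 2012-10 --delete-first --config=../ckan/development.ini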
   
import re
import csv
import sys
import json
import logging
import operator
import collections
from ckan.lib.base import (BaseController, c, g, render, request, response, abort)

import sqlalchemy
from sqlalchemy import func, cast, Integer
import ckan.model as model
from ga_model import GA_Url, GA_Stat, GA_ReferralStat, GA_Publisher

log = logging.getLogger('ckanext.ga-report')

DOWNLOADS_AVAILABLE_FROM = '2012-12'
   
def _get_month_name(strdate):
    import calendar
    from time import strptime
    d = strptime(strdate, '%Y-%m')
    return '%s %s' % (calendar.month_name[d.tm_mon], d.tm_year)

def _get_unix_epoch(strdate):
    from time import strptime, mktime
    d = strptime(strdate, '%Y-%m')
    return int(mktime(d))
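
# For example (illustrative value): _get_unix_epoch('2012-10') returns the
# local-time Unix timestamp for 2012-10-01 00:00, which the graphs use as an
# x-axis value.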
   
def _month_details(cls, stat_key=None):
    '''
    Returns a list of all the periods for which we have data. Unfortunately
    this knows too much about the type of the cls being passed, as GA_Url has
    a more complex query.

    This may need extending if we add a period_name to the stats
    '''
    months = []
    day = None

    q = model.Session.query(cls.period_name, cls.period_complete_day)\
        .filter(cls.period_name != 'All').distinct(cls.period_name)
    if stat_key:
        q = q.filter(cls.stat_name == stat_key)

    vals = q.order_by("period_name desc").all()

    if vals and vals[0][1]:
        day = int(vals[0][1])
        ordinal = 'th' if 11 <= day <= 13 \
            else {1: 'st', 2: 'nd', 3: 'rd'}.get(day % 10, 'th')
        day = "{day}{ordinal}".format(day=day, ordinal=ordinal)

    for m in vals:
        months.append((m[0], _get_month_name(m[0])))

    return months, day
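
# Illustrative return value of _month_details:
#   ([('2012-11', 'November 2012'), ('2012-10', 'October 2012')], '25th')
# i.e. the periods newest-first, plus the ordinal day up to which the most
# recent period is complete.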
   
   
class GaReport(BaseController):

    def csv(self, month):
        import csv

        q = model.Session.query(GA_Stat).filter(GA_Stat.stat_name!='Downloads')
        if month != 'all':
            q = q.filter(GA_Stat.period_name==month)
        entries = q.order_by('GA_Stat.period_name, GA_Stat.stat_name, GA_Stat.key').all()

        response.headers['Content-Type'] = "text/csv; charset=utf-8"
        response.headers['Content-Disposition'] = str('attachment; filename=stats_%s.csv' % (month,))

        writer = csv.writer(response)
        writer.writerow(["Period", "Statistic", "Key", "Value"])

        for entry in entries:
            writer.writerow([entry.period_name.encode('utf-8'),
                             entry.stat_name.encode('utf-8'),
                             entry.key.encode('utf-8'),
                             entry.value.encode('utf-8')])


    def index(self):

        # Get the month details by fetching distinct values and determining the
        # month names from the values.
        c.months, c.day = _month_details(GA_Stat)

        # Work out which month to show, based on query params of the first item
        c.month_desc = 'all months'
        c.month = request.params.get('month', '')
        if c.month:
            c.month_desc = ''.join([m[1] for m in c.months if m[0]==c.month])

        q = model.Session.query(GA_Stat).\
            filter(GA_Stat.stat_name=='Totals')
        if c.month:
            q = q.filter(GA_Stat.period_name==c.month)
        entries = q.order_by('ga_stat.key').all()

        def clean_key(key, val):
            if key in ['Average time on site', 'Pages per visit', 'New visits', 'Bounce rate (home page)']:
                val = "%.2f" % round(float(val), 2)
                if key == 'Average time on site':
                    mins, secs = divmod(float(val), 60)
                    hours, mins = divmod(mins, 60)
                    val = '%02d:%02d:%02d (%s seconds) ' % (hours, mins, secs, val)
                if key in ['New visits', 'Bounce rate (home page)']:
                    val = "%s%%" % val
            if key in ['Total page views', 'Total visits']:
                val = int(val)

            return key, val
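
        # For example (illustrative values):
        #   clean_key('New visits', '10.5567')  -> ('New visits', '10.56%')
        #   clean_key('Total visits', 1234.0)   -> ('Total visits', 1234)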
   
        # Query historic values for sparkline rendering
        sparkline_query = model.Session.query(GA_Stat)\
            .filter(GA_Stat.stat_name=='Totals')\
            .order_by(GA_Stat.period_name)
        sparkline_data = {}
        for x in sparkline_query:
            sparkline_data[x.key] = sparkline_data.get(x.key, [])
            key, val = clean_key(x.key, float(x.value))
            tooltip = '%s: %s' % (_get_month_name(x.period_name), val)
            sparkline_data[x.key].append((tooltip, x.value))
        # Trim the latest month, as it looks like a huge dropoff
        for key in sparkline_data:
            sparkline_data[key] = sparkline_data[key][:-1]

        c.global_totals = []
        if c.month:
            for e in entries:
                key, val = clean_key(e.key, e.value)
                sparkline = sparkline_data[e.key]
                c.global_totals.append((key, val, sparkline))
        else:
            d = collections.defaultdict(list)
            for e in entries:
                d[e.key].append(float(e.value))
            for k, v in d.iteritems():
                if k in ['Total page views', 'Total visits']:
                    v = sum(v)
                else:
                    v = float(sum(v))/float(len(v))
                sparkline = sparkline_data[k]
                key, val = clean_key(k, v)

                c.global_totals.append((key, val, sparkline))

        # Sort the global totals into a more pleasant order
        def sort_func(x):
            key = x[0]
            total_order = ['Total page views', 'Total visits', 'Pages per visit']
            if key in total_order:
                return total_order.index(key)
            return 999
        c.global_totals = sorted(c.global_totals, key=sort_func)
   
        keys = {
            'Browser versions': 'browser_versions',
            'Browsers': 'browsers',
            'Operating Systems versions': 'os_versions',
            'Operating Systems': 'os',
            'Social sources': 'social_networks',
            'Languages': 'languages',
            'Country': 'country'
        }

        def shorten_name(name, length=60):
            return (name[:length] + '..') if len(name) > 60 else name

        def fill_out_url(url):
            import urlparse
            return urlparse.urljoin(g.site_url, url)

        c.social_referrer_totals, c.social_referrers = [], []
        q = model.Session.query(GA_ReferralStat)
        q = q.filter(GA_ReferralStat.period_name==c.month) if c.month else q
        q = q.order_by('ga_referrer.count::int desc')
        for entry in q.all():
            c.social_referrers.append((shorten_name(entry.url), fill_out_url(entry.url),
                                       entry.source, entry.count))

        q = model.Session.query(GA_ReferralStat.url,
                                func.sum(GA_ReferralStat.count).label('count'))
        q = q.filter(GA_ReferralStat.period_name==c.month) if c.month else q
        q = q.order_by('count desc').group_by(GA_ReferralStat.url)
        for entry in q.all():
            c.social_referrer_totals.append((shorten_name(entry[0]), fill_out_url(entry[0]), '',
                                             entry[1]))

        for k, v in keys.iteritems():
            q = model.Session.query(GA_Stat).\
                filter(GA_Stat.stat_name==k).\
                order_by(GA_Stat.period_name)
            # Run the query on all months to gather graph data
            graph = {}
            for stat in q:
                graph[stat.key] = graph.get(stat.key, {
                    'name': stat.key,
                    'data': []
                })
                graph[stat.key]['data'].append({
                    'x': _get_unix_epoch(stat.period_name),
                    'y': float(stat.value)
                })
            setattr(c, v + '_graph', json.dumps(_to_rickshaw(graph.values(), percentageMode=True)))

            # Buffer the tabular data
            if c.month:
                entries = []
                q = q.filter(GA_Stat.period_name==c.month).\
                    order_by('ga_stat.value::int desc')

            d = collections.defaultdict(int)
            for e in q.all():
                d[e.key] += int(e.value)
            entries = []
            for key, val in d.iteritems():
                entries.append((key, val,))
            entries = sorted(entries, key=operator.itemgetter(1), reverse=True)

            # Get the total for each set of values and then set the value as
            # a percentage of the total
            if k == 'Social sources':
                total = sum([x for n, x, graph in c.global_totals if n == 'Total visits'])
            else:
                total = sum([num for _, num in entries])
            setattr(c, v, [(k, _percent(v, total)) for k, v in entries])

        return render('ga_report/site/index.html')
   
   
class GaDatasetReport(BaseController):
    """
    Displays the pageview and visit count for datasets
    with options to filter by publisher and time period.
    """
    def publisher_csv(self, month):
        '''
        Returns a CSV of each publisher with the total number of dataset
        views & visits.
        '''
        c.month = month if not month == 'all' else ''
        response.headers['Content-Type'] = "text/csv; charset=utf-8"
        response.headers['Content-Disposition'] = str('attachment; filename=publishers_%s.csv' % (month,))

        writer = csv.writer(response)
        writer.writerow(["Publisher Title", "Publisher Name", "Views", "Visits", "Period Name"])

        top_publishers, top_publishers_graph = _get_top_publishers(None)

        for publisher, view, visit in top_publishers:
            writer.writerow([publisher.title.encode('utf-8'),
                             publisher.name.encode('utf-8'),
                             view,
                             visit,
                             month])

    def dataset_csv(self, id='all', month='all'):
        '''
        Returns a CSV with the number of views & visits for each dataset.

        :param id: A Publisher ID or None if you want for all
        :param month: The time period, or 'all'
        '''
        c.month = month if not month == 'all' else ''
        # Explicit defaults so the 'all publishers' case is well defined
        c.publisher = None
        c.publisher_name = 'all'
        if id != 'all':
            c.publisher = model.Group.get(id)
            if not c.publisher:
                abort(404, 'A publisher with that name could not be found')
            c.publisher_name = c.publisher.name

        packages = self._get_packages(c.publisher)
        response.headers['Content-Type'] = "text/csv; charset=utf-8"
        response.headers['Content-Disposition'] = \
            str('attachment; filename=datasets_%s_%s.csv' % (c.publisher_name, month,))

        writer = csv.writer(response)
        writer.writerow(["Dataset Title", "Dataset Name", "Views", "Visits", "Resource downloads", "Period Name"])

        for package, view, visit, downloads in packages:
            writer.writerow([package.title.encode('utf-8'),
                             package.name.encode('utf-8'),
                             view,
                             visit,
                             downloads,
                             month])
   
    def publishers(self):
        '''A list of publishers and the number of views/visits for each'''

        # Get the month details by fetching distinct values and determining the
        # month names from the values.
        c.months, c.day = _month_details(GA_Url)

        # Work out which month to show, based on query params of the first item
        c.month = request.params.get('month', '')
        c.month_desc = 'all months'
        if c.month:
            c.month_desc = ''.join([m[1] for m in c.months if m[0]==c.month])

        c.top_publishers, graph_data = _get_top_publishers()
        c.top_publishers_graph = json.dumps(_to_rickshaw(graph_data.values()))

        return render('ga_report/publisher/index.html')
   
    def _get_packages(self, publisher=None, count=-1):
        '''Returns the datasets in order of views'''
        have_download_data = True
        month = c.month or 'All'
        if month != 'All':
            have_download_data = month >= DOWNLOADS_AVAILABLE_FROM

        q = model.Session.query(GA_Url, model.Package)\
            .filter(model.Package.name==GA_Url.package_id)\
            .filter(GA_Url.url.like('/dataset/%'))
        if publisher:
            q = q.filter(GA_Url.department_id==publisher.name)
        q = q.filter(GA_Url.period_name==month)
        q = q.order_by('ga_url.pageviews::int desc')
        top_packages = []

        if count == -1:
            entries = q.all()
        else:
            entries = q.limit(count)

        for entry, package in entries:
            if package:
                # Downloads ....
                if have_download_data:
                    dls = model.Session.query(GA_Stat).\
                        filter(GA_Stat.stat_name=='Downloads').\
                        filter(GA_Stat.key==package.name)
                    if month != 'All':  # Fetch everything unless the month is specific
                        dls = dls.filter(GA_Stat.period_name==month)
                    downloads = 0
                    for x in dls:
                        downloads += int(x.value)
                else:
                    downloads = 'No data'
                top_packages.append((package, entry.pageviews, entry.visits, downloads))
            else:
                log.warning('Could not find the package associated with this URL')

        return top_packages
   
    def read(self):
        '''
        Lists the most popular datasets across all publishers
        '''
        return self.read_publisher(None)

    def read_publisher(self, id):
        '''
        Lists the most popular datasets for a publisher (or across all publishers)
        '''
        count = 20

        c.publishers = _get_publishers()

        # Explicit defaults so the 'all publishers' case is well defined
        c.publisher = None
        c.publisher_name = ''
        id = request.params.get('publisher', id)
        if id and id != 'all':
            c.publisher = model.Group.get(id)
            if not c.publisher:
                abort(404, 'A publisher with that name could not be found')
            c.publisher_name = c.publisher.name
        c.top_packages = []  # package, dataset_views in c.top_packages

        # Get the month details by fetching distinct values and determining the
        # month names from the values.
        c.months, c.day = _month_details(GA_Url)

        # Work out which month to show, based on query params of the first item
        c.month = request.params.get('month', '')
        if not c.month:
            c.month_desc = 'all months'
        else:
            c.month_desc = ''.join([m[1] for m in c.months if m[0]==c.month])

        month = c.month or 'All'
        c.publisher_page_views = 0
        q = model.Session.query(GA_Url).\
            filter(GA_Url.url=='/publisher/%s' % c.publisher_name)
        entry = q.filter(GA_Url.period_name==month).first()
        c.publisher_page_views = entry.pageviews if entry else 0

        c.top_packages = self._get_packages(c.publisher, 20)

        # Graph query
        top_package_names = [x[0].name for x in c.top_packages]
        graph_query = model.Session.query(GA_Url, model.Package)\
            .filter(model.Package.name==GA_Url.package_id)\
            .filter(GA_Url.url.like('/dataset/%'))\
            .filter(GA_Url.package_id.in_(top_package_names))
        graph_data = {}
        for entry, package in graph_query:
            if not package: continue
            if entry.period_name == 'All': continue
            graph_data[package.id] = graph_data.get(package.id, {
                'name': package.title,
                'data': []
            })
            graph_data[package.id]['data'].append({
                'x': _get_unix_epoch(entry.period_name),
                'y': int(entry.pageviews),
            })

        c.graph_data = json.dumps(_to_rickshaw(graph_data.values()))

        return render('ga_report/publisher/read.html')
   
def _to_rickshaw(data, percentageMode=False):
    if data == []:
        return data
    # Create a consistent x-axis
    num_points = [len(package['data']) for package in data]
    ideal_index = num_points.index(max(num_points))
    x_axis = [point['x'] for point in data[ideal_index]['data']]
    for package in data:
        xs = [point['x'] for point in package['data']]
        assert set(xs).issubset(set(x_axis)), (xs, x_axis)
        # Zero pad any missing values
        for x in set(x_axis).difference(set(xs)):
            package['data'].append({'x': x, 'y': 0})
        assert len(package['data']) == len(x_axis), (len(package['data']), len(x_axis), package['data'], x_axis, set(x_axis).difference(set(xs)))
    if percentageMode:
        # Transform data into percentage stacks.
        # Sum the values at each x to get the per-month totals.
        totals = {}
        for package in data:
            for point in package['data']:
                totals[point['x']] = totals.get(point['x'], 0) + point['y']
        # Roll insignificant series into a catch-all
        THRESHOLD = 0.01
        significant_series = []
        for package in data:
            for point in package['data']:
                fraction = float(point['y']) / totals[point['x']]
                if fraction > THRESHOLD and not (package in significant_series):
                    significant_series.append(package)
        temp = {}
        for package in data:
            if package in significant_series: continue
            for point in package['data']:
                temp[point['x']] = temp.get(point['x'], 0) + point['y']
        catch_all = {'name': 'Other', 'data': [{'x': x, 'y': y} for x, y in temp.items()]}
        # Roll insignificant series into one
        data = significant_series
        data.append(catch_all)
        # Turn each point into a percentage
        for package in data:
            for point in package['data']:
                point['y'] = (point['y'] * 100) / totals[point['x']]
    # Sort the points
    for package in data:
        package['data'] = sorted(package['data'], key=lambda x: x['x'])
        # Strip the latest month's incomplete analytics
        package['data'] = package['data'][:-1]
    return data
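
# An illustrative sketch of percentageMode: if two series have values 150 and
# 50 at the same x, the total for that x is 200, so each point is rescaled by
# point['y'] = (point['y'] * 100) / totals[point['x']], giving 75.0 and 25.0.
# The stacked series then sum to 100 at each x and render as percentages.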
   
   
def _get_top_publishers(limit=20):
    '''
    Returns a list of the top 20 publishers by dataset visits.
    (The number to show can be varied with 'limit')
    '''
    month = c.month or 'All'
    connection = model.Session.connection()
    q = """
        select department_id, sum(pageviews::int) views, sum(visits::int) visits
        from ga_url
        where department_id <> ''
          and package_id <> ''
          and url like '/dataset/%%'
          and period_name=%s
        group by department_id order by views desc
        """
    if limit:
        q = q + " limit %s;" % (limit)

    top_publishers = []
    res = connection.execute(q, month)
    department_ids = []
    for row in res:
        g = model.Group.get(row[0])
        if g:
            department_ids.append(row[0])
            top_publishers.append((g, row[1], row[2]))

    graph = {}
    if limit is not None:
        # Query for a history graph of these publishers
        q = model.Session.query(
            GA_Url.department_id,
            GA_Url.period_name,
            func.sum(cast(GA_Url.pageviews, sqlalchemy.types.INT)))\
            .filter(GA_Url.department_id.in_(department_ids))\
            .filter(GA_Url.period_name != 'All')\
            .filter(GA_Url.url.like('/dataset/%'))\
            .filter(GA_Url.package_id != '')\
            .group_by(GA_Url.department_id, GA_Url.period_name)
        for dept_id, period_name, views in q:
            graph[dept_id] = graph.get(dept_id, {
                'name': model.Group.get(dept_id).title,
                'data': []
            })
            graph[dept_id]['data'].append({
                'x': _get_unix_epoch(period_name),
                'y': views
            })
    return top_publishers, graph
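
# The graph returned above maps each department_id to a Rickshaw-style series,
# e.g. (illustrative): {'name': <publisher title>,
#                       'data': [{'x': <unix epoch of month>, 'y': <views>}, ...]}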
   
   
def _get_publishers():
    '''
    Returns a list of all publishers. Each item is a tuple:
      (name, title)
    '''
    publishers = []
    for pub in model.Session.query(model.Group).\
            filter(model.Group.type == 'publisher').\
            filter(model.Group.state == 'active').\
            order_by(model.Group.name):
        publishers.append((pub.name, pub.title))
    return publishers


def _percent(num, total):
    p = 100 * float(num) / float(total)
    return "%.2f%%" % round(p, 2)
   
import os
import logging
import datetime
import collections
from pylons import config
from ga_model import _normalize_url
import ga_model

#from ga_client import GA

log = logging.getLogger('ckanext.ga-report')

FORMAT_MONTH = '%Y-%m'
MIN_VIEWS = 50
MIN_VISITS = 20
MIN_DOWNLOADS = 10


class DownloadAnalytics(object):
    '''Downloads and stores analytics info'''

    def __init__(self, service=None, profile_id=None, delete_first=False,
                 skip_url_stats=False):
        self.period = config['ga-report.period']
        self.service = service
        self.profile_id = profile_id
        self.delete_first = delete_first
        self.skip_url_stats = skip_url_stats
   
    def specific_month(self, date):
        import calendar

        first_of_this_month = datetime.datetime(date.year, date.month, 1)
        _, last_day_of_month = calendar.monthrange(int(date.year), int(date.month))
        last_of_this_month = datetime.datetime(date.year, date.month, last_day_of_month)
        periods = ((date.strftime(FORMAT_MONTH),
                    last_day_of_month,
                    first_of_this_month, last_of_this_month),)
        self.download_and_store(periods)


    def latest(self):
        if self.period == 'monthly':
            # from first of this month to today
            now = datetime.datetime.now()
            first_of_this_month = datetime.datetime(now.year, now.month, 1)
            periods = ((now.strftime(FORMAT_MONTH),
                        now.day,
                        first_of_this_month, now),)
        else:
            raise NotImplementedError
        self.download_and_store(periods)
   
   
    def for_date(self, for_date):
        assert isinstance(for_date, datetime.datetime)
        periods = []  # (period_name, period_complete_day, start_date, end_date)
        if self.period == 'monthly':
            year = for_date.year
            month = for_date.month
            now = datetime.datetime.now()
            first_of_this_month = datetime.datetime(now.year, now.month, 1)
            while True:
                first_of_the_month = datetime.datetime(year, month, 1)
                if first_of_the_month == first_of_this_month:
                    periods.append((now.strftime(FORMAT_MONTH),
                                    now.day,
                                    first_of_this_month, now))
                    break
                elif first_of_the_month < first_of_this_month:
                    in_the_next_month = first_of_the_month + datetime.timedelta(40)
                    last_of_the_month = datetime.datetime(in_the_next_month.year,
                                                          in_the_next_month.month, 1)\
                                        - datetime.timedelta(1)
                    periods.append((first_of_the_month.strftime(FORMAT_MONTH), 0,
                                    first_of_the_month, last_of_the_month))
                else:
                    # first_of_the_month has got to the future somehow
                    break
                month += 1
                if month > 12:
                    year += 1
                    month = 1
        else:
            raise NotImplementedError
        self.download_and_store(periods)
   
    @staticmethod
    def get_full_period_name(period_name, period_complete_day):
        if period_complete_day:
            return period_name + ' (up to %ith)' % period_complete_day
        else:
            return period_name
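
    # For example: get_full_period_name('2012-10', 14) -> '2012-10 (up to 14th)'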
   
   
    def download_and_store(self, periods):
        for period_name, period_complete_day, start_date, end_date in periods:
            log.info('Period "%s" (%s - %s)',
                     self.get_full_period_name(period_name, period_complete_day),
                     start_date.strftime('%Y-%m-%d'),
                     end_date.strftime('%Y-%m-%d'))

            if self.delete_first:
                log.info('Deleting existing Analytics for this period "%s"',
                         period_name)
                ga_model.delete(period_name)

            if not self.skip_url_stats:
                # Clean out old url data before storing the new
                ga_model.pre_update_url_stats(period_name)

                accountName = config.get('googleanalytics.account')

                log.info('Downloading analytics for dataset views')
                data = self.download(start_date, end_date, '~/%s/dataset/[a-z0-9-_]+' % accountName)

                log.info('Storing dataset views (%i rows)', len(data.get('url')))
                self.store(period_name, period_complete_day, data, )

                log.info('Downloading analytics for publisher views')
                data = self.download(start_date, end_date, '~/%s/publisher/[a-z0-9-_]+' % accountName)

                log.info('Storing publisher views (%i rows)', len(data.get('url')))
                self.store(period_name, period_complete_day, data,)

                # Make sure the All records are correct.
                ga_model.post_update_url_stats()

                log.info('Aggregating datasets by publisher')
                ga_model.update_publisher_stats(period_name)  # about 30 seconds.


            log.info('Downloading and storing analytics for site-wide stats')
            self.sitewide_stats(period_name, period_complete_day)

            log.info('Downloading and storing analytics for social networks')
            self.update_social_info(period_name, start_date, end_date)
   
   
    def update_social_info(self, period_name, start_date, end_date):
        start_date = start_date.strftime('%Y-%m-%d')
        end_date = end_date.strftime('%Y-%m-%d')
        query = 'ga:hasSocialSourceReferral=~Yes$'
        metrics = 'ga:entrances'
        sort = '-ga:entrances'

        # Supported query params at
        # https://developers.google.com/analytics/devguides/reporting/core/v3/reference
        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            filters=query,
            start_date=start_date,
            metrics=metrics,
            sort=sort,
            dimensions="ga:landingPagePath,ga:socialNetwork",
            max_results=10000,
            end_date=end_date).execute()
        data = collections.defaultdict(list)
        rows = results.get('rows', [])
        for row in rows:
            url = _normalize_url('http:/' + row[0])
            data[url].append((row[1], int(row[2]),))
        ga_model.update_social(period_name, data)
   
   
    def download(self, start_date, end_date, path=None):
        '''Get data from GA for a given time period'''
        start_date = start_date.strftime('%Y-%m-%d')
        end_date = end_date.strftime('%Y-%m-%d')
        query = 'ga:pagePath=%s$' % path
        metrics = 'ga:pageviews, ga:visits'
        sort = '-ga:pageviews'

        # Supported query params at
        # https://developers.google.com/analytics/devguides/reporting/core/v3/reference
        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            filters=query,
            start_date=start_date,
            metrics=metrics,
            sort=sort,
            dimensions="ga:pagePath",
            max_results=10000,
            end_date=end_date).execute()

        packages = []
        log.info("There are %d results" % results['totalResults'])
        for entry in results.get('rows'):
            (loc, pageviews, visits) = entry
            url = _normalize_url('http:/' + loc)  # strips off domain e.g. www.data.gov.uk or data.gov.uk

            if not url.startswith('/dataset/') and not url.startswith('/publisher/'):
                # filter out strays like:
                # /data/user/login?came_from=http://data.gov.uk/dataset/os-code-point-open
                # /403.html?page=/about&from=http://data.gov.uk/publisher/planning-inspectorate
                continue
            packages.append((url, pageviews, visits,))  # Temporary hack
        return dict(url=packages)
   
    def store(self, period_name, period_complete_day, data):
        if 'url' in data:
            ga_model.update_url_stats(period_name, period_complete_day, data['url'])

    def sitewide_stats(self, period_name, period_complete_day):
        import calendar
        year, month = period_name.split('-')
        _, last_day_of_month = calendar.monthrange(int(year), int(month))

        start_date = '%s-01' % period_name
        end_date = '%s-%s' % (period_name, last_day_of_month)

        funcs = ['_totals_stats', '_social_stats', '_os_stats',
                 '_locale_stats', '_browser_stats', '_mobile_stats', '_download_stats']
        for f in funcs:
            log.info('Downloading analytics for %s' % f.split('_')[1])
            getattr(self, f)(start_date, end_date, period_name, period_complete_day)

    def _get_results(result_data, f):
        data = {}
        for result in result_data:
            key = f(result)
            data[key] = data.get(key, 0) + result[1]
        return data
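
    # For example (illustrative rows): _get_results([('en', 5), ('en', 3),
    # ('fr', 2)], lambda r: r[0]) gives {'en': 8, 'fr': 2}.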
   
    def _totals_stats(self, start_date, end_date, period_name, period_complete_day):
        """ Fetches distinct totals, total pageviews etc """
        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            start_date=start_date,
            metrics='ga:pageviews',
            sort='-ga:pageviews',
            max_results=10000,
            end_date=end_date).execute()
        result_data = results.get('rows')
        ga_model.update_sitewide_stats(period_name, "Totals", {'Total page views': result_data[0][0]},
                                       period_complete_day)

        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            start_date=start_date,
            metrics='ga:pageviewsPerVisit,ga:avgTimeOnSite,ga:percentNewVisits,ga:visits',
            max_results=10000,
            end_date=end_date).execute()
        result_data = results.get('rows')
        data = {
            'Pages per visit': result_data[0][0],
            'Average time on site': result_data[0][1],
            'New visits': result_data[0][2],
            'Total visits': result_data[0][3],
        }
        ga_model.update_sitewide_stats(period_name, "Totals", data, period_complete_day)

        # Bounces from / or another configurable page.
        path = '/%s%s' % (config.get('googleanalytics.account'),
                          config.get('ga-report.bounce_url', '/'))
        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            filters='ga:pagePath==%s' % (path,),
            start_date=start_date,
            metrics='ga:visitBounceRate',
            dimensions='ga:pagePath',
            max_results=10000,
            end_date=end_date).execute()
        result_data = results.get('rows')
        if not result_data or len(result_data) != 1:
            log.error('Could not pinpoint the bounces for path: %s. Got results: %r',
                      path, result_data)
            return
        results = result_data[0]
        bounces = float(results[1])
        # visitBounceRate is already a %
        log.info('Google reports visitBounceRate as %s', bounces)
        ga_model.update_sitewide_stats(period_name, "Totals", {'Bounce rate (home page)': float(bounces)},
                                       period_complete_day)
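
    # For example, with googleanalytics.account = data.gov.uk and
    # ga-report.bounce_url = / (the values suggested in the README), the
    # bounce-rate filter above matches the page path '/data.gov.uk/'.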
   
   
  def _locale_stats(self, start_date, end_date, period_name, period_complete_day):
""" Fetches stats about language and country """ """ Fetches stats about language and country """
results = self.service.data().ga().get( results = self.service.data().ga().get(
ids='ga:' + self.profile_id, ids='ga:' + self.profile_id,
start_date=start_date, start_date=start_date,
metrics='ga:uniquePageviews', metrics='ga:pageviews',
sort='-ga:uniquePageviews', sort='-ga:pageviews',
dimensions="ga:language,ga:country", dimensions="ga:language,ga:country",
max_results=10000, max_results=10000,
end_date=end_date).execute() end_date=end_date).execute()
result_data = results.get('rows') result_data = results.get('rows')
data = {} data = {}
for result in result_data: for result in result_data:
data[result[0]] = data.get(result[0], 0) + int(result[2]) data[result[0]] = data.get(result[0], 0) + int(result[2])
ga_model.update_sitewide_stats(period_name, "Languages", data) self._filter_out_long_tail(data, MIN_VIEWS)
  ga_model.update_sitewide_stats(period_name, "Languages", data, period_complete_day)
   
data = {} data = {}
for result in result_data: for result in result_data:
data[result[1]] = data.get(result[1], 0) + int(result[2]) data[result[1]] = data.get(result[1], 0) + int(result[2])
ga_model.update_sitewide_stats(period_name, "Country", data) self._filter_out_long_tail(data, MIN_VIEWS)
  ga_model.update_sitewide_stats(period_name, "Country", data, period_complete_day)
   
def _social_stats(self, start_date, end_date, period_name):  
  def _download_stats(self, start_date, end_date, period_name, period_complete_day):
  """ Fetches stats about language and country """
  import ckan.model as model
   
  data = {}
   
  results = self.service.data().ga().get(
  ids='ga:' + self.profile_id,
  start_date=start_date,
  filters='ga:eventAction==download',
  metrics='ga:totalEvents',
  sort='-ga:totalEvents',
  dimensions="ga:eventLabel",
  max_results=10000,
  end_date=end_date).execute()
  result_data = results.get('rows')
  if not result_data:
  # We may not have data for this time period, so we need to bail
  # early.
  log.info("There is no download data for this time period")
  return
   
  def process_result_data(result_data, cached=False):
  for result in result_data:
  url = result[0].strip()
   
  # Get package id associated with the resource that has this URL.
  q = model.Session.query(model.Resource)
  if cached:
  r = q.filter(model.Resource.cache_url.like("%s%%" % url)).first()
  else:
  r = q.filter(model.Resource.url.like("%s%%" % url)).first()
   
  package_name = r.resource_group.package.name if r else ""
  if package_name:
  data[package_name] = data.get(package_name, 0) + int(result[1])
  else:
  log.warning(u"Could not find resource for URL: {url}".format(url=url))
  continue
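
# process_result_data consumes [eventLabel, totalEvents] rows, e.g.
# (illustrative) [u'http://example.com/data.csv', u'7']: the label is the
# resource URL recorded by the download event and the count is credited to
# the dataset that owns the matching resource.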
   
  process_result_data(results.get('rows'))
   
  results = self.service.data().ga().get(
  ids='ga:' + self.profile_id,
  start_date=start_date,
  filters='ga:eventAction==download-cache',
  metrics='ga:totalEvents',
  sort='-ga:totalEvents',
  dimensions="ga:eventLabel",
  max_results=10000,
  end_date=end_date).execute()
# This second query counts cached copies, so match resources on cache_url.
process_result_data(results.get('rows'), cached=True)
   
  self._filter_out_long_tail(data, MIN_DOWNLOADS)
  ga_model.update_sitewide_stats(period_name, "Downloads", data, period_complete_day)
   
  def _social_stats(self, start_date, end_date, period_name, period_complete_day):
""" Finds out which social sites people are referred from """ """ Finds out which social sites people are referred from """
results = self.service.data().ga().get( results = self.service.data().ga().get(
ids='ga:' + self.profile_id, ids='ga:' + self.profile_id,
start_date=start_date, start_date=start_date,
metrics='ga:uniquePageviews', metrics='ga:pageviews',
sort='-ga:uniquePageviews', sort='-ga:pageviews',
dimensions="ga:socialNetwork,ga:referralPath", dimensions="ga:socialNetwork,ga:referralPath",
max_results=10000, max_results=10000,
end_date=end_date).execute() end_date=end_date).execute()
result_data = results.get('rows') result_data = results.get('rows')
twitter_links = []  
data = {} data = {}
for result in result_data: for result in result_data:
if not result[0] == '(not set)': if not result[0] == '(not set)':
data[result[0]] = data.get(result[0], 0) + int(result[2]) data[result[0]] = data.get(result[0], 0) + int(result[2])
if result[0] == 'Twitter': self._filter_out_long_tail(data, 3)
twitter_links.append(result[1]) ga_model.update_sitewide_stats(period_name, "Social sources", data, period_complete_day)
ga_model.update_sitewide_stats(period_name, "Social sources", data)  
   
  def _os_stats(self, start_date, end_date, period_name, period_complete_day):
def _os_stats(self, start_date, end_date, period_name):  
""" Operating system stats """ """ Operating system stats """
results = self.service.data().ga().get( results = self.service.data().ga().get(
ids='ga:' + self.profile_id, ids='ga:' + self.profile_id,
start_date=start_date, start_date=start_date,
metrics='ga:uniquePageviews', metrics='ga:pageviews',
sort='-ga:uniquePageviews', sort='-ga:pageviews',
dimensions="ga:operatingSystem,ga:operatingSystemVersion", dimensions="ga:operatingSystem,ga:operatingSystemVersion",
max_results=10000, max_results=10000,
end_date=end_date).execute() end_date=end_date).execute()
result_data = results.get('rows') result_data = results.get('rows')
data = {} data = {}
for result in result_data: for result in result_data:
data[result[0]] = data.get(result[0], 0) + int(result[2]) data[result[0]] = data.get(result[0], 0) + int(result[2])
ga_model.update_sitewide_stats(period_name, "Operating Systems", data) self._filter_out_long_tail(data, MIN_VIEWS)
  ga_model.update_sitewide_stats(period_name, "Operating Systems", data, period_complete_day)
data = {}  
for result in result_data: data = {}
key = "%s (%s)" % (result[0],result[1]) for result in result_data:
data[key] = result[2] if int(result[2]) >= MIN_VIEWS:
ga_model.update_sitewide_stats(period_name, "Operating Systems versions", data) key = "%s %s" % (result[0],result[1])
  data[key] = result[2]
  ga_model.update_sitewide_stats(period_name, "Operating Systems versions", data, period_complete_day)
def _browser_stats(self, start_date, end_date, period_name):  
   
  def _browser_stats(self, start_date, end_date, period_name, period_complete_day):
""" Information about browsers and browser versions """ """ Information about browsers and browser versions """
results = self.service.data().ga().get( results = self.service.data().ga().get(
ids='ga:' + self.profile_id, ids='ga:' + self.profile_id,
start_date=start_date, start_date=start_date,
metrics='ga:uniquePageviews', metrics='ga:pageviews',
sort='-ga:uniquePageviews', sort='-ga:pageviews',
dimensions="ga:browser,ga:browserVersion", dimensions="ga:browser,ga:browserVersion",
max_results=10000, max_results=10000,
end_date=end_date).execute() end_date=end_date).execute()
result_data = results.get('rows') result_data = results.get('rows')
  # e.g. [u'Firefox', u'19.0', u'20']
   
data = {} data = {}
for result in result_data: for result in result_data:
data[result[0]] = data.get(result[0], 0) + int(result[2]) data[result[0]] = data.get(result[0], 0) + int(result[2])
ga_model.update_sitewide_stats(period_name, "Browsers", data) self._filter_out_long_tail(data, MIN_VIEWS)
  ga_model.update_sitewide_stats(period_name, "Browsers", data, period_complete_day)
data = {}  
for result in result_data: data = {}
key = "%s (%s)" % (result[0], result[1]) for result in result_data:
data[key] = result[2] key = "%s %s" % (result[0], self._filter_browser_version(result[0], result[1]))
ga_model.update_sitewide_stats(period_name, "Browser versions", data) data[key] = data.get(key, 0) + int(result[2])
  self._filter_out_long_tail(data, MIN_VIEWS)
  ga_model.update_sitewide_stats(period_name, "Browser versions", data, period_complete_day)
def _mobile_stats(self, start_date, end_date, period_name):  
  @classmethod
  def _filter_browser_version(cls, browser, version_str):
  '''
Simplifies a detailed browser version string,
e.g. grouping Firefox 3.5.1 and 3.5.2 together as just 3.
This makes the stats easier to read and helps protect privacy.
  '''
  ver = version_str
  parts = ver.split('.')
if len(parts) > 1:
    # Keep only the major version number.
    ver = parts[0]
  # Special case complex version nums
  if browser in ['Safari', 'Android Browser']:
  ver = parts[0]
  if len(ver) > 2:
  num_hidden_digits = len(ver) - 2
  ver = ver[0] + ver[1] + 'X' * num_hidden_digits
  return ver
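
# A quick sketch of the simplification (inputs illustrative):
#
#   _filter_browser_version('Firefox', '3.5.1')         -> '3'
#   _filter_browser_version('Android Browser', '4.0.3') -> '4'
#   _filter_browser_version('Safari', '534.30')         -> '53X'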
   
  def _mobile_stats(self, start_date, end_date, period_name, period_complete_day):
""" Info about mobile devices """ """ Info about mobile devices """
   
results = self.service.data().ga().get( results = self.service.data().ga().get(
ids='ga:' + self.profile_id, ids='ga:' + self.profile_id,
start_date=start_date, start_date=start_date,
metrics='ga:uniquePageviews', metrics='ga:pageviews',
sort='-ga:uniquePageviews', sort='-ga:pageviews',
dimensions="ga:mobileDeviceBranding, ga:mobileDeviceInfo", dimensions="ga:mobileDeviceBranding, ga:mobileDeviceInfo",
max_results=10000, max_results=10000,
end_date=end_date).execute() end_date=end_date).execute()
   
result_data = results.get('rows') result_data = results.get('rows')
data = {} data = {}
for result in result_data: for result in result_data:
data[result[0]] = data.get(result[0], 0) + int(result[2]) data[result[0]] = data.get(result[0], 0) + int(result[2])
ga_model.update_sitewide_stats(period_name, "Mobile brands", data) self._filter_out_long_tail(data, MIN_VIEWS)
  ga_model.update_sitewide_stats(period_name, "Mobile brands", data, period_complete_day)
   
data = {} data = {}
for result in result_data: for result in result_data:
data[result[1]] = data.get(result[1], 0) + int(result[2]) data[result[1]] = data.get(result[1], 0) + int(result[2])
ga_model.update_sitewide_stats(period_name, "Mobile devices", data) self._filter_out_long_tail(data, MIN_VIEWS)
  ga_model.update_sitewide_stats(period_name, "Mobile devices", data, period_complete_day)
   
  @classmethod
  def _filter_out_long_tail(cls, data, threshold=10):
  '''
Given data which is a frequency distribution, filter out
results which fall below a threshold count. This helps
protect privacy.
  '''
  for key, value in data.items():
  if value < threshold:
  del data[key]
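
# A minimal sketch of the trimming (values illustrative): the dict is
# mutated in place, e.g. {'Chrome': 120, 'Lynx': 2} with the default
# threshold of 10 becomes {'Chrome': 120}.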
   
import os import os
import httplib2 import httplib2
from apiclient.discovery import build from apiclient.discovery import build
from oauth2client.client import flow_from_clientsecrets from oauth2client.client import flow_from_clientsecrets
from oauth2client.file import Storage from oauth2client.file import Storage
from oauth2client.tools import run from oauth2client.tools import run
   
from pylons import config from pylons import config
   
   
def _prepare_credentials(token_filename, credentials_filename): def _prepare_credentials(token_filename, credentials_filename):
""" """
Either returns the user's oauth credentials or uses the credentials Either returns the user's oauth credentials or uses the credentials
file to generate a token (by forcing the user to login in the browser) file to generate a token (by forcing the user to login in the browser)
""" """
storage = Storage(token_filename) storage = Storage(token_filename)
credentials = storage.get() credentials = storage.get()
   
if credentials is None or credentials.invalid: if credentials is None or credentials.invalid:
flow = flow_from_clientsecrets(credentials_filename, flow = flow_from_clientsecrets(credentials_filename,
scope='https://www.googleapis.com/auth/analytics.readonly', scope='https://www.googleapis.com/auth/analytics.readonly',
message="Can't find the credentials file") message="Can't find the credentials file")
credentials = run(flow, storage) credentials = run(flow, storage)
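# On first run there is no stored token, so run() walks the user through
# the OAuth consent flow in a browser and saves the new token via storage;
# later runs reuse it and skip this branch.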
   
return credentials return credentials
   
   
def init_service(token_file, credentials_file): def init_service(token_file, credentials_file):
""" """
Given a file containing the user's oauth token (and another with Given a file containing the user's oauth token (and another with
credentials in case we need to generate the token) will return a credentials in case we need to generate the token) will return a
service object representing the analytics API. service object representing the analytics API.
""" """
http = httplib2.Http() http = httplib2.Http()
   
credentials = _prepare_credentials(token_file, credentials_file) credentials = _prepare_credentials(token_file, credentials_file)
http = credentials.authorize(http) # authorize the http object http = credentials.authorize(http) # authorize the http object
   
return build('analytics', 'v3', http=http) return build('analytics', 'v3', http=http)
   
   
def get_profile_id(service): def get_profile_id(service):
""" """
Get the profile ID for this user and the service specified by the Get the profile ID for this user and the service specified by the
'googleanalytics.id' configuration option. This function iterates 'googleanalytics.id' configuration option. This function iterates
over all of the accounts available to the user who invoked the over all of the accounts available to the user who invoked the
service to find one where the account name matches (in case the service to find one where the account name matches (in case the
user has several). user has several).
""" """
accounts = service.management().accounts().list().execute() accounts = service.management().accounts().list().execute()
   
if not accounts.get('items'): if not accounts.get('items'):
return None return None
   
accountName = config.get('googleanalytics.account') accountName = config.get('googleanalytics.account')
  if not accountName:
  raise Exception('googleanalytics.account needs to be configured')
webPropertyId = config.get('googleanalytics.id') webPropertyId = config.get('googleanalytics.id')
  if not webPropertyId:
  raise Exception('googleanalytics.id needs to be configured')
for acc in accounts.get('items'): for acc in accounts.get('items'):
if acc.get('name') == accountName: if acc.get('name') == accountName:
accountId = acc.get('id') accountId = acc.get('id')
   
webproperties = service.management().webproperties().list(accountId=accountId).execute() webproperties = service.management().webproperties().list(accountId=accountId).execute()
   
profiles = service.management().profiles().list( profiles = service.management().profiles().list(
accountId=accountId, webPropertyId=webPropertyId).execute() accountId=accountId, webPropertyId=webPropertyId).execute()
   
if profiles.get('items'): if profiles.get('items'):
return profiles.get('items')[0].get('id') return profiles.get('items')[0].get('id')
   
return None return None
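
# A minimal usage sketch (filenames illustrative):
#
#   service = init_service('token.dat', 'credentials.json')
#   profile_id = get_profile_id(service)
#   if profile_id is None:
#       raise Exception('No profile found for the configured GA account')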
   
import re import re
import uuid import uuid
   
from sqlalchemy import Table, Column, MetaData from sqlalchemy import Table, Column, MetaData, ForeignKey
from sqlalchemy import types from sqlalchemy import types
from sqlalchemy.sql import select from sqlalchemy.sql import select
from sqlalchemy.orm import mapper from sqlalchemy.orm import mapper, relation
from sqlalchemy import func from sqlalchemy import func
   
import ckan.model as model import ckan.model as model
from ckan.lib.base import * from ckan.lib.base import *
   
  log = __import__('logging').getLogger(__name__)
   
def make_uuid(): def make_uuid():
return unicode(uuid.uuid4()) return unicode(uuid.uuid4())
   
  metadata = MetaData()
   
class GA_Url(object): class GA_Url(object):