
file:a/README.rst -> file:b/README.rst
ckanext-ga-report
=================

**Status:** Development

**CKAN Version:** 1.7.1+


Overview
--------

For creating detailed reports of CKAN analytics, including totals per group.

Whereas ckanext-googleanalytics focuses on providing page view stats for a recent period and for all time (aimed at end users), ckanext-ga-report is more interested in building regular periodic reports (more for site managers to monitor).

Contents of this extension:

* Use the CLI tool to download Google Analytics data for each time period into this extension's database tables

* Users can view the data as web page reports
   
   
Installation
------------

1. Activate your CKAN python environment and install this extension's software::

       $ . pyenv/bin/activate
       $ pip install -e git+https://github.com/datagovuk/ckanext-ga-report.git#egg=ckanext-ga-report

2. Ensure your development.ini (or similar) contains the info about your Google Analytics account and configuration::

       googleanalytics.id = UA-1010101-1
       googleanalytics.account = Account name (e.g. data.gov.uk, see top level item at https://www.google.com/analytics)
       ga-report.period = monthly
       ga-report.bounce_url = /data

   The ga-report.bounce_url option specifies the path to use when calculating bounces. For DGU this is /data, but you may want to set this to /.

3. Set up this extension's database tables using a paster command. (Ensure your CKAN pyenv is still activated, run the command from ``src/ckanext-ga-report``, and alter the ``--config`` option to point to your site config file)::

       $ paster initdb --config=../ckan/development.ini

4. Enable the extension in your CKAN config file by adding it to ``ckan.plugins``::

       ckan.plugins = ga-report
   
Troubleshooting
---------------

* ``(ProgrammingError) relation "ga_url" does not exist``

  This means that the ``paster initdb`` step has not been run successfully. Refer to the installation instructions for this extension.
   
   
Authorization
-------------

Before you can access the data, you need to set up the OAuth details. You can do this by following the `instructions <https://developers.google.com/analytics/resources/tutorials/hello-analytics-api>`_, the outcome of which will be a file called credentials.json, which should look like credentials.json.template with the relevant fields completed. These steps are repeated below for convenience:

1. Visit the `Google APIs Console <https://code.google.com/apis/console>`_

2. Sign in and create a project, or use an existing project.

3. In the `Services pane <https://code.google.com/apis/console#:services>`_, activate Analytics API for your project. If prompted, read and accept the terms of service.

4. Go to the `API Access pane <https://code.google.com/apis/console/#:access>`_

5. Click Create an OAuth 2.0 client ID...

6. Fill out the Branding Information fields and click Next.

7. In Client ID Settings, set Application type to Installed application.

8. Click Create client ID.

9. The details you need below are Client ID, Client secret, and Redirect URIs.


Once you have set up your credentials.json file, you can generate an OAuth token file with the following command, which will store your token in a file called token.dat once you have finished giving permission in the browser::

    $ paster getauthtoken --config=../ckan/development.ini
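The Client ID details from step 9 map into credentials.json. The authoritative field list is credentials.json.template; for a typical Google "installed application" client the file looks something like the sketch below (placeholder values; the layout is assumed from Google's standard client-secrets format and may differ from the template):

```json
{
    "installed": {
        "client_id": "your-client-id.apps.googleusercontent.com",
        "client_secret": "your-client-secret",
        "redirect_uris": ["urn:ietf:wg:oauth:2.0:oob"],
        "auth_uri": "https://accounts.google.com/o/oauth2/auth",
        "token_uri": "https://accounts.google.com/o/oauth2/token"
    }
}
```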
   
   
Tutorial
--------

Download some GA data and store it in CKAN's database. (Ensure your CKAN pyenv is still activated, run the command from ``src/ckanext-ga-report``, and alter the ``--config`` option to point to your site config file.) Specify the name of your auth file (token.dat by default) from the previous step::

    $ paster loadanalytics token.dat latest --config=../ckan/development.ini

The value after the token file is how much data you want to retrieve. This can be:

* **all** - data for all time (since 2010)

* **latest** - (default) just the 'latest' data

* **YYYY-MM-DD** - data for all time periods going back to (and including) this date
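For the **YYYY-MM-DD** form, loading covers one reporting period per month, from the month of the given date up to and including the current month. A rough sketch of that expansion (the helper below is illustrative, not part of the extension):

```python
from datetime import date

def monthly_periods(since, today):
    """Expand a 'YYYY-MM-DD' start date into the monthly period names
    ('YYYY-MM') covered, up to and including the current month."""
    start = date(*[int(part) for part in since.split('-')])
    periods = []
    year, month = start.year, start.month
    while (year, month) <= (today.year, today.month):
        periods.append('%04d-%02d' % (year, month))
        # Step forward to the next calendar month.
        month += 1
        if month > 12:
            year, month = year + 1, 1
    return periods

print(monthly_periods('2012-11-15', date(2013, 2, 1)))
# -> ['2012-11', '2012-12', '2013-01', '2013-02']
```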
   
   
   
Software Licence
================

This software is developed by the Cabinet Office. It is Crown Copyright and opened up under the Open Government Licence (OGL) (which is compatible with the Creative Commons Attribution License).

OGL terms: http://www.nationalarchives.gov.uk/doc/open-government-licence/
   
import re
import csv
import sys
import logging
import operator
import collections
from ckan.lib.base import (BaseController, c, g, render, request, response, abort)

import sqlalchemy
from sqlalchemy import func, cast, Integer
import ckan.model as model
from ga_model import GA_Url, GA_Stat, GA_ReferralStat

log = logging.getLogger('ckanext.ga-report')


def _get_month_name(strdate):
    import calendar
    from time import strptime
    d = strptime(strdate, '%Y-%m')
    return '%s %s' % (calendar.month_name[d.tm_mon], d.tm_year)


def _month_details(cls):
    months = []
    vals = model.Session.query(cls.period_name).distinct().all()
    for m in vals:
        months.append((m[0], _get_month_name(m[0])))
    return sorted(months, key=operator.itemgetter(0), reverse=True)
   
   
class GaReport(BaseController):

    def csv(self, month):
        import csv

        q = model.Session.query(GA_Stat)
        if month != 'all':
            q = q.filter(GA_Stat.period_name==month)
        entries = q.order_by('GA_Stat.period_name, GA_Stat.stat_name, GA_Stat.key').all()

        response.headers['Content-Type'] = "text/csv; charset=utf-8"
        response.headers['Content-Disposition'] = str('attachment; filename=stats_%s.csv' % (month,))

        writer = csv.writer(response)
        writer.writerow(["Period", "Statistic", "Key", "Value"])

        for entry in entries:
            writer.writerow([entry.period_name.encode('utf-8'),
                             entry.stat_name.encode('utf-8'),
                             entry.key.encode('utf-8'),
                             entry.value.encode('utf-8')])
   
    def index(self):

        # Get the month details by fetching distinct values and determining the
        # month names from the values.
        c.months = _month_details(GA_Stat)

        # Work out which month to show, based on query params of the first item
        c.month_desc = 'all months'
        c.month = request.params.get('month', '')
        if c.month:
            c.month_desc = ''.join([m[1] for m in c.months if m[0]==c.month])

        q = model.Session.query(GA_Stat).\
            filter(GA_Stat.stat_name=='Totals')
        if c.month:
            q = q.filter(GA_Stat.period_name==c.month)
        entries = q.order_by('ga_stat.key').all()

        def clean_key(key, val):
            if key in ['Average time on site', 'Pages per visit', 'New visits', 'Bounces']:
                val = "%.2f" % round(float(val), 2)
                if key == 'Average time on site':
                    mins, secs = divmod(float(val), 60)
                    hours, mins = divmod(mins, 60)
                    val = '%02d:%02d:%02d (%s seconds) ' % (hours, mins, secs, val)
                if key in ['New visits', 'Bounces']:
                    val = "%s%%" % val
            if key in ['Total page views', 'Total visits']:
                val = int(val)

            return key, val

        c.global_totals = []
        if c.month:
            for e in entries:
                key, val = clean_key(e.key, e.value)
                c.global_totals.append((key, val))
        else:
            d = collections.defaultdict(list)
            for e in entries:
                d[e.key].append(float(e.value))
            for k, v in d.iteritems():
                if k in ['Total page views', 'Total visits']:
                    v = sum(v)
                else:
                    v = float(sum(v))/len(v)
                key, val = clean_key(k, v)

                c.global_totals.append((key, val))
        c.global_totals = sorted(c.global_totals, key=operator.itemgetter(0))

        keys = {
            'Browser versions': 'browser_versions',
            'Browsers': 'browsers',
            'Operating Systems versions': 'os_versions',
            'Operating Systems': 'os',
            'Social sources': 'social_networks',
            'Languages': 'languages',
            'Country': 'country'
        }

        def shorten_name(name, length=60):
            return (name[:length] + '..') if len(name) > length else name

        def fill_out_url(url):
            import urlparse
            return urlparse.urljoin(g.site_url, url)

        c.social_referrer_totals, c.social_referrers = [], []
        q = model.Session.query(GA_ReferralStat)
        q = q.filter(GA_ReferralStat.period_name==c.month) if c.month else q
        q = q.order_by('ga_referrer.count::int desc')
        for entry in q.all():
            c.social_referrers.append((shorten_name(entry.url), fill_out_url(entry.url),
                                       entry.source, entry.count))

        q = model.Session.query(GA_ReferralStat.url,
                                func.sum(GA_ReferralStat.count).label('count'))
        q = q.filter(GA_ReferralStat.period_name==c.month) if c.month else q
        q = q.order_by('count desc').group_by(GA_ReferralStat.url)
        for entry in q.all():
            c.social_referrer_totals.append((shorten_name(entry[0]), fill_out_url(entry[0]), '',
                                             entry[1]))

        for k, v in keys.iteritems():

            q = model.Session.query(GA_Stat).\
                filter(GA_Stat.stat_name==k)
            if c.month:
                entries = []
                q = q.filter(GA_Stat.period_name==c.month).\
                    order_by('ga_stat.value::int desc')

            d = collections.defaultdict(int)
            for e in q.all():
                d[e.key] += int(e.value)
            entries = []
            for key, val in d.iteritems():
                entries.append((key, val,))
            entries = sorted(entries, key=operator.itemgetter(1), reverse=True)

            # Get the total for each set of values and then set the value as
            # a percentage of the total
            if k == 'Social sources':
                total = sum([x for n, x in c.global_totals if n == 'Total visits'])
            else:
                total = sum([num for _, num in entries])
            setattr(c, v, [(k, _percent(v, total)) for k, v in entries])

        return render('ga_report/site/index.html')
   
   
class GaDatasetReport(BaseController):
    """
    Displays the pageview and visit count for datasets
    with options to filter by publisher and time period.
    """
    def publisher_csv(self, month):
        '''
        Returns a CSV of each publisher with the total number of dataset
        views & visits.
        '''
        c.month = month if not month == 'all' else ''
        response.headers['Content-Type'] = "text/csv; charset=utf-8"
        response.headers['Content-Disposition'] = str('attachment; filename=publishers_%s.csv' % (month,))

        writer = csv.writer(response)
        writer.writerow(["Publisher Title", "Publisher Name", "Views", "Visits", "Period Name"])

        for publisher, view, visit in _get_top_publishers(None):
            writer.writerow([publisher.title.encode('utf-8'),
                             publisher.name.encode('utf-8'),
                             view,
                             visit,
                             month])

    def dataset_csv(self, id='all', month='all'):
        '''
        Returns a CSV with the number of views & visits for each dataset.

        :param id: A Publisher ID or None if you want for all
        :param month: The time period, or 'all'
        '''
        c.month = month if not month == 'all' else ''
        if id != 'all':
            c.publisher = model.Group.get(id)
            if not c.publisher:
                abort(404, 'A publisher with that name could not be found')

        packages = self._get_packages(c.publisher)
        response.headers['Content-Type'] = "text/csv; charset=utf-8"
        response.headers['Content-Disposition'] = \
            str('attachment; filename=datasets_%s_%s.csv' % (id, month,))

        writer = csv.writer(response)
        writer.writerow(["Dataset Title", "Dataset Name", "Views", "Visits", "Period Name"])

        for package, view, visit in packages:
            writer.writerow([package.title.encode('utf-8'),
                             package.name.encode('utf-8'),
                             view,
                             visit,
                             month])
   
    def publishers(self):
        '''A list of publishers and the number of views/visits for each'''

        # Get the month details by fetching distinct values and determining the
        # month names from the values.
        c.months = _month_details(GA_Url)

        # Work out which month to show, based on query params of the first item
        c.month = request.params.get('month', '')
        c.month_desc = 'all months'
        if c.month:
            c.month_desc = ''.join([m[1] for m in c.months if m[0]==c.month])

        c.top_publishers = _get_top_publishers()

        return render('ga_report/publisher/index.html')
   
    def _get_packages(self, publisher=None, count=-1):
        '''Returns the datasets in order of visits'''
        if count == -1:
            count = sys.maxint

        q = model.Session.query(GA_Url)\
            .filter(GA_Url.url.like('/dataset/%'))
        if publisher:
            q = q.filter(GA_Url.department_id==publisher.name)
        if c.month:
            q = q.filter(GA_Url.period_name==c.month)
        q = q.order_by('ga_url.visitors::int desc')

        if c.month:
            top_packages = []
            for entry in q.limit(count):
                package_name = entry.url[len('/dataset/'):]
                p = model.Package.get(package_name)
                if p:
                    top_packages.append((p, entry.pageviews, entry.visitors))
                else:
                    log.warning('Could not find package "%s"', package_name)
        else:
            ds = {}
            for entry in q:
                if len(ds) >= count:
                    break
                package_name = entry.url[len('/dataset/'):]
                p = model.Package.get(package_name)
                if p:
                    if p not in ds:
                        ds[p] = {'views': 0, 'visits': 0}
                    ds[p]['views'] = ds[p]['views'] + int(entry.pageviews)
                    ds[p]['visits'] = ds[p]['visits'] + int(entry.visitors)
                else:
                    log.warning('Could not find package "%s"', package_name)

            results = []
            for k, v in ds.iteritems():
                results.append((k, v['views'], v['visits']))

            top_packages = sorted(results, key=operator.itemgetter(1), reverse=True)
        return top_packages
   
    def read(self):
        '''
        Lists the most popular datasets across all publishers
        '''
        return self.read_publisher(None)

    def read_publisher(self, id):
        '''
        Lists the most popular datasets for a publisher (or across all publishers)
        '''
        count = 20

        c.publishers = _get_publishers()

        id = request.params.get('publisher', id)
        if id and id != 'all':
            c.publisher = model.Group.get(id)
            if not c.publisher:
                abort(404, 'A publisher with that name could not be found')
            c.publisher_name = c.publisher.name
        c.top_packages = []  # package, dataset_views in c.top_packages

        # Get the month details by fetching distinct values and determining the
        # month names from the values.
        c.months = _month_details(GA_Url)

        # Work out which month to show, based on query params of the first item
        c.month = request.params.get('month', '')
        if not c.month:
            c.month_desc = 'all months'
        else:
            c.month_desc = ''.join([m[1] for m in c.months if m[0]==c.month])

        c.publisher_page_views = 0
        q = model.Session.query(GA_Url).\
            filter(GA_Url.url=='/publisher/%s' % c.publisher_name)
        if c.month:
            entry = q.filter(GA_Url.period_name==c.month).first()
            c.publisher_page_views = entry.pageviews if entry else 0
        else:
            for e in q.all():
                c.publisher_page_views = c.publisher_page_views + int(e.pageviews)

        c.top_packages = self._get_packages(c.publisher, count)

        return render('ga_report/publisher/read.html')
   
def _get_top_publishers(limit=20):
    '''
    Returns a list of the top 20 publishers by dataset visits.
    (The number to show can be varied with 'limit')
    '''
    connection = model.Session.connection()
    q = """
        select department_id, sum(pageviews::int) views, sum(visitors::int) visits
        from ga_url
        where department_id <> ''"""
    if c.month:
        q = q + """
                and period_name=%s
        """
    q = q + """
        group by department_id order by visits desc
        """
    if limit:
        q = q + " limit %s;" % (limit)

    # Add this back (before and period_name=%s) if you want to ignore publisher
    # homepage views
    # and not url like '/publisher/%%'

    top_publishers = []
    res = connection.execute(q, c.month)

    for row in res:
        g = model.Group.get(row[0])
        if g:
            top_publishers.append((g, row[1], row[2]))
    return top_publishers


def _get_publishers():
    '''
    Returns a list of all publishers. Each item is a tuple:
    (name, title)
    '''
    publishers = []
    for pub in model.Session.query(model.Group).\
            filter(model.Group.type=='publisher').\
            filter(model.Group.state=='active').\
            order_by(model.Group.name):
        publishers.append((pub.name, pub.title))
    return publishers


def _percent(num, total):
    p = 100 * float(num)/float(total)
    return "%.2f%%" % round(p, 2)
   
import os import os
import logging import logging
import datetime import datetime
import collections import collections
from pylons import config from pylons import config
   
import ga_model import ga_model
   
#from ga_client import GA #from ga_client import GA
   
log = logging.getLogger('ckanext.ga-report') log = logging.getLogger('ckanext.ga-report')
   
FORMAT_MONTH = '%Y-%m' FORMAT_MONTH = '%Y-%m'
MIN_VIEWS = 50 MIN_VIEWS = 50
MIN_VISITS = 20 MIN_VISITS = 20
   
class DownloadAnalytics(object): class DownloadAnalytics(object):
'''Downloads and stores analytics info''' '''Downloads and stores analytics info'''
   
def __init__(self, service=None, profile_id=None, delete_first=False): def __init__(self, service=None, profile_id=None, delete_first=False):
self.period = config['ga-report.period'] self.period = config['ga-report.period']
self.service = service self.service = service
self.profile_id = profile_id self.profile_id = profile_id
self.delete_first = delete_first self.delete_first = delete_first
   
    def specific_month(self, date):
        import calendar

        first_of_this_month = datetime.datetime(date.year, date.month, 1)
        _, last_day_of_month = calendar.monthrange(int(date.year), int(date.month))
        last_of_this_month = datetime.datetime(date.year, date.month, last_day_of_month)
        periods = ((date.strftime(FORMAT_MONTH),
                    last_day_of_month,
                    first_of_this_month, last_of_this_month),)
        self.download_and_store(periods)
   
   
    def latest(self):
        if self.period == 'monthly':
            # from first of this month to today
            now = datetime.datetime.now()
            first_of_this_month = datetime.datetime(now.year, now.month, 1)
            periods = ((now.strftime(FORMAT_MONTH),
                        now.day,
                        first_of_this_month, now),)
        else:
            raise NotImplementedError
        self.download_and_store(periods)
   
   
    def for_date(self, for_date):
        assert isinstance(for_date, datetime.datetime)
        periods = [] # (period_name, period_complete_day, start_date, end_date)
        if self.period == 'monthly':
            first_of_the_months_until_now = []
            year = for_date.year
            month = for_date.month
            now = datetime.datetime.now()
            first_of_this_month = datetime.datetime(now.year, now.month, 1)
            while True:
                first_of_the_month = datetime.datetime(year, month, 1)
                if first_of_the_month == first_of_this_month:
                    periods.append((now.strftime(FORMAT_MONTH),
                                    now.day,
                                    first_of_this_month, now))
                    break
                elif first_of_the_month < first_of_this_month:
                    in_the_next_month = first_of_the_month + datetime.timedelta(40)
                    last_of_the_month = datetime.datetime(in_the_next_month.year,
                                                          in_the_next_month.month, 1)\
                                                          - datetime.timedelta(1)
                    # Name the period after the month being covered, not the
                    # current month.
                    periods.append((first_of_the_month.strftime(FORMAT_MONTH), 0,
                                    first_of_the_month, last_of_the_month))
                else:
                    # first_of_the_month has got into the future somehow
                    break
                month += 1
                if month > 12:
                    year += 1
                    month = 1
        else:
            raise NotImplementedError
        self.download_and_store(periods)
   
    @staticmethod
    def get_full_period_name(period_name, period_complete_day):
        if period_complete_day:
            return period_name + ' (up to %ith)' % period_complete_day
        else:
            return period_name
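Worth noting: the `%ith` format simply appends a literal 'th' to the day number (so day 1 renders as '1th', not '1st'). A standalone sketch of the method's behaviour, mirroring the code above:

```python
def get_full_period_name(period_name, period_complete_day):
    # Standalone copy of the static method above, for illustration.
    if period_complete_day:
        return period_name + ' (up to %ith)' % period_complete_day
    else:
        return period_name

# A complete month (period_complete_day == 0) keeps the bare name.
```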
   
   
    def download_and_store(self, periods):
        for period_name, period_complete_day, start_date, end_date in periods:
            if self.delete_first:
                log.info('Deleting existing Analytics for period "%s"',
                         period_name)
                ga_model.delete(period_name)
            log.info('Downloading Analytics for period "%s" (%s - %s)',
                     self.get_full_period_name(period_name, period_complete_day),
                     start_date.strftime('%Y %m %d'),
                     end_date.strftime('%Y %m %d'))

            data = self.download(start_date, end_date, '~/dataset/[a-z0-9-_]+')
            log.info('Storing Dataset Analytics for period "%s"',
                     self.get_full_period_name(period_name, period_complete_day))
            self.store(period_name, period_complete_day, data)

            data = self.download(start_date, end_date, '~/publisher/[a-z0-9-_]+')
            log.info('Storing Publisher Analytics for period "%s"',
                     self.get_full_period_name(period_name, period_complete_day))
            self.store(period_name, period_complete_day, data)

            ga_model.update_publisher_stats(period_name) # about 30 seconds.
            self.sitewide_stats(period_name)

            self.update_social_info(period_name, start_date, end_date)
   
    def update_social_info(self, period_name, start_date, end_date):
        start_date = start_date.strftime('%Y-%m-%d')
        end_date = end_date.strftime('%Y-%m-%d')
        query = 'ga:hasSocialSourceReferral=~Yes$'
        metrics = 'ga:entrances'
        sort = '-ga:entrances'

        # Supported query params at
        # https://developers.google.com/analytics/devguides/reporting/core/v3/reference
        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            filters=query,
            start_date=start_date,
            metrics=metrics,
            sort=sort,
            dimensions="ga:landingPagePath,ga:socialNetwork",
            max_results=10000,
            end_date=end_date).execute()
        data = collections.defaultdict(list)
        rows = results.get('rows', [])
        from ga_model import _normalize_url
        for row in rows:
            data[_normalize_url(row[0])].append((row[1], int(row[2])))
        ga_model.update_social(period_name, data)
   
   
    def download(self, start_date, end_date, path='~/dataset/[a-z0-9-_]+'):
        '''Get data from GA for a given time period'''
        start_date = start_date.strftime('%Y-%m-%d')
        end_date = end_date.strftime('%Y-%m-%d')
        query = 'ga:pagePath=%s$' % path
        metrics = 'ga:uniquePageviews, ga:visitors'
        sort = '-ga:uniquePageviews'

        # Supported query params at
        # https://developers.google.com/analytics/devguides/reporting/core/v3/reference
        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            filters=query,
            start_date=start_date,
            metrics=metrics,
            sort=sort,
            dimensions="ga:pagePath",
            max_results=10000,
            end_date=end_date).execute()

        packages = []
        for entry in results.get('rows'):
            (loc, pageviews, visits) = entry
            packages.append(('http:/' + loc, pageviews, visits)) # Temporary hack
        return dict(url=packages)
   
    def store(self, period_name, period_complete_day, data):
        if 'url' in data:
            ga_model.update_url_stats(period_name, period_complete_day, data['url'])
   
    def sitewide_stats(self, period_name):
        import calendar
        year, month = period_name.split('-')
        _, last_day_of_month = calendar.monthrange(int(year), int(month))

        start_date = '%s-01' % period_name
        end_date = '%s-%s' % (period_name, last_day_of_month)
        print 'Sitewide_stats for %s (%s -> %s)' % (period_name, start_date, end_date)

        funcs = ['_totals_stats', '_social_stats', '_os_stats',
                 '_locale_stats', '_browser_stats', '_mobile_stats']
        for f in funcs:
            print ' + Fetching %s stats' % f.split('_')[1]
            getattr(self, f)(start_date, end_date, period_name)
   
    @staticmethod
    def _get_results(result_data, f):
        data = {}
        for result in result_data:
            key = f(result)
            data[key] = data.get(key, 0) + result[1]
        return data
   
    def _totals_stats(self, start_date, end_date, period_name):
        """ Fetches distinct totals, total pageviews etc """
        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            start_date=start_date,
            metrics='ga:uniquePageviews',
            sort='-ga:uniquePageviews',
            max_results=10000,
            end_date=end_date).execute()
        result_data = results.get('rows')
        ga_model.update_sitewide_stats(period_name, "Totals",
                                       {'Total page views': result_data[0][0]})

        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            start_date=start_date,
            metrics='ga:pageviewsPerVisit,ga:avgTimeOnSite,ga:percentNewVisits,ga:visitors',
            max_results=10000,
            end_date=end_date).execute()
        result_data = results.get('rows')
        data = {
            'Pages per visit': result_data[0][0],
            'Average time on site': result_data[0][1],
            'New visits': result_data[0][2],
            'Total visits': result_data[0][3],
        }
        ga_model.update_sitewide_stats(period_name, "Totals", data)
   
        # Bounces from /data. This url is specified in configuration because
        # for DGU we don't want /.
        path = config.get('ga-report.bounce_url', '/')
        print path
        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            filters='ga:pagePath=~%s$' % (path,),
            start_date=start_date,
            metrics='ga:bounces,ga:uniquePageviews',
            dimensions='ga:pagePath',
            max_results=10000,
            end_date=end_date).execute()
        result_data = results.get('rows')
        for row in result_data:
            if row[0] == path:
                bounce, total = [float(x) for x in row[1:]]
                pct = 100 * bounce / total
                print "%d bounces from %d total == %s" % (bounce, total, pct)
                ga_model.update_sitewide_stats(period_name, "Totals", {'Bounces': pct})
   
   
    def _locale_stats(self, start_date, end_date, period_name):
        """ Fetches stats about language and country """
        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            start_date=start_date,
            metrics='ga:uniquePageviews',
            sort='-ga:uniquePageviews',
            dimensions="ga:language,ga:country",
            max_results=10000,
            end_date=end_date).execute()
        result_data = results.get('rows')
        data = {}
        for result in result_data:
            data[result[0]] = data.get(result[0], 0) + int(result[2])
        self._filter_out_long_tail(data, MIN_VIEWS)
        ga_model.update_sitewide_stats(period_name, "Languages", data)

        data = {}
        for result in result_data:
            data[result[1]] = data.get(result[1], 0) + int(result[2])
        self._filter_out_long_tail(data, MIN_VIEWS)
        ga_model.update_sitewide_stats(period_name, "Country", data)
   
   
    def _social_stats(self, start_date, end_date, period_name):
        """ Finds out which social sites people are referred from """
        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            start_date=start_date,
            metrics='ga:uniquePageviews',
            sort='-ga:uniquePageviews',
            dimensions="ga:socialNetwork,ga:referralPath",
            max_results=10000,
            end_date=end_date).execute()
        result_data = results.get('rows')
        data = {}
        for result in result_data:
            if result[0] != '(not set)':
                data[result[0]] = data.get(result[0], 0) + int(result[2])
        self._filter_out_long_tail(data, 3)
        ga_model.update_sitewide_stats(period_name, "Social sources", data)
   
   
    def _os_stats(self, start_date, end_date, period_name):
        """ Operating system stats """
        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            start_date=start_date,
            metrics='ga:uniquePageviews',
            sort='-ga:uniquePageviews',
            dimensions="ga:operatingSystem,ga:operatingSystemVersion",
            max_results=10000,
            end_date=end_date).execute()
        result_data = results.get('rows')
        data = {}
        for result in result_data:
            data[result[0]] = data.get(result[0], 0) + int(result[2])
        self._filter_out_long_tail(data, MIN_VIEWS)
        ga_model.update_sitewide_stats(period_name, "Operating Systems", data)

        data = {}
        for result in result_data:
            if int(result[2]) >= MIN_VIEWS:
                key = "%s %s" % (result[0], result[1])
                data[key] = result[2]
        ga_model.update_sitewide_stats(period_name, "Operating Systems versions", data)
   
   
    def _browser_stats(self, start_date, end_date, period_name):
        """ Information about browsers and browser versions """
        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            start_date=start_date,
            metrics='ga:uniquePageviews',
            sort='-ga:uniquePageviews',
            dimensions="ga:browser,ga:browserVersion",
            max_results=10000,
            end_date=end_date).execute()
        result_data = results.get('rows')
        # e.g. [u'Firefox', u'19.0', u'20']

        data = {}
        for result in result_data:
            data[result[0]] = data.get(result[0], 0) + int(result[2])
        self._filter_out_long_tail(data, MIN_VIEWS)
        ga_model.update_sitewide_stats(period_name, "Browsers", data)

        data = {}
        for result in result_data:
            key = "%s %s" % (result[0], self._filter_browser_version(result[0], result[1]))
            data[key] = data.get(key, 0) + int(result[2])
        self._filter_out_long_tail(data, MIN_VIEWS)
        ga_model.update_sitewide_stats(period_name, "Browser versions", data)
   
    @classmethod
    def _filter_browser_version(cls, browser, version_str):
        '''
        Simplifies a browser version string if it is detailed.
        i.e. groups together Firefox 3.5.1 and 3.5.2 to be just 3.
        This is helpful when viewing stats and good to protect privacy.
        '''
        ver = version_str
        parts = ver.split('.')
        if len(parts) > 1:
            # Keep the major version only (both original branches did this).
            ver = parts[0]
        # Special case complex version nums
        if browser in ['Safari', 'Android Browser']:
            ver = parts[0]
            if len(ver) > 2:
                num_hidden_digits = len(ver) - 2
                ver = ver[0] + ver[1] + 'X' * num_hidden_digits
        return ver
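A standalone copy of the version-collapsing logic above, to illustrate the two cases: ordinary browsers keep only the major version, while long Safari/Android Browser build numbers are truncated to two significant digits plus 'X' padding:

```python
def filter_browser_version(browser, version_str):
    # Standalone mirror of _filter_browser_version above, for illustration.
    ver = version_str
    parts = ver.split('.')
    if len(parts) > 1:
        ver = parts[0]  # keep the major version only
    if browser in ['Safari', 'Android Browser']:
        ver = parts[0]
        if len(ver) > 2:
            # Mask trailing digits of long build numbers, e.g. 534 -> 53X
            ver = ver[0] + ver[1] + 'X' * (len(ver) - 2)
    return ver

# 'Firefox', '19.0' -> '19'; 'Safari', '534.55.3' -> '53X'
```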
   
    def _mobile_stats(self, start_date, end_date, period_name):
        """ Info about mobile devices """

        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            start_date=start_date,
            metrics='ga:uniquePageviews',
            sort='-ga:uniquePageviews',
            dimensions="ga:mobileDeviceBranding, ga:mobileDeviceInfo",
            max_results=10000,
            end_date=end_date).execute()

        result_data = results.get('rows')
        data = {}
        for result in result_data:
            data[result[0]] = data.get(result[0], 0) + int(result[2])
        self._filter_out_long_tail(data, MIN_VIEWS)
        ga_model.update_sitewide_stats(period_name, "Mobile brands", data)

        data = {}
        for result in result_data:
            data[result[1]] = data.get(result[1], 0) + int(result[2])
        self._filter_out_long_tail(data, MIN_VIEWS)
        ga_model.update_sitewide_stats(period_name, "Mobile devices", data)
   
    @classmethod
    def _filter_out_long_tail(cls, data, threshold=10):
        '''
        Given data which is a frequency distribution, filter out
        results which are below a threshold count. This is good to protect
        privacy.
        '''
        for key, value in data.items():
            if value < threshold:
                del data[key]
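The long-tail filter mutates the dict in place, dropping any key whose count falls under the threshold. A standalone sketch (note: the original targets Python 2, where `items()` returns a list; under Python 3 the iteration needs `list(data.items())` to allow deletion mid-loop):

```python
def filter_out_long_tail(data, threshold=10):
    # Standalone mirror of _filter_out_long_tail above; list() makes it
    # safe to delete keys while iterating (required on Python 3).
    for key, value in list(data.items()):
        if value < threshold:
            del data[key]

# e.g. {'en': 120, 'cy': 4} with the default threshold -> {'en': 120}
```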