ckanext-ga-report
=================

**Status:** Development

**CKAN Version:** 1.7.1+


Overview
--------

For creating detailed reports of CKAN analytics, including totals per group.

Whereas ckanext-googleanalytics focuses on providing page view stats for a recent period and for all time (aimed at end users), ckanext-ga-report is more interested in building regular periodic reports (more for site managers to monitor).

Contents of this extension:

* Use the CLI tool to download Google Analytics data for each time period into this extension's database tables
* Users can view the data as web page reports


Installation
------------
1. Activate your CKAN python environment and install this extension's software::

       $ . pyenv/bin/activate
       $ pip install -e git+https://github.com/datagovuk/ckanext-ga-report.git#egg=ckanext-ga-report

2. Ensure your development.ini (or similar) contains the info about your Google Analytics account and configuration::

       googleanalytics.id = UA-1010101-1
       googleanalytics.account = Account name (e.g. data.gov.uk, see top level item at https://www.google.com/analytics)
       googleanalytics.token.filepath = ~/pyenv/token.dat
       ga-report.period = monthly
       ga-report.bounce_url = /

   The ga-report.bounce_url specifies a particular path to record the bounce rate for. Typically it is / (the home page).

3. Set up this extension's database tables using a paster command. (Ensure your CKAN pyenv is still activated, run the command from ``src/ckanext-ga-report``, and alter the ``--config`` option to point to your site config file)::

       $ paster initdb --config=../ckan/development.ini

4. Enable the extension in your CKAN config file by adding it to ``ckan.plugins``::

       ckan.plugins = ga-report
Troubleshooting
---------------

* ``(ProgrammingError) relation "ga_url" does not exist``

  This means that the ``paster initdb`` step has not been run successfully. Refer to the installation instructions for this extension.
Authorization
-------------

Before you can access the data, you need to set up the OAuth details. Follow the `instructions <https://developers.google.com/analytics/resources/tutorials/hello-analytics-api>`_; the outcome is a file called credentials.json, which should look like credentials.json.template with the relevant fields completed. The steps are repeated below for convenience:

1. Visit the `Google APIs Console <https://code.google.com/apis/console>`_
2. Sign in and create a project or use an existing project.
3. In the `Services pane <https://code.google.com/apis/console#:services>`_, activate Analytics API for your project. If prompted, read and accept the terms of service.
4. Go to the `API Access pane <https://code.google.com/apis/console/#:access>`_
5. Click Create an OAuth 2.0 client ID....
6. Fill out the Branding Information fields and click Next.
7. In Client ID Settings, set Application type to Installed application.
8. Click Create client ID.
9. The details you need below are Client ID, Client secret, and Redirect URIs.

Once you have set up your credentials.json file, you can generate an OAuth token file with the following command, which will store the token in a file called token.dat once you have finished granting permission in the browser::

    $ paster getauthtoken --config=../ckan/development.ini

Now ensure you reference the correct path to your token.dat in your CKAN config file (e.g. development.ini)::

    googleanalytics.token.filepath = ~/pyenv/token.dat
Tutorial
--------

Download some GA data and store it in CKAN's database. (Ensure your CKAN pyenv is still activated, run the command from ``src/ckanext-ga-report``, and alter the ``--config`` option to point to your site config file)::

    $ paster loadanalytics latest --config=../ckan/development.ini

The final argument specifies how much data you want to retrieve; it can be:

* **all** - data for all time (since 2010)
* **latest** - (default) just the 'latest' data
* **YYYY-MM** - just data for the specified month
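The dispatch on this argument is straightforward; here is a standalone sketch (the function name is hypothetical, mirroring the logic the ``loadanalytics`` command uses):

```python
import datetime

def interpret_time_period(time_period='latest'):
    # 'all' and 'latest' are handled as keywords; anything else is
    # parsed as a YYYY-MM month string (a ValueError means a bad format).
    if time_period in ('all', 'latest'):
        return (time_period, None)
    return ('month', datetime.datetime.strptime(time_period, '%Y-%m'))
```

For example, ``interpret_time_period('2012-11')`` yields November 2012, the same parse the command performs before fetching that specific month.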
Software Licence
================

This software is developed by Cabinet Office. It is Crown Copyright and opened up under the Open Government Licence (OGL) (which is compatible with the Creative Commons Attribution License).

OGL terms: http://www.nationalarchives.gov.uk/doc/open-government-licence/
import logging
import datetime
import os

from pylons import config

from ckan.lib.cli import CkanCommand
# No other CKAN imports allowed until _load_config is run,
# or logging is disabled
class InitDB(CkanCommand):
    """Initialise the extension's database tables
    """
    summary = __doc__.split('\n')[0]
    usage = __doc__
    max_args = 0
    min_args = 0

    def command(self):
        self._load_config()

        import ckan.model as model
        model.Session.remove()
        model.Session.configure(bind=model.meta.engine)
        log = logging.getLogger('ckanext.ga-report')

        import ga_model
        ga_model.init_tables()
        log.info("DB tables are setup")
class GetAuthToken(CkanCommand):
    """Gets the Google auth token

    Usage: paster getauthtoken <credentials_file>

    Where <credentials_file> is the file name containing the details
    for the service (obtained from https://code.google.com/apis/console).
    By default this is set to credentials.json
    """
    summary = __doc__.split('\n')[0]
    usage = __doc__
    max_args = 1
    min_args = 0

    def command(self):
        """
        In this case we don't want a valid service, but rather just to
        force the user through the auth flow. We allow this to complete to
        act as a form of verification instead of just getting the token and
        assuming it is correct.
        """
        from ga_auth import init_service
        init_service('token.dat',
                     self.args[0] if self.args
                     else 'credentials.json')
class LoadAnalytics(CkanCommand):
    """Get data from Google Analytics API and save it
    in the ga_model

    Usage: paster loadanalytics <time-period>

    Where <time-period> is:
        all     - data for all time
        latest  - (default) just the 'latest' data
        YYYY-MM - just data for the specific month
    """
    summary = __doc__.split('\n')[0]
    usage = __doc__
    max_args = 1
    min_args = 0

    def __init__(self, name):
        super(LoadAnalytics, self).__init__(name)
        self.parser.add_option('-d', '--delete-first',
                               action='store_true',
                               default=False,
                               dest='delete_first',
                               help='Delete data for the period first')
        self.parser.add_option('-s', '--skip_url_stats',
                               action='store_true',
                               default=False,
                               dest='skip_url_stats',
                               help='Skip the download of URL data - just do site-wide stats')
    def command(self):
        self._load_config()

        from download_analytics import DownloadAnalytics
        from ga_auth import (init_service, get_profile_id)

        ga_token_filepath = os.path.expanduser(config.get('googleanalytics.token.filepath', ''))
        if not ga_token_filepath:
            print 'ERROR: In the CKAN config you need to specify the filepath of the ' \
                  'Google Analytics token file under key: googleanalytics.token.filepath'
            return

        try:
            svc = init_service(ga_token_filepath, None)
        except TypeError:
            print ('Have you correctly run the getauthtoken task and '
                   'specified the correct token file in the CKAN config under '
                   '"googleanalytics.token.filepath"?')
            return

        downloader = DownloadAnalytics(svc, profile_id=get_profile_id(svc),
                                       delete_first=self.options.delete_first,
                                       skip_url_stats=self.options.skip_url_stats)

        time_period = self.args[0] if self.args else 'latest'
        if time_period == 'all':
            downloader.all_()
        elif time_period == 'latest':
            downloader.latest()
        else:
            # The month to use
            for_date = datetime.datetime.strptime(time_period, '%Y-%m')
            downloader.specific_month(for_date)
import re
import csv
import sys
import logging
import operator
import collections

from ckan.lib.base import (BaseController, c, g, render, request, response, abort)

import sqlalchemy
from sqlalchemy import func, cast, Integer
import ckan.model as model

from ga_model import GA_Url, GA_Stat, GA_ReferralStat

log = logging.getLogger('ckanext.ga-report')
def _get_month_name(strdate):
    import calendar
    from time import strptime
    d = strptime(strdate, '%Y-%m')
    return '%s %s' % (calendar.month_name[d.tm_mon], d.tm_year)


def _month_details(cls):
    '''Returns a list of all the month names'''
    months = []
    vals = model.Session.query(cls.period_name).filter(cls.period_name != 'All').distinct().all()
    for m in vals:
        months.append((m[0], _get_month_name(m[0])))
    return sorted(months, key=operator.itemgetter(0), reverse=True)
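As a quick illustration of the month-name helper above, here is a standalone copy of its logic (not an import from the extension):

```python
import calendar
from time import strptime

def month_name(strdate):
    # Same logic as _get_month_name: parse a 'YYYY-MM' period key
    # and render it as a human-readable month name.
    d = strptime(strdate, '%Y-%m')
    return '%s %s' % (calendar.month_name[d.tm_mon], d.tm_year)
```

So a period key like '2012-11' is displayed as 'November 2012', which is what `_month_details` pairs with each raw key.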
class GaReport(BaseController):

    def csv(self, month):
        import csv

        q = model.Session.query(GA_Stat)
        if month != 'all':
            q = q.filter(GA_Stat.period_name == month)
        entries = q.order_by('GA_Stat.period_name, GA_Stat.stat_name, GA_Stat.key').all()

        response.headers['Content-Type'] = "text/csv; charset=utf-8"
        response.headers['Content-Disposition'] = str('attachment; filename=stats_%s.csv' % (month,))

        writer = csv.writer(response)
        writer.writerow(["Period", "Statistic", "Key", "Value"])

        for entry in entries:
            writer.writerow([entry.period_name.encode('utf-8'),
                             entry.stat_name.encode('utf-8'),
                             entry.key.encode('utf-8'),
                             entry.value.encode('utf-8')])
    def index(self):

        # Get the month details by fetching distinct values and determining the
        # month names from the values.
        c.months = _month_details(GA_Stat)

        # Work out which month to show, based on query params of the first item
        c.month_desc = 'all months'
        c.month = request.params.get('month', '')
        if c.month:
            c.month_desc = ''.join([m[1] for m in c.months if m[0] == c.month])

        q = model.Session.query(GA_Stat).\
            filter(GA_Stat.stat_name == 'Totals')
        if c.month:
            q = q.filter(GA_Stat.period_name == c.month)
        entries = q.order_by('ga_stat.key').all()

        def clean_key(key, val):
            if key in ['Average time on site', 'Pages per visit', 'New visits', 'Bounce rate (home page)']:
                val = "%.2f" % round(float(val), 2)
                if key == 'Average time on site':
                    mins, secs = divmod(float(val), 60)
                    hours, mins = divmod(mins, 60)
                    val = '%02d:%02d:%02d (%s seconds) ' % (hours, mins, secs, val)
                if key in ['New visits', 'Bounce rate (home page)']:
                    val = "%s%%" % val
            if key in ['Total page views', 'Total visits']:
                val = int(val)
            return key, val
        c.global_totals = []
        if c.month:
            for e in entries:
                key, val = clean_key(e.key, e.value)
                c.global_totals.append((key, val))
        else:
            d = collections.defaultdict(list)
            for e in entries:
                d[e.key].append(float(e.value))
            for k, v in d.iteritems():
                if k in ['Total page views', 'Total visits']:
                    v = sum(v)
                else:
                    v = float(sum(v)) / len(v)
                key, val = clean_key(k, v)
                c.global_totals.append((key, val))
        c.global_totals = sorted(c.global_totals, key=operator.itemgetter(0))

        keys = {
            'Browser versions': 'browser_versions',
            'Browsers': 'browsers',
            'Operating Systems versions': 'os_versions',
            'Operating Systems': 'os',
            'Social sources': 'social_networks',
            'Languages': 'languages',
            'Country': 'country'
        }

        def shorten_name(name, length=60):
            return (name[:length] + '..') if len(name) > length else name

        def fill_out_url(url):
            import urlparse
            return urlparse.urljoin(g.site_url, url)
        c.social_referrer_totals, c.social_referrers = [], []
        q = model.Session.query(GA_ReferralStat)
        q = q.filter(GA_ReferralStat.period_name == c.month) if c.month else q
        q = q.order_by('ga_referrer.count::int desc')
        for entry in q.all():
            c.social_referrers.append((shorten_name(entry.url), fill_out_url(entry.url),
                                       entry.source, entry.count))

        q = model.Session.query(GA_ReferralStat.url,
                                func.sum(GA_ReferralStat.count).label('count'))
        q = q.filter(GA_ReferralStat.period_name == c.month) if c.month else q
        q = q.order_by('count desc').group_by(GA_ReferralStat.url)
        for entry in q.all():
            c.social_referrer_totals.append((shorten_name(entry[0]), fill_out_url(entry[0]), '',
                                             entry[1]))

        for k, v in keys.iteritems():
            q = model.Session.query(GA_Stat).\
                filter(GA_Stat.stat_name == k)
            if c.month:
                q = q.filter(GA_Stat.period_name == c.month).\
                    order_by('ga_stat.value::int desc')

            d = collections.defaultdict(int)
            for e in q.all():
                d[e.key] += int(e.value)
            entries = []
            for key, val in d.iteritems():
                entries.append((key, val,))
            entries = sorted(entries, key=operator.itemgetter(1), reverse=True)

            # Get the total for each set of values and then set the value as
            # a percentage of the total
            if k == 'Social sources':
                total = sum([x for n, x in c.global_totals if n == 'Total visits'])
            else:
                total = sum([num for _, num in entries])
            setattr(c, v, [(k, _percent(v, total)) for k, v in entries])

        return render('ga_report/site/index.html')
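The 'Average time on site' formatting in `clean_key` above uses two `divmod` calls to split a second count into hours, minutes and seconds; a standalone sketch of that step (the helper name is hypothetical):

```python
def format_duration(seconds):
    # Mirror clean_key's formatting: seconds -> HH:MM:SS.
    mins, secs = divmod(float(seconds), 60)
    hours, mins = divmod(mins, 60)
    return '%02d:%02d:%02d' % (hours, mins, secs)
```

For instance, 3675 seconds formats as '01:01:15'.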
class GaDatasetReport(BaseController):
    """
    Displays the pageview and visit count for datasets
    with options to filter by publisher and time period.
    """

    def publisher_csv(self, month):
        '''
        Returns a CSV of each publisher with the total number of dataset
        views & visits.
        '''
        c.month = month if not month == 'all' else ''
        response.headers['Content-Type'] = "text/csv; charset=utf-8"
        response.headers['Content-Disposition'] = str('attachment; filename=publishers_%s.csv' % (month,))

        writer = csv.writer(response)
        writer.writerow(["Publisher Title", "Publisher Name", "Views", "Visits", "Period Name"])

        for publisher, view, visit in _get_top_publishers(None):
            writer.writerow([publisher.title.encode('utf-8'),
                             publisher.name.encode('utf-8'),
                             view,
                             visit,
                             month])
    def dataset_csv(self, id='all', month='all'):
        '''
        Returns a CSV with the number of views & visits for each dataset.

        :param id: A Publisher ID or 'all' if you want for all
        :param month: The time period, or 'all'
        '''
        c.month = month if not month == 'all' else ''
        # Default to no publisher filter; the filename below relies on these.
        c.publisher = None
        c.publisher_name = id
        if id != 'all':
            c.publisher = model.Group.get(id)
            if not c.publisher:
                abort(404, 'A publisher with that name could not be found')
            c.publisher_name = c.publisher.name
        packages = self._get_packages(c.publisher)

        response.headers['Content-Type'] = "text/csv; charset=utf-8"
        response.headers['Content-Disposition'] = \
            str('attachment; filename=datasets_%s_%s.csv' % (c.publisher_name, month,))

        writer = csv.writer(response)
        writer.writerow(["Dataset Title", "Dataset Name", "Views", "Visits", "Period Name"])

        for package, view, visit in packages:
            writer.writerow([package.title.encode('utf-8'),
                             package.name.encode('utf-8'),
                             view,
                             visit,
                             month])
    def publishers(self):
        '''A list of publishers and the number of views/visits for each'''

        # Get the month details by fetching distinct values and determining the
        # month names from the values.
        c.months = _month_details(GA_Url)

        # Work out which month to show, based on query params of the first item
        c.month = request.params.get('month', '')
        c.month_desc = 'all months'
        if c.month:
            c.month_desc = ''.join([m[1] for m in c.months if m[0] == c.month])

        c.top_publishers = _get_top_publishers()
        return render('ga_report/publisher/index.html')
    def _get_packages(self, publisher=None, count=-1):
        '''Returns the datasets in order of views'''
        if count == -1:
            count = sys.maxint

        month = c.month or 'All'

        q = model.Session.query(GA_Url, model.Package)\
            .filter(model.Package.name == GA_Url.package_id)\
            .filter(GA_Url.url.like('/dataset/%'))
        if publisher:
            q = q.filter(GA_Url.department_id == publisher.name)
        q = q.filter(GA_Url.period_name == month)
        q = q.order_by('ga_url.pageviews::int desc')
        top_packages = []
        for entry, package in q.limit(count):
            if package:
                top_packages.append((package, entry.pageviews, entry.visits))
            else:
                log.warning('Could not find package for url %r', entry.url)
        return top_packages
def read(self): | def read(self): |
''' | ''' |
Lists the most popular datasets across all publishers | Lists the most popular datasets across all publishers |
''' | ''' |
return self.read_publisher(None) | return self.read_publisher(None) |
def read_publisher(self, id): | def read_publisher(self, id): |
''' | ''' |
Lists the most popular datasets for a publisher (or across all publishers) | Lists the most popular datasets for a publisher (or across all publishers) |
''' | ''' |
count = 20 | count = 20 |
c.publishers = _get_publishers() | c.publishers = _get_publishers() |
id = request.params.get('publisher', id) | id = request.params.get('publisher', id) |
if id and id != 'all': | if id and id != 'all': |
c.publisher = model.Group.get(id) | c.publisher = model.Group.get(id) |
if not c.publisher: | if not c.publisher: |
abort(404, 'A publisher with that name could not be found') | abort(404, 'A publisher with that name could not be found') |
c.publisher_name = c.publisher.name | c.publisher_name = c.publisher.name |
c.top_packages = [] # package, dataset_views in c.top_packages | c.top_packages = [] # package, dataset_views in c.top_packages |
# Get the month details by fetching distinct values and determining the | # Get the month details by fetching distinct values and determining the |
# month names from the values. | # month names from the values. |
c.months = _month_details(GA_Url) | c.months = _month_details(GA_Url) |
# Work out which month to show, based on query params of the first item | # Work out which month to show, based on query params of the first item |
c.month = request.params.get('month', '') | c.month = request.params.get('month', '') |
if not c.month: | if not c.month: |
c.month_desc = 'all months' | c.month_desc = 'all months' |
else: | else: |
c.month_desc = ''.join([m[1] for m in c.months if m[0]==c.month]) | c.month_desc = ''.join([m[1] for m in c.months if m[0]==c.month]) |
month = c.month or 'All'
c.publisher_page_views = 0 | c.publisher_page_views = 0 |
q = model.Session.query(GA_Url).\ | q = model.Session.query(GA_Url).\ |
filter(GA_Url.url=='/publisher/%s' % c.publisher_name) | filter(GA_Url.url=='/publisher/%s' % c.publisher_name) |
entry = q.filter(GA_Url.period_name==c.month).first() | entry = q.filter(GA_Url.period_name==c.month).first() |
c.publisher_page_views = entry.pageviews if entry else 0 | c.publisher_page_views = entry.pageviews if entry else 0 |
c.top_packages = self._get_packages(c.publisher, 20) | c.top_packages = self._get_packages(c.publisher, 20) |
return render('ga_report/publisher/read.html') | return render('ga_report/publisher/read.html') |
def _get_top_publishers(limit=20): | def _get_top_publishers(limit=20): |
''' | ''' |
Returns a list of the top publishers by dataset visits.
(The number to show can be varied with 'limit') | (The number to show can be varied with 'limit') |
''' | ''' |
month = c.month or 'All' | |
connection = model.Session.connection() | connection = model.Session.connection() |
q = """
    select department_id, sum(pageviews::int) views, sum(visitors::int) visits
    from ga_url
    where department_id <> ''
      and package_id <> ''
      and url like '/dataset/%%'
      and period_name=%s
    group by department_id order by views desc
    """
if limit:
    q = q + " limit %s;" % (limit)
top_publishers = []
res = connection.execute(q, month)
for row in res: | for row in res: |
g = model.Group.get(row[0]) | g = model.Group.get(row[0]) |
if g: | if g: |
top_publishers.append((g, row[1], row[2])) | top_publishers.append((g, row[1], row[2])) |
return top_publishers | return top_publishers |
def _get_publishers(): | def _get_publishers(): |
''' | ''' |
Returns a list of all publishers. Each item is a tuple: | Returns a list of all publishers. Each item is a tuple: |
(name, title)
''' | ''' |
publishers = [] | publishers = [] |
for pub in model.Session.query(model.Group).\ | for pub in model.Session.query(model.Group).\ |
filter(model.Group.type=='publisher').\ | filter(model.Group.type=='publisher').\ |
filter(model.Group.state=='active').\ | filter(model.Group.state=='active').\ |
order_by(model.Group.name): | order_by(model.Group.name): |
publishers.append((pub.name, pub.title)) | publishers.append((pub.name, pub.title)) |
return publishers | return publishers |
def _percent(num, total): | def _percent(num, total): |
p = 100 * float(num)/float(total) | p = 100 * float(num)/float(total) |
return "%.2f%%" % round(p, 2) | return "%.2f%%" % round(p, 2) |
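
# Example (hypothetical numbers): _percent(25, 200) returns '12.50%'.
# Callers are expected to guard against total == 0, since this helper
# would raise ZeroDivisionError.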
import os | import os |
import logging | import logging |
import datetime | import datetime |
import collections | import collections |
from pylons import config | from pylons import config |
from ga_model import _normalize_url | |
import ga_model | import ga_model |
#from ga_client import GA | #from ga_client import GA |
log = logging.getLogger('ckanext.ga-report') | log = logging.getLogger('ckanext.ga-report') |
FORMAT_MONTH = '%Y-%m' | FORMAT_MONTH = '%Y-%m' |
MIN_VIEWS = 50 | MIN_VIEWS = 50 |
MIN_VISITS = 20 | MIN_VISITS = 20 |
class DownloadAnalytics(object): | class DownloadAnalytics(object): |
'''Downloads and stores analytics info''' | '''Downloads and stores analytics info''' |
def __init__(self, service=None, profile_id=None, delete_first=False,
skip_url_stats=False): | |
self.period = config['ga-report.period'] | self.period = config['ga-report.period'] |
self.service = service | self.service = service |
self.profile_id = profile_id | self.profile_id = profile_id |
self.delete_first = delete_first | self.delete_first = delete_first |
self.skip_url_stats = skip_url_stats | |
def specific_month(self, date): | def specific_month(self, date): |
import calendar | import calendar |
first_of_this_month = datetime.datetime(date.year, date.month, 1) | first_of_this_month = datetime.datetime(date.year, date.month, 1) |
_, last_day_of_month = calendar.monthrange(int(date.year), int(date.month)) | _, last_day_of_month = calendar.monthrange(int(date.year), int(date.month)) |
last_of_this_month = datetime.datetime(date.year, date.month, last_day_of_month) | last_of_this_month = datetime.datetime(date.year, date.month, last_day_of_month) |
periods = ((date.strftime(FORMAT_MONTH), | periods = ((date.strftime(FORMAT_MONTH), |
last_day_of_month, | last_day_of_month, |
first_of_this_month, last_of_this_month),) | first_of_this_month, last_of_this_month),) |
self.download_and_store(periods) | self.download_and_store(periods) |
def latest(self): | def latest(self): |
if self.period == 'monthly': | if self.period == 'monthly': |
# from first of this month to today | # from first of this month to today |
now = datetime.datetime.now() | now = datetime.datetime.now() |
first_of_this_month = datetime.datetime(now.year, now.month, 1) | first_of_this_month = datetime.datetime(now.year, now.month, 1) |
periods = ((now.strftime(FORMAT_MONTH), | periods = ((now.strftime(FORMAT_MONTH), |
now.day, | now.day, |
first_of_this_month, now),) | first_of_this_month, now),) |
else: | else: |
raise NotImplementedError | raise NotImplementedError |
self.download_and_store(periods) | self.download_and_store(periods) |
def for_date(self, for_date): | def for_date(self, for_date): |
assert isinstance(for_date, datetime.datetime)
periods = [] # (period_name, period_complete_day, start_date, end_date) | periods = [] # (period_name, period_complete_day, start_date, end_date) |
if self.period == 'monthly': | if self.period == 'monthly': |
first_of_the_months_until_now = [] | first_of_the_months_until_now = [] |
year = for_date.year | year = for_date.year |
month = for_date.month | month = for_date.month |
now = datetime.datetime.now() | now = datetime.datetime.now() |
first_of_this_month = datetime.datetime(now.year, now.month, 1) | first_of_this_month = datetime.datetime(now.year, now.month, 1) |
while True: | while True: |
first_of_the_month = datetime.datetime(year, month, 1) | first_of_the_month = datetime.datetime(year, month, 1) |
if first_of_the_month == first_of_this_month: | if first_of_the_month == first_of_this_month: |
periods.append((now.strftime(FORMAT_MONTH), | periods.append((now.strftime(FORMAT_MONTH), |
now.day, | now.day, |
first_of_this_month, now)) | first_of_this_month, now)) |
break | break |
elif first_of_the_month < first_of_this_month: | elif first_of_the_month < first_of_this_month: |
in_the_next_month = first_of_the_month + datetime.timedelta(40) | in_the_next_month = first_of_the_month + datetime.timedelta(40) |
last_of_the_month = datetime.datetime(in_the_next_month.year, | last_of_the_month = datetime.datetime(in_the_next_month.year, |
in_the_next_month.month, 1)\ | in_the_next_month.month, 1)\ |
- datetime.timedelta(1) | - datetime.timedelta(1) |
periods.append((now.strftime(FORMAT_MONTH), 0, | periods.append((now.strftime(FORMAT_MONTH), 0, |
first_of_the_month, last_of_the_month)) | first_of_the_month, last_of_the_month)) |
else: | else: |
# first_of_the_month has got to the future somehow | # first_of_the_month has got to the future somehow |
break | break |
month += 1 | month += 1 |
if month > 12: | if month > 12: |
year += 1 | year += 1 |
month = 1 | month = 1 |
else: | else: |
raise NotImplementedError | raise NotImplementedError |
self.download_and_store(periods) | self.download_and_store(periods) |
@staticmethod | @staticmethod |
def get_full_period_name(period_name, period_complete_day): | def get_full_period_name(period_name, period_complete_day): |
if period_complete_day: | if period_complete_day: |
return period_name + ' (up to %ith)' % period_complete_day | return period_name + ' (up to %ith)' % period_complete_day |
else: | else: |
return period_name | return period_name |
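
# Illustrative values for the period naming above:
#   get_full_period_name('2013-02', 14) -> '2013-02 (up to 14th)'
#   get_full_period_name('2013-01', 0)  -> '2013-01'
# Note the 'th' suffix is naive (day 1 gives '1th'), which is tolerable
# for a log label.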
def download_and_store(self, periods): | def download_and_store(self, periods): |
for period_name, period_complete_day, start_date, end_date in periods: | for period_name, period_complete_day, start_date, end_date in periods: |
if self.delete_first:
    log.info('Deleting existing Analytics for period "%s"',
             period_name)
    ga_model.delete(period_name)

if not self.skip_url_stats:
    # Clean out old url data before storing the new
    ga_model.pre_update_url_stats(period_name)

    accountName = config.get('googleanalytics.account')

    log.info('Downloading analytics for dataset views')
    data = self.download(start_date, end_date,
                         '~/%s/dataset/[a-z0-9-_]+' % accountName)
    log.info('Storing dataset views (%i rows)', len(data.get('url')))
    self.store(period_name, period_complete_day, data)

    log.info('Downloading analytics for publisher views')
    data = self.download(start_date, end_date,
                         '~/%s/publisher/[a-z0-9-_]+' % accountName)
    log.info('Storing publisher views (%i rows)', len(data.get('url')))
    self.store(period_name, period_complete_day, data)

    log.info('Aggregating datasets by publisher')
    ga_model.update_publisher_stats(period_name)  # about 30 seconds
log.info('Downloading and storing analytics for site-wide stats') | |
self.sitewide_stats(period_name)
log.info('Downloading and storing analytics for social networks') | |
self.update_social_info(period_name, start_date, end_date) | self.update_social_info(period_name, start_date, end_date) |
def update_social_info(self, period_name, start_date, end_date): | def update_social_info(self, period_name, start_date, end_date): |
start_date = start_date.strftime('%Y-%m-%d') | start_date = start_date.strftime('%Y-%m-%d') |
end_date = end_date.strftime('%Y-%m-%d') | end_date = end_date.strftime('%Y-%m-%d') |
query = 'ga:hasSocialSourceReferral=~Yes$' | query = 'ga:hasSocialSourceReferral=~Yes$' |
metrics = 'ga:entrances' | metrics = 'ga:entrances' |
sort = '-ga:entrances' | sort = '-ga:entrances' |
# Supported query params at | # Supported query params at |
# https://developers.google.com/analytics/devguides/reporting/core/v3/reference | # https://developers.google.com/analytics/devguides/reporting/core/v3/reference |
results = self.service.data().ga().get( | results = self.service.data().ga().get( |
ids='ga:' + self.profile_id, | ids='ga:' + self.profile_id, |
filters=query, | filters=query, |
start_date=start_date, | start_date=start_date, |
metrics=metrics, | metrics=metrics, |
sort=sort, | sort=sort, |
dimensions="ga:landingPagePath,ga:socialNetwork", | dimensions="ga:landingPagePath,ga:socialNetwork", |
max_results=10000, | max_results=10000, |
end_date=end_date).execute() | end_date=end_date).execute() |
data = collections.defaultdict(list) | data = collections.defaultdict(list) |
rows = results.get('rows',[]) | rows = results.get('rows',[]) |
for row in rows: | for row in rows: |
data[_normalize_url(row[0])].append( (row[1], int(row[2]),) ) | data[_normalize_url(row[0])].append( (row[1], int(row[2]),) ) |
ga_model.update_social(period_name, data) | ga_model.update_social(period_name, data) |
def download(self, start_date, end_date, path=None):
'''Get data from GA for a given time period''' | '''Get data from GA for a given time period''' |
start_date = start_date.strftime('%Y-%m-%d') | start_date = start_date.strftime('%Y-%m-%d') |
end_date = end_date.strftime('%Y-%m-%d') | end_date = end_date.strftime('%Y-%m-%d') |
query = 'ga:pagePath=%s$' % path | query = 'ga:pagePath=%s$' % path |
metrics = 'ga:pageviews, ga:visits'
sort = '-ga:pageviews'
# Supported query params at | # Supported query params at |
# https://developers.google.com/analytics/devguides/reporting/core/v3/reference | # https://developers.google.com/analytics/devguides/reporting/core/v3/reference |
results = self.service.data().ga().get( | results = self.service.data().ga().get( |
ids='ga:' + self.profile_id, | ids='ga:' + self.profile_id, |
filters=query, | filters=query, |
start_date=start_date, | start_date=start_date, |
metrics=metrics, | metrics=metrics, |
sort=sort, | sort=sort, |
dimensions="ga:pagePath", | dimensions="ga:pagePath", |
max_results=10000, | max_results=10000, |
end_date=end_date).execute() | end_date=end_date).execute() |
packages = [] | packages = [] |
for entry in results.get('rows'): | for entry in results.get('rows'): |
(loc,pageviews,visits) = entry | (loc,pageviews,visits) = entry |
url = _normalize_url('http:/' + loc)  # strips off domain e.g. www.data.gov.uk or data.gov.uk
if not url.startswith('/dataset/') and not url.startswith('/publisher/'): | |
# filter out strays like: | |
# /data/user/login?came_from=http://data.gov.uk/dataset/os-code-point-open | |
# /403.html?page=/about&from=http://data.gov.uk/publisher/planning-inspectorate | |
continue | |
packages.append( (url, pageviews, visits,) ) # Temporary hack | |
return dict(url=packages) | return dict(url=packages) |
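
# The returned structure looks like (illustrative values only):
#   {'url': [('/dataset/some-name', u'123', u'97'), ...]}
# where the counts are the string values handed back by the GA API.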
def store(self, period_name, period_complete_day, data): | def store(self, period_name, period_complete_day, data): |
if 'url' in data: | if 'url' in data: |
ga_model.update_url_stats(period_name, period_complete_day, data['url']) | ga_model.update_url_stats(period_name, period_complete_day, data['url']) |
def sitewide_stats(self, period_name): | def sitewide_stats(self, period_name): |
import calendar | import calendar |
year, month = period_name.split('-') | year, month = period_name.split('-') |
_, last_day_of_month = calendar.monthrange(int(year), int(month)) | _, last_day_of_month = calendar.monthrange(int(year), int(month)) |
start_date = '%s-01' % period_name | start_date = '%s-01' % period_name |
end_date = '%s-%s' % (period_name, last_day_of_month) | end_date = '%s-%s' % (period_name, last_day_of_month) |
funcs = ['_totals_stats', '_social_stats', '_os_stats', | funcs = ['_totals_stats', '_social_stats', '_os_stats', |
'_locale_stats', '_browser_stats', '_mobile_stats'] | '_locale_stats', '_browser_stats', '_mobile_stats'] |
for f in funcs: | for f in funcs: |
log.info('Downloading analytics for %s', f.split('_')[1])
getattr(self, f)(start_date, end_date, period_name) | getattr(self, f)(start_date, end_date, period_name) |
def _get_results(result_data, f): | def _get_results(result_data, f): |
data = {} | data = {} |
for result in result_data: | for result in result_data: |
key = f(result) | key = f(result) |
data[key] = data.get(key,0) + result[1] | data[key] = data.get(key,0) + result[1] |
return data | return data |
def _totals_stats(self, start_date, end_date, period_name): | def _totals_stats(self, start_date, end_date, period_name): |
""" Fetches distinct totals, total pageviews etc """ | """ Fetches distinct totals, total pageviews etc """ |
results = self.service.data().ga().get( | results = self.service.data().ga().get( |
ids='ga:' + self.profile_id, | ids='ga:' + self.profile_id, |
start_date=start_date, | start_date=start_date, |
metrics='ga:pageviews',
sort='-ga:pageviews',
max_results=10000, | max_results=10000, |
end_date=end_date).execute() | end_date=end_date).execute() |
result_data = results.get('rows') | result_data = results.get('rows') |
ga_model.update_sitewide_stats(period_name, "Totals", {'Total page views': result_data[0][0]}) | ga_model.update_sitewide_stats(period_name, "Totals", {'Total page views': result_data[0][0]}) |
results = self.service.data().ga().get( | results = self.service.data().ga().get( |
ids='ga:' + self.profile_id, | ids='ga:' + self.profile_id, |
start_date=start_date, | start_date=start_date, |
metrics='ga:pageviewsPerVisit,ga:avgTimeOnSite,ga:percentNewVisits,ga:visits', | metrics='ga:pageviewsPerVisit,ga:avgTimeOnSite,ga:percentNewVisits,ga:visits', |
max_results=10000, | max_results=10000, |
end_date=end_date).execute() | end_date=end_date).execute() |
result_data = results.get('rows') | result_data = results.get('rows') |
data = { | data = { |
'Pages per visit': result_data[0][0], | 'Pages per visit': result_data[0][0], |
'Average time on site': result_data[0][1], | 'Average time on site': result_data[0][1], |
'New visits': result_data[0][2], | 'New visits': result_data[0][2], |
'Total visits': result_data[0][3], | 'Total visits': result_data[0][3], |
} | } |
ga_model.update_sitewide_stats(period_name, "Totals", data) | ga_model.update_sitewide_stats(period_name, "Totals", data) |
# Bounces from / (or another configurable page).
path = '/%s%s' % (config.get('googleanalytics.account'),
                  config.get('ga-report.bounce_url', '/'))
results = self.service.data().ga().get(
                         ids='ga:' + self.profile_id,
                         filters='ga:pagePath==%s' % (path,),
                         start_date=start_date,
                         metrics='ga:bounces,ga:pageviews',
                         dimensions='ga:pagePath',
                         max_results=10000,
                         end_date=end_date).execute()
result_data = results.get('rows')
if not result_data or len(result_data) != 1:
    log.error('Could not pinpoint the bounces for path: %s. Got results: %r',
              path, result_data)
    return
bounces, total = [float(x) for x in result_data[0][1:]]
pct = 100 * bounces / total
log.info('%d bounces from %d total == %s', bounces, total, pct)
ga_model.update_sitewide_stats(period_name, "Totals",
                               {'Bounce rate (home page)': pct})
def _locale_stats(self, start_date, end_date, period_name): | def _locale_stats(self, start_date, end_date, period_name): |
""" Fetches stats about language and country """ | """ Fetches stats about language and country """ |
results = self.service.data().ga().get( | results = self.service.data().ga().get( |
ids='ga:' + self.profile_id, | ids='ga:' + self.profile_id, |
start_date=start_date, | start_date=start_date, |
metrics='ga:pageviews',
sort='-ga:pageviews',
dimensions="ga:language,ga:country", | dimensions="ga:language,ga:country", |
max_results=10000, | max_results=10000, |
end_date=end_date).execute() | end_date=end_date).execute() |
result_data = results.get('rows') | result_data = results.get('rows') |
data = {} | data = {} |
for result in result_data: | for result in result_data: |
data[result[0]] = data.get(result[0], 0) + int(result[2]) | data[result[0]] = data.get(result[0], 0) + int(result[2]) |
self._filter_out_long_tail(data, MIN_VIEWS) | self._filter_out_long_tail(data, MIN_VIEWS) |
ga_model.update_sitewide_stats(period_name, "Languages", data) | ga_model.update_sitewide_stats(period_name, "Languages", data) |
data = {} | data = {} |
for result in result_data: | for result in result_data: |
data[result[1]] = data.get(result[1], 0) + int(result[2]) | data[result[1]] = data.get(result[1], 0) + int(result[2]) |
self._filter_out_long_tail(data, MIN_VIEWS) | self._filter_out_long_tail(data, MIN_VIEWS) |
ga_model.update_sitewide_stats(period_name, "Country", data) | ga_model.update_sitewide_stats(period_name, "Country", data) |
def _social_stats(self, start_date, end_date, period_name): | def _social_stats(self, start_date, end_date, period_name): |
""" Finds out which social sites people are referred from """ | """ Finds out which social sites people are referred from """ |
results = self.service.data().ga().get( | results = self.service.data().ga().get( |
ids='ga:' + self.profile_id, | ids='ga:' + self.profile_id, |
start_date=start_date, | start_date=start_date, |
metrics='ga:pageviews',
sort='-ga:pageviews',
dimensions="ga:socialNetwork,ga:referralPath", | dimensions="ga:socialNetwork,ga:referralPath", |
max_results=10000, | max_results=10000, |
end_date=end_date).execute() | end_date=end_date).execute() |
result_data = results.get('rows') | result_data = results.get('rows') |
data = {} | data = {} |
for result in result_data: | for result in result_data: |
if not result[0] == '(not set)': | if not result[0] == '(not set)': |
data[result[0]] = data.get(result[0], 0) + int(result[2]) | data[result[0]] = data.get(result[0], 0) + int(result[2]) |
self._filter_out_long_tail(data, 3) | self._filter_out_long_tail(data, 3) |
ga_model.update_sitewide_stats(period_name, "Social sources", data) | ga_model.update_sitewide_stats(period_name, "Social sources", data) |
def _os_stats(self, start_date, end_date, period_name): | def _os_stats(self, start_date, end_date, period_name): |
""" Operating system stats """ | """ Operating system stats """ |
results = self.service.data().ga().get( | results = self.service.data().ga().get( |
ids='ga:' + self.profile_id, | ids='ga:' + self.profile_id, |
start_date=start_date, | start_date=start_date, |
metrics='ga:pageviews',
sort='-ga:pageviews',
dimensions="ga:operatingSystem,ga:operatingSystemVersion", | dimensions="ga:operatingSystem,ga:operatingSystemVersion", |
max_results=10000, | max_results=10000, |
end_date=end_date).execute() | end_date=end_date).execute() |
result_data = results.get('rows') | result_data = results.get('rows') |
data = {} | data = {} |
for result in result_data: | for result in result_data: |
data[result[0]] = data.get(result[0], 0) + int(result[2]) | data[result[0]] = data.get(result[0], 0) + int(result[2]) |
self._filter_out_long_tail(data, MIN_VIEWS) | self._filter_out_long_tail(data, MIN_VIEWS) |
ga_model.update_sitewide_stats(period_name, "Operating Systems", data) | ga_model.update_sitewide_stats(period_name, "Operating Systems", data) |
data = {} | data = {} |
for result in result_data: | for result in result_data: |
if int(result[2]) >= MIN_VIEWS: | if int(result[2]) >= MIN_VIEWS: |
key = "%s %s" % (result[0],result[1]) | key = "%s %s" % (result[0],result[1]) |
data[key] = result[2] | data[key] = result[2] |
ga_model.update_sitewide_stats(period_name, "Operating Systems versions", data) | ga_model.update_sitewide_stats(period_name, "Operating Systems versions", data) |
def _browser_stats(self, start_date, end_date, period_name): | def _browser_stats(self, start_date, end_date, period_name): |
""" Information about browsers and browser versions """ | """ Information about browsers and browser versions """ |
results = self.service.data().ga().get( | results = self.service.data().ga().get( |
ids='ga:' + self.profile_id, | ids='ga:' + self.profile_id, |
start_date=start_date, | start_date=start_date, |
metrics='ga:pageviews',
sort='-ga:pageviews',
dimensions="ga:browser,ga:browserVersion", | dimensions="ga:browser,ga:browserVersion", |
max_results=10000, | max_results=10000, |
end_date=end_date).execute() | end_date=end_date).execute() |
result_data = results.get('rows') | result_data = results.get('rows') |
# e.g. [u'Firefox', u'19.0', u'20'] | # e.g. [u'Firefox', u'19.0', u'20'] |
data = {} | data = {} |
for result in result_data: | for result in result_data: |
data[result[0]] = data.get(result[0], 0) + int(result[2]) | data[result[0]] = data.get(result[0], 0) + int(result[2]) |
self._filter_out_long_tail(data, MIN_VIEWS) | self._filter_out_long_tail(data, MIN_VIEWS) |
ga_model.update_sitewide_stats(period_name, "Browsers", data) | ga_model.update_sitewide_stats(period_name, "Browsers", data) |
data = {} | data = {} |
for result in result_data: | for result in result_data: |
key = "%s %s" % (result[0], self._filter_browser_version(result[0], result[1])) | key = "%s %s" % (result[0], self._filter_browser_version(result[0], result[1])) |
data[key] = data.get(key, 0) + int(result[2]) | data[key] = data.get(key, 0) + int(result[2]) |
self._filter_out_long_tail(data, MIN_VIEWS) | self._filter_out_long_tail(data, MIN_VIEWS) |
ga_model.update_sitewide_stats(period_name, "Browser versions", data) | ga_model.update_sitewide_stats(period_name, "Browser versions", data) |
@classmethod | @classmethod |
def _filter_browser_version(cls, browser, version_str): | def _filter_browser_version(cls, browser, version_str): |
''' | ''' |
Simplifies a browser version string if it is detailed. | Simplifies a browser version string if it is detailed. |
i.e. groups together Firefox 3.5.1 and 3.5.2 to be just 3. | i.e. groups together Firefox 3.5.1 and 3.5.2 to be just 3. |
This is helpful when viewing stats and good to protect privacy. | This is helpful when viewing stats and good to protect privacy. |
''' | ''' |
ver = version_str | ver = version_str |
parts = ver.split('.') | parts = ver.split('.') |
if len(parts) > 1: | if len(parts) > 1: |
if parts[1][0] == '0': | if parts[1][0] == '0': |
ver = parts[0] | ver = parts[0] |
else: | else: |
ver = "%s" % (parts[0]) | ver = "%s" % (parts[0]) |
# Special case complex version nums | # Special case complex version nums |
if browser in ['Safari', 'Android Browser']: | if browser in ['Safari', 'Android Browser']: |
ver = parts[0] | ver = parts[0] |
if len(ver) > 2: | if len(ver) > 2: |
num_hidden_digits = len(ver) - 2 | num_hidden_digits = len(ver) - 2 |
ver = ver[0] + ver[1] + 'X' * num_hidden_digits | ver = ver[0] + ver[1] + 'X' * num_hidden_digits |
return ver | return ver |
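
# Illustrative inputs/outputs for the grouping above (hypothetical data):
#   _filter_browser_version('Firefox', '3.5.1')   -> '3'
#   _filter_browser_version('Safari', '534.55.3') -> '53X'
# i.e. minor versions are collapsed, and long Safari/Android build
# numbers keep only their two leading digits.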
def _mobile_stats(self, start_date, end_date, period_name): | def _mobile_stats(self, start_date, end_date, period_name): |
""" Info about mobile devices """ | """ Info about mobile devices """ |
results = self.service.data().ga().get( | results = self.service.data().ga().get( |
ids='ga:' + self.profile_id, | ids='ga:' + self.profile_id, |
start_date=start_date, | start_date=start_date, |
metrics='ga:pageviews',
sort='-ga:pageviews',
dimensions="ga:mobileDeviceBranding, ga:mobileDeviceInfo", | dimensions="ga:mobileDeviceBranding, ga:mobileDeviceInfo", |
max_results=10000, | max_results=10000, |
end_date=end_date).execute() | end_date=end_date).execute() |
result_data = results.get('rows') | result_data = results.get('rows') |
data = {} | data = {} |
for result in result_data: | for result in result_data: |
data[result[0]] = data.get(result[0], 0) + int(result[2]) | data[result[0]] = data.get(result[0], 0) + int(result[2]) |
self._filter_out_long_tail(data, MIN_VIEWS) | self._filter_out_long_tail(data, MIN_VIEWS) |
ga_model.update_sitewide_stats(period_name, "Mobile brands", data) | ga_model.update_sitewide_stats(period_name, "Mobile brands", data) |
data = {} | data = {} |
for result in result_data: | for result in result_data: |
data[result[1]] = data.get(result[1], 0) + int(result[2]) | data[result[1]] = data.get(result[1], 0) + int(result[2]) |
self._filter_out_long_tail(data, MIN_VIEWS) | self._filter_out_long_tail(data, MIN_VIEWS) |
ga_model.update_sitewide_stats(period_name, "Mobile devices", data) | ga_model.update_sitewide_stats(period_name, "Mobile devices", data) |
@classmethod | @classmethod |
def _filter_out_long_tail(cls, data, threshold=10): | def _filter_out_long_tail(cls, data, threshold=10): |
''' | ''' |
Given data which is a frequency distribution, filter out | Given data which is a frequency distribution, filter out |
results which are below a threshold count. This is good to protect | results which are below a threshold count. This is good to protect |
privacy. | privacy. |
''' | ''' |
for key, value in data.items(): | for key, value in data.items(): |
if value < threshold: | if value < threshold: |
del data[key] | del data[key] |
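
# Example (made-up distribution): with threshold=10,
#   {'GB': 100, 'ZZ': 3} is pruned in place to {'GB': 100}.
# Iterating over .items() while deleting is safe here because this code
# targets Python 2, where items() returns a list copy.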
import re | import re |
import uuid | import uuid |
from sqlalchemy import Table, Column, MetaData, ForeignKey | from sqlalchemy import Table, Column, MetaData, ForeignKey |
from sqlalchemy import types | from sqlalchemy import types |
from sqlalchemy.sql import select | from sqlalchemy.sql import select |
from sqlalchemy.orm import mapper, relation | from sqlalchemy.orm import mapper, relation |
from sqlalchemy import func | from sqlalchemy import func |
import ckan.model as model | import ckan.model as model |
from ckan.lib.base import * | from ckan.lib.base import * |
log = __import__('logging').getLogger(__name__) | |
def make_uuid(): | def make_uuid(): |
return unicode(uuid.uuid4()) | return unicode(uuid.uuid4()) |
metadata = MetaData() | metadata = MetaData() |
class GA_Url(object): | class GA_Url(object): |
def __init__(self, **kwargs): | def __init__(self, **kwargs): |
for k,v in kwargs.items(): | for k,v in kwargs.items(): |
setattr(self, k, v) | setattr(self, k, v) |
url_table = Table('ga_url', metadata, | url_table = Table('ga_url', metadata, |
Column('id', types.UnicodeText, primary_key=True, | Column('id', types.UnicodeText, primary_key=True, |
default=make_uuid), | default=make_uuid), |
Column('period_name', types.UnicodeText), | Column('period_name', types.UnicodeText), |
Column('period_complete_day', types.Integer), | Column('period_complete_day', types.Integer), |
Column('pageviews', types.UnicodeText), | Column('pageviews', types.UnicodeText), |
              Column('visits', types.UnicodeText),
Column('url', types.UnicodeText), | Column('url', types.UnicodeText), |
Column('department_id', types.UnicodeText), | Column('department_id', types.UnicodeText), |
Column('package_id', types.UnicodeText), | Column('package_id', types.UnicodeText), |
) | ) |
mapper(GA_Url, url_table) | mapper(GA_Url, url_table) |
class GA_Stat(object): | class GA_Stat(object): |
def __init__(self, **kwargs): | def __init__(self, **kwargs): |
for k,v in kwargs.items(): | for k,v in kwargs.items(): |
setattr(self, k, v) | setattr(self, k, v) |
stat_table = Table('ga_stat', metadata, | stat_table = Table('ga_stat', metadata, |
Column('id', types.UnicodeText, primary_key=True, | Column('id', types.UnicodeText, primary_key=True, |
default=make_uuid), | default=make_uuid), |
Column('period_name', types.UnicodeText), | Column('period_name', types.UnicodeText), |
Column('stat_name', types.UnicodeText), | Column('stat_name', types.UnicodeText), |
Column('key', types.UnicodeText), | Column('key', types.UnicodeText), |
Column('value', types.UnicodeText), ) | Column('value', types.UnicodeText), ) |
mapper(GA_Stat, stat_table) | mapper(GA_Stat, stat_table) |
class GA_Publisher(object): | class GA_Publisher(object): |
def __init__(self, **kwargs): | def __init__(self, **kwargs): |
for k,v in kwargs.items(): | for k,v in kwargs.items(): |
setattr(self, k, v) | setattr(self, k, v) |
pub_table = Table('ga_publisher', metadata, | pub_table = Table('ga_publisher', metadata, |
Column('id', types.UnicodeText, primary_key=True, | Column('id', types.UnicodeText, primary_key=True, |
default=make_uuid), | default=make_uuid), |
Column('period_name', types.UnicodeText), | Column('period_name', types.UnicodeText), |
Column('publisher_name', types.UnicodeText), | Column('publisher_name', types.UnicodeText), |
Column('views', types.UnicodeText), | Column('views', types.UnicodeText), |
              Column('visits', types.UnicodeText),
Column('toplevel', types.Boolean, default=False), | Column('toplevel', types.Boolean, default=False), |
Column('subpublishercount', types.Integer, default=0), | Column('subpublishercount', types.Integer, default=0), |
Column('parent', types.UnicodeText), | Column('parent', types.UnicodeText), |
) | ) |
mapper(GA_Publisher, pub_table) | mapper(GA_Publisher, pub_table) |
class GA_ReferralStat(object): | class GA_ReferralStat(object): |
def __init__(self, **kwargs): | def __init__(self, **kwargs): |
for k,v in kwargs.items(): | for k,v in kwargs.items(): |
setattr(self, k, v) | setattr(self, k, v) |
referrer_table = Table('ga_referrer', metadata, | referrer_table = Table('ga_referrer', metadata, |
Column('id', types.UnicodeText, primary_key=True, | Column('id', types.UnicodeText, primary_key=True, |
default=make_uuid), | default=make_uuid), |
Column('period_name', types.UnicodeText), | Column('period_name', types.UnicodeText), |
Column('source', types.UnicodeText), | Column('source', types.UnicodeText), |
Column('url', types.UnicodeText), | Column('url', types.UnicodeText), |
Column('count', types.Integer), | Column('count', types.Integer), |
) | ) |
mapper(GA_ReferralStat, referrer_table) | mapper(GA_ReferralStat, referrer_table) |
def init_tables(): | def init_tables(): |
metadata.create_all(model.meta.engine) | metadata.create_all(model.meta.engine) |
cached_tables = {} | cached_tables = {} |
def get_table(name): | def get_table(name): |
if name not in cached_tables: | if name not in cached_tables: |
meta = MetaData() | meta = MetaData() |
meta.reflect(bind=model.meta.engine) | meta.reflect(bind=model.meta.engine) |
table = meta.tables[name] | table = meta.tables[name] |
cached_tables[name] = table | cached_tables[name] = table |
return cached_tables[name] | return cached_tables[name] |
def _normalize_url(url): | def _normalize_url(url): |
'''Strip off the hostname etc. Do this before storing it. | '''Strip off the hostname etc. Do this before storing it. |
>>> normalize_url('http://data.gov.uk/dataset/weekly_fuel_prices') | >>> normalize_url('http://data.gov.uk/dataset/weekly_fuel_prices') |
'/dataset/weekly_fuel_prices' | '/dataset/weekly_fuel_prices' |
''' | ''' |
    return '/' + '/'.join(url.split('/')[3:])
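The normalization in the doctest above can be checked with a standalone sketch of the same split logic (splitting on `/` and keeping everything after the hostname works for both `http` and `https` URLs):

```python
def normalize_url(url):
    # Drop the scheme and hostname, keeping only the path.
    return '/' + '/'.join(url.split('/')[3:])

assert normalize_url('http://data.gov.uk/dataset/weekly_fuel_prices') == '/dataset/weekly_fuel_prices'
assert normalize_url('https://data.gov.uk/publisher/met-office') == '/publisher/met-office'
```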
def _get_package_and_publisher(url):
# e.g. /dataset/fuel_prices | # e.g. /dataset/fuel_prices |
# e.g. /dataset/fuel_prices/resource/e63380d4 | # e.g. /dataset/fuel_prices/resource/e63380d4 |
dataset_match = re.match('/dataset/([^/]+)(/.*)?', url) | dataset_match = re.match('/dataset/([^/]+)(/.*)?', url) |
if dataset_match: | if dataset_match: |
dataset_ref = dataset_match.groups()[0] | dataset_ref = dataset_match.groups()[0] |
dataset = model.Package.get(dataset_ref) | dataset = model.Package.get(dataset_ref) |
if dataset: | if dataset: |
publisher_groups = dataset.get_groups('publisher') | publisher_groups = dataset.get_groups('publisher') |
if publisher_groups: | if publisher_groups: |
                return dataset_ref, publisher_groups[0].name
return dataset_ref, None | |
else: | else: |
publisher_match = re.match('/publisher/([^/]+)(/.*)?', url) | publisher_match = re.match('/publisher/([^/]+)(/.*)?', url) |
if publisher_match: | if publisher_match: |
            return None, publisher_match.groups()[0]
return None, None | |
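The regex dispatch above can be pictured without CKAN. A minimal sketch (hypothetical `get_package_and_publisher`; unlike the real helper it does not look the dataset up in the CKAN model to find its publisher group, so dataset URLs report no publisher):

```python
import re

def get_package_and_publisher(url):
    """Return (dataset_name, publisher_name) parsed from a normalized path."""
    dataset_match = re.match(r'/dataset/([^/]+)(/.*)?', url)
    if dataset_match:
        return dataset_match.group(1), None
    publisher_match = re.match(r'/publisher/([^/]+)(/.*)?', url)
    if publisher_match:
        return None, publisher_match.group(1)
    return None, None

assert get_package_and_publisher('/dataset/fuel_prices/resource/e63380d4') == ('fuel_prices', None)
assert get_package_and_publisher('/publisher/met-office') == (None, 'met-office')
assert get_package_and_publisher('/about') == (None, None)
```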
def update_sitewide_stats(period_name, stat_name, data): | def update_sitewide_stats(period_name, stat_name, data): |
for k,v in data.iteritems(): | for k,v in data.iteritems(): |
item = model.Session.query(GA_Stat).\ | item = model.Session.query(GA_Stat).\ |
filter(GA_Stat.period_name==period_name).\ | filter(GA_Stat.period_name==period_name).\ |
filter(GA_Stat.key==k).\ | filter(GA_Stat.key==k).\ |
filter(GA_Stat.stat_name==stat_name).first() | filter(GA_Stat.stat_name==stat_name).first() |
if item: | if item: |
item.period_name = period_name | item.period_name = period_name |
item.key = k | item.key = k |
item.value = v | item.value = v |
model.Session.add(item) | model.Session.add(item) |
else: | else: |
# create the row | # create the row |
values = {'id': make_uuid(), | values = {'id': make_uuid(), |
'period_name': period_name, | 'period_name': period_name, |
'key': k, | 'key': k, |
'value': v, | 'value': v, |
'stat_name': stat_name | 'stat_name': stat_name |
} | } |
model.Session.add(GA_Stat(**values)) | model.Session.add(GA_Stat(**values)) |
model.Session.commit() | model.Session.commit() |
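`update_sitewide_stats` is an upsert keyed on (period, stat, key): an existing row is overwritten, otherwise a new one is created. A minimal sketch with a plain dict standing in for the GA_Stat table:

```python
def update_stats(table, period_name, stat_name, data):
    """Upsert each key/value of a stats dict under (period, stat)."""
    for key, value in data.items():
        table[(period_name, stat_name, key)] = value

table = {}
update_stats(table, '2012-10', 'Operating Systems', {'Linux': 15, 'Windows': 60})
update_stats(table, '2012-10', 'Operating Systems', {'Linux': 18})  # a re-run updates in place
assert table[('2012-10', 'Operating Systems', 'Linux')] == 18
assert table[('2012-10', 'Operating Systems', 'Windows')] == 60
```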
def update_url_stats(period_name, period_complete_day, url_data): | def update_url_stats(period_name, period_complete_day, url_data): |
    '''
    Given a list of urls and number of hits for each during a given period,
    stores them in GA_Url under the period and recalculates the totals for
    the 'All' period.
    '''
    for url, views, visits in url_data:
        package, publisher = _get_package_and_publisher(url)

        # see if the row for this url & month is in the table already
        item = model.Session.query(GA_Url).\
                   filter(GA_Url.period_name==period_name).\
                   filter(GA_Url.url==url).first()
        if item:
            item.pageviews = item.pageviews + views
            item.visits = item.visits + visits
            if not item.package_id:
                item.package_id = package
            if not item.department_id:
                item.department_id = publisher
            model.Session.add(item)
        else:
            values = {'id': make_uuid(),
                      'period_name': period_name,
                      'period_complete_day': period_complete_day,
                      'url': url,
                      'pageviews': views,
                      'visits': visits,
                      'department_id': publisher,
                      'package_id': package
                      }
            model.Session.add(GA_Url(**values))
model.Session.commit() | model.Session.commit() |
if package: | |
old_pageviews, old_visits = 0, 0 | |
old = model.Session.query(GA_Url).\ | |
filter(GA_Url.period_name=='All').\ | |
filter(GA_Url.url==url).all() | |
old_pageviews = sum([int(o.pageviews) for o in old]) | |
old_visits = sum([int(o.visits) for o in old]) | |
entries = model.Session.query(GA_Url).\ | |
filter(GA_Url.period_name!='All').\ | |
filter(GA_Url.url==url).all() | |
values = {'id': make_uuid(), | |
'period_name': 'All', | |
'period_complete_day': 0, | |
'url': url, | |
'pageviews': sum([int(e.pageviews) for e in entries]) + old_pageviews, | |
'visits': sum([int(e.visits or 0) for e in entries]) + old_visits, | |
'department_id': publisher, | |
'package_id': package | |
} | |
model.Session.add(GA_Url(**values)) | |
model.Session.commit() | |
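The 'All' period handling above derives one summary row from the per-period rows for a URL. A minimal sketch with plain tuples standing in for GA_Url rows:

```python
# Rows for one URL: (period_name, pageviews, visits)
rows = [('2012-09', 10, 4), ('2012-10', 7, 3)]

# The 'All' row is the sum of every per-period row for that URL.
all_row = ('All',
           sum(r[1] for r in rows),
           sum(r[2] for r in rows))
assert all_row == ('All', 17, 7)
```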
def update_social(period_name, data): | def update_social(period_name, data): |
# Clean up first. | # Clean up first. |
model.Session.query(GA_ReferralStat).\ | model.Session.query(GA_ReferralStat).\ |
filter(GA_ReferralStat.period_name==period_name).delete() | filter(GA_ReferralStat.period_name==period_name).delete() |
for url,data in data.iteritems(): | for url,data in data.iteritems(): |
for entry in data: | for entry in data: |
source = entry[0] | source = entry[0] |
count = entry[1] | count = entry[1] |
item = model.Session.query(GA_ReferralStat).\ | item = model.Session.query(GA_ReferralStat).\ |
filter(GA_ReferralStat.period_name==period_name).\ | filter(GA_ReferralStat.period_name==period_name).\ |
filter(GA_ReferralStat.source==source).\ | filter(GA_ReferralStat.source==source).\ |
filter(GA_ReferralStat.url==url).first() | filter(GA_ReferralStat.url==url).first() |
if item: | if item: |
item.count = item.count + count | item.count = item.count + count |
model.Session.add(item) | model.Session.add(item) |
else: | else: |
# create the row | # create the row |
values = {'id': make_uuid(), | values = {'id': make_uuid(), |
'period_name': period_name, | 'period_name': period_name, |
'source': source, | 'source': source, |
'url': url, | 'url': url, |
'count': count, | 'count': count, |
} | } |
model.Session.add(GA_ReferralStat(**values)) | model.Session.add(GA_ReferralStat(**values)) |
model.Session.commit() | model.Session.commit() |
def update_publisher_stats(period_name): | def update_publisher_stats(period_name): |
""" | """ |
Updates the publisher stats from the data retrieved for /dataset/* | Updates the publisher stats from the data retrieved for /dataset/* |
and /publisher/*. Will run against each dataset and generates the | and /publisher/*. Will run against each dataset and generates the |
totals for the entire tree beneath each publisher. | totals for the entire tree beneath each publisher. |
""" | """ |
toplevel = get_top_level() | toplevel = get_top_level() |
publishers = model.Session.query(model.Group).\ | publishers = model.Session.query(model.Group).\ |
filter(model.Group.type=='publisher').\ | filter(model.Group.type=='publisher').\ |
filter(model.Group.state=='active').all() | filter(model.Group.state=='active').all() |
for publisher in publishers: | for publisher in publishers: |
        views, visits, subpub = update_publisher(period_name, publisher, publisher.name)
parent, parents = '', publisher.get_groups('publisher') | parent, parents = '', publisher.get_groups('publisher') |
if parents: | if parents: |
parent = parents[0].name | parent = parents[0].name |
item = model.Session.query(GA_Publisher).\ | item = model.Session.query(GA_Publisher).\ |
filter(GA_Publisher.period_name==period_name).\ | filter(GA_Publisher.period_name==period_name).\ |
filter(GA_Publisher.publisher_name==publisher.name).first() | filter(GA_Publisher.publisher_name==publisher.name).first() |
if item: | if item: |
item.views = views | item.views = views |
            item.visits = visits
item.publisher_name = publisher.name | item.publisher_name = publisher.name |
item.toplevel = publisher in toplevel | item.toplevel = publisher in toplevel |
item.subpublishercount = subpub | item.subpublishercount = subpub |
item.parent = parent | item.parent = parent |
model.Session.add(item) | model.Session.add(item) |
else: | else: |
# create the row | # create the row |
values = {'id': make_uuid(), | values = {'id': make_uuid(), |
'period_name': period_name, | 'period_name': period_name, |
'publisher_name': publisher.name, | 'publisher_name': publisher.name, |
'views': views, | 'views': views, |
                      'visits': visits,
'toplevel': publisher in toplevel, | 'toplevel': publisher in toplevel, |
'subpublishercount': subpub, | 'subpublishercount': subpub, |
'parent': parent | 'parent': parent |
} | } |
model.Session.add(GA_Publisher(**values)) | model.Session.add(GA_Publisher(**values)) |
model.Session.commit() | model.Session.commit() |
def update_publisher(period_name, pub, part=''): | def update_publisher(period_name, pub, part=''): |
    views, visits, subpub = 0, 0, 0
for publisher in go_down_tree(pub): | for publisher in go_down_tree(pub): |
subpub = subpub + 1 | subpub = subpub + 1 |
items = model.Session.query(GA_Url).\ | items = model.Session.query(GA_Url).\ |
filter(GA_Url.period_name==period_name).\ | filter(GA_Url.period_name==period_name).\ |
filter(GA_Url.department_id==publisher.name).all() | filter(GA_Url.department_id==publisher.name).all() |
for item in items: | for item in items: |
views = views + int(item.pageviews) | views = views + int(item.pageviews) |
            visits = visits + int(item.visits)
    return views, visits, (subpub-1)
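`update_publisher` rolls totals up over a publisher and every sub-publisher beneath it. A minimal sketch with a hypothetical in-memory hierarchy in place of CKAN's group tree:

```python
def tree_total(root, children_of, views):
    """Sum the views for a publisher plus every sub-publisher beneath it."""
    total = views.get(root, 0)
    for child in children_of.get(root, []):
        total += tree_total(child, children_of, views)
    return total

children_of = {'defra': ['ea'], 'ea': ['ea-north']}  # hypothetical hierarchy
views = {'defra': 10, 'ea': 5, 'ea-north': 2}
assert tree_total('defra', children_of, views) == 17
```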
def get_top_level(): | def get_top_level(): |
'''Returns the top level publishers.''' | '''Returns the top level publishers.''' |
return model.Session.query(model.Group).\ | return model.Session.query(model.Group).\ |
outerjoin(model.Member, model.Member.table_id == model.Group.id and \ | outerjoin(model.Member, model.Member.table_id == model.Group.id and \ |
model.Member.table_name == 'group' and \ | model.Member.table_name == 'group' and \ |
model.Member.state == 'active').\ | model.Member.state == 'active').\ |
filter(model.Member.id==None).\ | filter(model.Member.id==None).\ |
filter(model.Group.type=='publisher').\ | filter(model.Group.type=='publisher').\ |
order_by(model.Group.name).all() | order_by(model.Group.name).all() |
def get_children(publisher): | def get_children(publisher): |
'''Finds child publishers for the given publisher (object). (Not recursive)''' | '''Finds child publishers for the given publisher (object). (Not recursive)''' |
from ckan.model.group import HIERARCHY_CTE | from ckan.model.group import HIERARCHY_CTE |
return model.Session.query(model.Group).\ | return model.Session.query(model.Group).\ |
from_statement(HIERARCHY_CTE).params(id=publisher.id, type='publisher').\ | from_statement(HIERARCHY_CTE).params(id=publisher.id, type='publisher').\ |
all() | all() |
def go_down_tree(publisher): | def go_down_tree(publisher): |
'''Provided with a publisher object, it walks down the hierarchy and yields each publisher, | '''Provided with a publisher object, it walks down the hierarchy and yields each publisher, |
including the one you supply.''' | including the one you supply.''' |
yield publisher | yield publisher |
for child in get_children(publisher): | for child in get_children(publisher): |
for grandchild in go_down_tree(child): | for grandchild in go_down_tree(child): |
yield grandchild | yield grandchild |
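The recursive generator above yields a node before its descendants (a pre-order walk). A standalone sketch with a plain adjacency dict standing in for the publisher hierarchy:

```python
def go_down_tree(publisher, children_of):
    """Yield a publisher and then, recursively, each of its descendants."""
    yield publisher
    for child in children_of.get(publisher, []):
        for descendant in go_down_tree(child, children_of):
            yield descendant

tree = {'defra': ['ea', 'nfu'], 'ea': ['ea-north']}
assert list(go_down_tree('defra', tree)) == ['defra', 'ea', 'ea-north', 'nfu']
```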
def delete(period_name): | def delete(period_name): |
''' | ''' |
Deletes table data for the specified period, or specify 'all' | Deletes table data for the specified period, or specify 'all' |
for all periods. | for all periods. |
''' | ''' |
for object_type in (GA_Url, GA_Stat, GA_Publisher, GA_ReferralStat): | for object_type in (GA_Url, GA_Stat, GA_Publisher, GA_ReferralStat): |
q = model.Session.query(object_type) | q = model.Session.query(object_type) |
if period_name != 'all': | if period_name != 'all': |
q = q.filter_by(period_name=period_name) | q = q.filter_by(period_name=period_name) |
q.delete() | q.delete() |
model.Session.commit() | model.Session.commit() |
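The `delete` helper above applies the same rule to each table: drop one period's rows, or everything when passed 'all'. A minimal sketch of that rule against a list of (period, value) rows standing in for a table:

```python
def delete_period(rows, period_name):
    """Drop rows for one period, or every row when period_name is 'all'."""
    if period_name == 'all':
        return []
    return [r for r in rows if r[0] != period_name]

rows = [('2012-09', 1), ('2012-10', 2)]
assert delete_period(rows, '2012-09') == [('2012-10', 2)]
assert delete_period(rows, 'all') == []
```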
import logging | import logging |
import operator | import operator |
import ckan.lib.base as base | import ckan.lib.base as base |
import ckan.model as model | import ckan.model as model |
from ckan.logic import get_action | from ckan.logic import get_action |
from ckanext.ga_report.ga_model import GA_Url, GA_Publisher | from ckanext.ga_report.ga_model import GA_Url, GA_Publisher |
from ckanext.ga_report.controller import _get_publishers | from ckanext.ga_report.controller import _get_publishers |
_log = logging.getLogger(__name__) | _log = logging.getLogger(__name__) |
def popular_datasets(count=10): | def popular_datasets(count=10): |
import random | import random |
    publisher, datasets = None, None
publishers = _get_publishers(30) | publishers = _get_publishers(30) |
total = len(publishers) | total = len(publishers) |
while not publisher or not datasets: | while not publisher or not datasets: |
rand = random.randrange(0, total) | rand = random.randrange(0, total) |
publisher = publishers[rand][0] | publisher = publishers[rand][0] |
if not publisher.state == 'active': | if not publisher.state == 'active': |
publisher = None | publisher = None |
continue | continue |
datasets = _datasets_for_publisher(publisher, 10)[:count] | datasets = _datasets_for_publisher(publisher, 10)[:count] |
ctx = { | ctx = { |
'datasets': datasets, | 'datasets': datasets, |
'publisher': publisher | 'publisher': publisher |
} | } |
return base.render_snippet('ga_report/ga_popular_datasets.html', **ctx) | return base.render_snippet('ga_report/ga_popular_datasets.html', **ctx) |
def single_popular_dataset(top=20): | def single_popular_dataset(top=20): |
'''Returns a random dataset from the most popular ones. | '''Returns a random dataset from the most popular ones. |
:param top: the number of top datasets to select from | :param top: the number of top datasets to select from |
''' | ''' |
import random | import random |
top_datasets = model.Session.query(GA_Url).\ | top_datasets = model.Session.query(GA_Url).\ |
filter(GA_Url.url.like('/dataset/%')).\ | filter(GA_Url.url.like('/dataset/%')).\ |
order_by('ga_url.pageviews::int desc') | order_by('ga_url.pageviews::int desc') |
num_top_datasets = top_datasets.count() | num_top_datasets = top_datasets.count() |
dataset = None | dataset = None |
if num_top_datasets: | if num_top_datasets: |
count = 0 | count = 0 |
while not dataset: | while not dataset: |
rand = random.randrange(0, min(top, num_top_datasets)) | rand = random.randrange(0, min(top, num_top_datasets)) |
ga_url = top_datasets[rand] | ga_url = top_datasets[rand] |
dataset = model.Package.get(ga_url.url[len('/dataset/'):]) | dataset = model.Package.get(ga_url.url[len('/dataset/'):]) |
if dataset and not dataset.state == 'active': | if dataset and not dataset.state == 'active': |
dataset = None | dataset = None |
            # When testing, it is possible that top datasets are not available
            # so only go round this loop a few times before falling back on
            # a random dataset.
            count += 1
            if count > 10:
                break
if not dataset: | if not dataset: |
# fallback | # fallback |
dataset = model.Session.query(model.Package)\ | dataset = model.Session.query(model.Package)\ |
.filter_by(state='active').first() | .filter_by(state='active').first() |
if not dataset: | if not dataset: |
return None | return None |
dataset_dict = get_action('package_show')({'model': model, | dataset_dict = get_action('package_show')({'model': model, |
                                               'session': model.Session,
                                               'validate': False},
{'id':dataset.id}) | {'id':dataset.id}) |
return dataset_dict | return dataset_dict |
def single_popular_dataset_html(top=20): | def single_popular_dataset_html(top=20): |
dataset_dict = single_popular_dataset(top) | dataset_dict = single_popular_dataset(top) |
    groups = dataset_dict.get('groups', [])
publishers = [ g for g in groups if g.get('type') == 'publisher' ] | publishers = [ g for g in groups if g.get('type') == 'publisher' ] |
publisher = publishers[0] if publishers else {'name':'', 'title': ''} | publisher = publishers[0] if publishers else {'name':'', 'title': ''} |
context = { | context = { |
'dataset': dataset_dict, | 'dataset': dataset_dict, |
        'publisher': publisher
} | } |
return base.render_snippet('ga_report/ga_popular_single.html', **context) | return base.render_snippet('ga_report/ga_popular_single.html', **context) |
def most_popular_datasets(publisher, count=20): | def most_popular_datasets(publisher, count=20): |
if not publisher: | if not publisher: |
_log.error("No valid publisher passed to 'most_popular_datasets'") | _log.error("No valid publisher passed to 'most_popular_datasets'") |
return "" | return "" |
results = _datasets_for_publisher(publisher, count) | results = _datasets_for_publisher(publisher, count) |
ctx = { | ctx = { |
'dataset_count': len(results), | 'dataset_count': len(results), |
'datasets': results, | 'datasets': results, |
'publisher': publisher | 'publisher': publisher |
} | } |
return base.render_snippet('ga_report/publisher/popular.html', **ctx) | return base.render_snippet('ga_report/publisher/popular.html', **ctx) |
def _datasets_for_publisher(publisher, count): | def _datasets_for_publisher(publisher, count): |
datasets = {} | datasets = {} |
entries = model.Session.query(GA_Url).\ | entries = model.Session.query(GA_Url).\ |
filter(GA_Url.department_id==publisher.name).\ | filter(GA_Url.department_id==publisher.name).\ |
filter(GA_Url.url.like('/dataset/%')).\ | filter(GA_Url.url.like('/dataset/%')).\ |
order_by('ga_url.pageviews::int desc').all() | order_by('ga_url.pageviews::int desc').all() |
for entry in entries: | for entry in entries: |
if len(datasets) < count: | if len(datasets) < count: |
p = model.Package.get(entry.url[len('/dataset/'):]) | p = model.Package.get(entry.url[len('/dataset/'):]) |
if not p in datasets: | if not p in datasets: |
datasets[p] = {'views':0, 'visits': 0} | datasets[p] = {'views':0, 'visits': 0} |
datasets[p]['views'] = datasets[p]['views'] + int(entry.pageviews) | datasets[p]['views'] = datasets[p]['views'] + int(entry.pageviews) |
            datasets[p]['visits'] = datasets[p]['visits'] + int(entry.visits)
results = [] | results = [] |
for k, v in datasets.iteritems(): | for k, v in datasets.iteritems(): |
results.append((k,v['views'],v['visits'])) | results.append((k,v['views'],v['visits'])) |
return sorted(results, key=operator.itemgetter(1), reverse=True) | return sorted(results, key=operator.itemgetter(1), reverse=True) |
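`_datasets_for_publisher` follows a common aggregate-then-sort pattern: accumulate views/visits per key, then sort the (key, views, visits) tuples by views. A minimal sketch with plain tuples in place of GA_Url rows:

```python
import operator

# (url, views, visits) entries, possibly repeating a URL
entries = [('/dataset/a', 10, 4), ('/dataset/b', 3, 1), ('/dataset/a', 5, 2)]

totals = {}
for url, views, visits in entries:
    d = totals.setdefault(url, {'views': 0, 'visits': 0})
    d['views'] += views
    d['visits'] += visits

results = sorted(((k, v['views'], v['visits']) for k, v in totals.items()),
                 key=operator.itemgetter(1), reverse=True)
assert results == [('/dataset/a', 15, 6), ('/dataset/b', 3, 1)]
```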
<html xmlns:py="http://genshi.edgewall.org/" | <html xmlns:py="http://genshi.edgewall.org/" |
xmlns:i18n="http://genshi.edgewall.org/i18n" | xmlns:i18n="http://genshi.edgewall.org/i18n" |
xmlns:xi="http://www.w3.org/2001/XInclude" | xmlns:xi="http://www.w3.org/2001/XInclude" |
py:strip=""> | py:strip=""> |
<li class="widget-container boxed widget_text"> | <li class="widget-container boxed widget_text"> |
<h4>Notes</h4> | <h4>Notes</h4> |
<ul> | <ul> |
      <li>"Views" is the number of times a page was loaded in users' browsers.</li>
      <li>"Visits" is the number of unique user visits to a page, counted once for each visitor for each of their browsing sessions.</li>
<li>These usage statistics are confined to users with javascript enabled, which excludes web crawlers and API calls.</li> | <li>These usage statistics are confined to users with javascript enabled, which excludes web crawlers and API calls.</li> |
      <li>The results are not shown when the number of views/visits is tiny. Where these relate to site pages, results are available in full in the CSV download. Where these relate to users' web browser information, results are not disclosed, for privacy reasons.</li>
</ul> | </ul> |
</li> | </li> |
</html> | </html> |
<html xmlns:py="http://genshi.edgewall.org/" | <html xmlns:py="http://genshi.edgewall.org/" |
xmlns:i18n="http://genshi.edgewall.org/i18n" | xmlns:i18n="http://genshi.edgewall.org/i18n" |
xmlns:xi="http://www.w3.org/2001/XInclude" | xmlns:xi="http://www.w3.org/2001/XInclude" |
py:strip=""> | py:strip=""> |
<xi:include href="../ga_util.html" /> | <xi:include href="../ga_util.html" /> |
<py:def function="page_title">Usage by Publisher</py:def> | <py:def function="page_title">Usage by Publisher</py:def> |
<py:match path="primarysidebar"> | <py:match path="primarysidebar"> |
<li class="widget-container boxed widget_text"> | <li class="widget-container boxed widget_text"> |
<h4>Download</h4> | <h4>Download</h4> |
<p><center> | <p><center> |
<a class="btn button btn-primary" href="${h.url_for(controller='ckanext.ga_report.controller:GaDatasetReport',action='publisher_csv',month=c.month or 'all')}">Download as CSV</a></center> | <a class="btn button btn-primary" href="${h.url_for(controller='ckanext.ga_report.controller:GaDatasetReport',action='publisher_csv',month=c.month or 'all')}">Download as CSV</a></center> |
</p> | </p> |
</li> | </li> |
<xi:include href="../notes.html" /> | <xi:include href="../notes.html" /> |
</py:match> | </py:match> |
<div py:match="content"> | <div py:match="content"> |
<h1>Site Usage</h1> | <h1>Site Usage</h1> |
${usage_nav('Publishers')} | ${usage_nav('Publishers')} |
<form class="form-inline" action="${h.url_for(controller='ckanext.ga_report.controller:GaDatasetReport',action='publishers')}" method="get"> | <form class="form-inline" action="${h.url_for(controller='ckanext.ga_report.controller:GaDatasetReport',action='publishers')}" method="get"> |
<div class="controls"> | <div class="controls"> |
<select name="month"> | <select name="month"> |
<option value='' py:attrs="{'selected': 'selected' if not c.month else None}">All months</option> | <option value='' py:attrs="{'selected': 'selected' if not c.month else None}">All months</option> |
<py:for each="val,desc in c.months"> | <py:for each="val,desc in c.months"> |
<option value='${val}' py:attrs="{'selected': 'selected' if c.month == val else None}">${desc}</option> | <option value='${val}' py:attrs="{'selected': 'selected' if c.month == val else None}">${desc}</option> |
</py:for> | </py:for> |
</select> | </select> |
<input class="btn button btn-primary" type='submit' value="Update"/> | <input class="btn button btn-primary" type='submit' value="Update"/> |
</div> | </div> |
</form> | </form> |
<table class="table table-condensed table-bordered table-striped"> | <table class="table table-condensed table-bordered table-striped"> |
<tr> | <tr> |
<th>Publisher</th> | <th>Publisher</th> |
<th>Dataset Views</th> | <th>Dataset Views</th> |
<th>Dataset Visits</th> | |
</tr> | </tr> |
<py:for each="publisher, views, visits in c.top_publishers"> | <py:for each="publisher, views, visits in c.top_publishers"> |
<tr> | <tr> |
<td>${h.link_to(publisher.title, h.url_for(controller='ckanext.ga_report.controller:GaDatasetReport', action='read_publisher', id=publisher.name))} | <td>${h.link_to(publisher.title, h.url_for(controller='ckanext.ga_report.controller:GaDatasetReport', action='read_publisher', id=publisher.name))} |
</td> | </td> |
<td>${views}</td> | <td>${views}</td> |
<td>${visits}</td> | |
</tr> | </tr> |
</py:for> | </py:for> |
</table> | </table> |
</div> | </div> |
<xi:include href="../../layout.html" /> | <xi:include href="../../layout.html" /> |
<py:def function="optional_footer"> | <py:def function="optional_footer"> |
<script type='text/javascript'> | <script type='text/javascript'> |
$('.nav-tabs li a').click(function (e) { | $('.nav-tabs li a').click(function (e) { |
e.preventDefault(); | e.preventDefault(); |
$(this).tab('show'); | $(this).tab('show'); |
}) | }) |
</script> | </script> |
</py:def> | </py:def> |
</html> | </html> |
<html xmlns:py="http://genshi.edgewall.org/" | <html xmlns:py="http://genshi.edgewall.org/" |
xmlns:i18n="http://genshi.edgewall.org/i18n" | xmlns:i18n="http://genshi.edgewall.org/i18n" |
xmlns:xi="http://www.w3.org/2001/XInclude" | xmlns:xi="http://www.w3.org/2001/XInclude" |
py:strip=""> | py:strip=""> |
<xi:include href="../ga_util.html" /> | <xi:include href="../ga_util.html" /> |
<py:def function="page_title">Usage by Dataset</py:def> | <py:def function="page_title">Usage by Dataset</py:def> |
<py:match path="primarysidebar"> | <py:match path="primarysidebar"> |
<li class="widget-container boxed widget_text"> | <li class="widget-container boxed widget_text"> |
<h4>Download</h4> | <h4>Download</h4> |
        <p style="text-align: center;">
          <a class="btn button btn-primary" href="${h.url_for(controller='ckanext.ga_report.controller:GaDatasetReport',action='dataset_csv',id=c.publisher_name or 'all',month=c.month or 'all')}">Download as CSV</a>
        </p>
</li> | </li> |
<xi:include href="../notes.html" /> | <xi:include href="../notes.html" /> |
</py:match> | </py:match> |
<div py:match="content"> | <div py:match="content"> |
<h1>Site Usage</h1> | <h1>Site Usage</h1> |
${usage_nav('Datasets')} | ${usage_nav('Datasets')} |
<form class="form-inline" action="${h.url_for(controller='ckanext.ga_report.controller:GaDatasetReport',action='read')}" method="get"> | <form class="form-inline" action="${h.url_for(controller='ckanext.ga_report.controller:GaDatasetReport',action='read')}" method="get"> |
<div class="controls"> | <div class="controls"> |
<select name="month"> | <select name="month"> |
<option value='' py:attrs="{'selected': 'selected' if not c.month else None}">All months</option> | <option value='' py:attrs="{'selected': 'selected' if not c.month else None}">All months</option> |
<py:for each="val,desc in c.months"> | <py:for each="val,desc in c.months"> |
<option value='${val}' py:attrs="{'selected': 'selected' if c.month == val else None}">${desc}</option> | <option value='${val}' py:attrs="{'selected': 'selected' if c.month == val else None}">${desc}</option> |
</py:for> | </py:for> |
</select> | </select> |
<select name="publisher"> | <select name="publisher"> |
<option value='' py:attrs="{'selected': 'selected' if not c.publisher else None}">All publishers</option> | <option value='' py:attrs="{'selected': 'selected' if not c.publisher else None}">All publishers</option> |
<py:for each="val,desc in c.publishers"> | <py:for each="val,desc in c.publishers"> |
<option value='${val}' py:attrs="{'selected': 'selected' if c.publisher_name == val else None}">${desc}</option> | <option value='${val}' py:attrs="{'selected': 'selected' if c.publisher_name == val else None}">${desc}</option> |
</py:for> | </py:for> |
</select> | </select> |
<input class="btn button btn-primary" type='submit' value="Update"/> | <input class="btn button btn-primary" type='submit' value="Update"/> |
</div> | </div> |
</form> | </form> |
<h3 py:if="c.publisher"><a href="${h.url_for(controller='ckanext.dgu.controllers.publisher:PublisherController',action='read',id=c.publisher.name)}">${c.publisher.title}</a></h3> | <h3 py:if="c.publisher"><a href="${h.url_for(controller='ckanext.dgu.controllers.publisher:PublisherController',action='read',id=c.publisher.name)}">${c.publisher.title}</a></h3> |
<p py:if="not c.top_packages">No page views in this period</p> | <p py:if="not c.top_packages">No page views in this period</p> |
<table py:if="c.top_packages" class="table table-condensed table-bordered table-striped"> | <table py:if="c.top_packages" class="table table-condensed table-bordered table-striped"> |
<tr> | <tr> |
<th>Dataset</th> | <th>Dataset</th> |
<th>Views</th> | <th>Views</th> |
<th>Visits</th> | |
</tr> | </tr> |
<py:for each="package, views, visits in c.top_packages"> | <py:for each="package, views, visits in c.top_packages"> |
<tr> | <tr> |
<td>${h.link_to(package.title or package.name, h.url_for(controller='package', action='read', id=package.name))} | <td>${h.link_to(package.title or package.name, h.url_for(controller='package', action='read', id=package.name))} |
</td> | </td> |
<td>${views}</td> | <td>${views}</td> |
<td>${visits}</td> | |
</tr> | </tr> |
</py:for> | </py:for> |
</table> | </table> |
</div> | </div> |
<xi:include href="../../layout.html" /> | <xi:include href="../../layout.html" /> |
</html> | </html> |
from nose.tools import assert_equal

from ckanext.ga_report.ga_model import _normalize_url


class TestNormalizeUrl:
    def test_normal(self):
        assert_equal(_normalize_url('http://data.gov.uk/dataset/weekly_fuel_prices'),
                     '/dataset/weekly_fuel_prices')

    def test_www_dot(self):
        assert_equal(_normalize_url('http://www.data.gov.uk/dataset/weekly_fuel_prices'),
                     '/dataset/weekly_fuel_prices')

    def test_https(self):
        assert_equal(_normalize_url('https://data.gov.uk/dataset/weekly_fuel_prices'),
                     '/dataset/weekly_fuel_prices')
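
# For reference, the behaviour these tests exercise can be sketched as below.
# This is a hypothetical reconstruction inferred from the test cases only, not
# the actual ckanext.ga_report.ga_model implementation, which may differ
# (e.g. in how it handles query strings or trailing slashes).
from urllib.parse import urlparse


def _normalize_url_sketch(url):
    # Drop the scheme (http/https) and the host (including any 'www.'
    # prefix), keeping only the path, so the same dataset page is counted
    # once regardless of how it was reached.
    return urlparse(url).path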