ckanext-ga-report
=================

**Status:** Development

**CKAN Version:** 1.7.1+


Overview
--------

For creating detailed reports of CKAN analytics, including totals per group.

Whereas ckanext-googleanalytics focusses on providing page view stats for a recent period and for all time (aimed at end users), ckanext-ga-report is more interested in building regular periodic reports (more for site managers to monitor).

Contents of this extension:

* A CLI tool to download Google Analytics data for each time period into this extension's database tables
* Web page reports through which users can view that data
Installation
------------

1. Activate your CKAN python environment and install this extension's software::

       $ pyenv/bin/activate
       $ pip install -e git+https://github.com/datagovuk/ckanext-ga-report.git#egg=ckanext-ga-report

2. Ensure your development.ini (or similar) contains the info about your Google Analytics account and configuration::

       googleanalytics.id = UA-1010101-1
       googleanalytics.account = Account name (e.g. data.gov.uk, see top level item at https://www.google.com/analytics)
       googleanalytics.token.filepath = ~/pyenv/token.dat
       ga-report.period = monthly
       ga-report.bounce_url = /

   The ga-report.bounce_url option specifies a particular path to record the bounce rate for. Typically it is / (the home page).

3. Set up this extension's database tables using a paster command. (Ensure your CKAN pyenv is still activated, run the command from ``src/ckanext-ga-report``, and alter the ``--config`` option to point to your site config file)::

       $ paster initdb --config=../ckan/development.ini

4. Enable the extension in your CKAN config file by adding it to ``ckan.plugins``::

       ckan.plugins = ga-report
Troubleshooting
---------------

* ``(ProgrammingError) relation "ga_url" does not exist``

  This means that the ``paster initdb`` step has not been run successfully. Refer to the installation instructions for this extension.
Authorization
-------------

Before you can access the data, you need to set up the OAuth details. You can do this by following the `instructions <https://developers.google.com/analytics/resources/tutorials/hello-analytics-api>`_, the outcome of which will be a file called credentials.json. It should look like credentials.json.template with the relevant fields completed. The steps are repeated below for convenience:

1. Visit the `Google APIs Console <https://code.google.com/apis/console>`_.
2. Sign in and create a project, or use an existing project.
3. In the `Services pane <https://code.google.com/apis/console#:services>`_, activate the Analytics API for your project. If prompted, read and accept the terms of service.
4. Go to the `API Access pane <https://code.google.com/apis/console/#:access>`_.
5. Click Create an OAuth 2.0 client ID....
6. Fill out the Branding Information fields and click Next.
7. In Client ID Settings, set Application type to Installed application.
8. Click Create client ID.
9. The details you need below are Client ID, Client secret, and Redirect URIs.
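
For reference, a credentials file for an installed application generally takes the following shape. The values below are placeholders, not real credentials — copy the actual Client ID, Client secret, and Redirect URIs from the API Access pane, and treat credentials.json.template in this extension as the authoritative layout::

    {
      "installed": {
        "client_id": "YOUR_CLIENT_ID.apps.googleusercontent.com",
        "client_secret": "YOUR_CLIENT_SECRET",
        "redirect_uris": ["urn:ietf:wg:oauth:2.0:oob", "http://localhost"],
        "auth_uri": "https://accounts.google.com/o/oauth2/auth",
        "token_uri": "https://accounts.google.com/o/oauth2/token"
      }
    }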
Once you have set up your credentials.json file you can generate an OAuth token file by using the following command, which will store your OAuth token in a file called token.dat once you have finished giving permission in the browser::

    $ paster getauthtoken --config=../ckan/development.ini

Now ensure you reference the correct path to your token.dat in your CKAN config file (e.g. development.ini)::

    googleanalytics.token.filepath = ~/pyenv/token.dat
Tutorial
--------

Download some GA data and store it in CKAN's database. (Ensure your CKAN pyenv is still activated, run the command from ``src/ckanext-ga-report``, and alter the ``--config`` option to point to your site config file. The auth token file is read from the ``googleanalytics.token.filepath`` setting made in the previous step)::

    $ paster loadanalytics latest --config=../ckan/development.ini

The final argument specifies how much data you want to retrieve. It can be:

* **all** - data for all time (since 2010)
* **latest** - (default) just the 'latest' data
* **YYYY-MM** - just data for the specified month
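
For example, to backfill everything, or to load one specific month (the month here is purely illustrative)::

    $ paster loadanalytics all --config=../ckan/development.ini
    $ paster loadanalytics 2012-11 --config=../ckan/development.ini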
Software Licence
================

This software is developed by the Cabinet Office. It is Crown Copyright and opened up under the Open Government Licence (OGL), which is compatible with the Creative Commons Attribution License.

OGL terms: http://www.nationalarchives.gov.uk/doc/open-government-licence/
import logging
import datetime
import os

from pylons import config

from ckan.lib.cli import CkanCommand
# No other CKAN imports allowed until _load_config is run,
# or logging is disabled
class InitDB(CkanCommand):
    """Initialise the extension's database tables
    """
    summary = __doc__.split('\n')[0]
    usage = __doc__
    max_args = 0
    min_args = 0

    def command(self):
        self._load_config()

        import ckan.model as model
        model.Session.remove()
        model.Session.configure(bind=model.meta.engine)
        log = logging.getLogger('ckanext.ga-report')

        import ga_model
        ga_model.init_tables()
        log.info("DB tables are setup")
class GetAuthToken(CkanCommand):
    """Gets the Google auth token

    Usage: paster getauthtoken <credentials_file>

    Where <credentials_file> is the file name containing the details
    for the service (obtained from https://code.google.com/apis/console).
    By default this is set to credentials.json
    """
    summary = __doc__.split('\n')[0]
    usage = __doc__
    max_args = 0
    min_args = 0

    def command(self):
        """
        In this case we don't want a valid service, but rather just to
        force the user through the auth flow. We allow this to complete to
        act as a form of verification instead of just getting the token and
        assuming it is correct.
        """
        from ga_auth import init_service
        init_service('token.dat',
                     self.args[0] if self.args
                                  else 'credentials.json')
class LoadAnalytics(CkanCommand):
    """Get data from Google Analytics API and save it
    in the ga_model

    Usage: paster loadanalytics <time-period>

    Where <time-period> is:
        all         - data for all time
        latest      - (default) just the 'latest' data
        YYYY-MM     - just data for the specific month
    """
    summary = __doc__.split('\n')[0]
    usage = __doc__
    max_args = 1
    min_args = 0

    def __init__(self, name):
        super(LoadAnalytics, self).__init__(name)
        self.parser.add_option('-d', '--delete-first',
                               action='store_true',
                               default=False,
                               dest='delete_first',
                               help='Delete data for the period first')
        self.parser.add_option('-s', '--skip_url_stats',
                               action='store_true',
                               default=False,
                               dest='skip_url_stats',
                               help='Skip the download of URL data - just do site-wide stats')

    def command(self):
        self._load_config()

        from download_analytics import DownloadAnalytics
        from ga_auth import (init_service, get_profile_id)

        ga_token_filepath = os.path.expanduser(
            config.get('googleanalytics.token.filepath', ''))
        if not ga_token_filepath:
            print 'ERROR: In the CKAN config you need to specify the filepath of the ' \
                  'Google Analytics token file under key: googleanalytics.token.filepath'
            return

        try:
            svc = init_service(ga_token_filepath, None)
        except TypeError:
            print ('Have you correctly run the getauthtoken task and '
                   'specified the correct token file in the CKAN config under '
                   '"googleanalytics.token.filepath"?')
            return

        downloader = DownloadAnalytics(svc, profile_id=get_profile_id(svc),
                                       delete_first=self.options.delete_first,
                                       skip_url_stats=self.options.skip_url_stats)

        time_period = self.args[0] if self.args else 'latest'
        if time_period == 'all':
            downloader.all_()
        elif time_period == 'latest':
            downloader.latest()
        else:
            # The month to use
            for_date = datetime.datetime.strptime(time_period, '%Y-%m')
            downloader.specific_month(for_date)
import re
import uuid

from sqlalchemy import Table, Column, MetaData, ForeignKey
from sqlalchemy import types
from sqlalchemy.sql import select
from sqlalchemy.orm import mapper, relation
from sqlalchemy import func

import ckan.model as model
from ckan.lib.base import *

log = __import__('logging').getLogger(__name__)


def make_uuid():
    return unicode(uuid.uuid4())


metadata = MetaData()
class GA_Url(object):

    def __init__(self, **kwargs):
        for k, v in kwargs.items():
            setattr(self, k, v)

url_table = Table('ga_url', metadata,
                  Column('id', types.UnicodeText, primary_key=True,
                         default=make_uuid),
                  Column('period_name', types.UnicodeText),
                  Column('period_complete_day', types.Integer),
                  Column('pageviews', types.UnicodeText),
                  Column('visits', types.UnicodeText),
                  Column('url', types.UnicodeText),
                  Column('department_id', types.UnicodeText),
                  Column('package_id', types.UnicodeText),
                  )
mapper(GA_Url, url_table)
class GA_Stat(object):

    def __init__(self, **kwargs):
        for k, v in kwargs.items():
            setattr(self, k, v)

stat_table = Table('ga_stat', metadata,
                   Column('id', types.UnicodeText, primary_key=True,
                          default=make_uuid),
                   Column('period_name', types.UnicodeText),
                   Column('stat_name', types.UnicodeText),
                   Column('key', types.UnicodeText),
                   Column('value', types.UnicodeText),
                   )
mapper(GA_Stat, stat_table)
class GA_Publisher(object):

    def __init__(self, **kwargs):
        for k, v in kwargs.items():
            setattr(self, k, v)

pub_table = Table('ga_publisher', metadata,
                  Column('id', types.UnicodeText, primary_key=True,
                         default=make_uuid),
                  Column('period_name', types.UnicodeText),
                  Column('publisher_name', types.UnicodeText),
                  Column('views', types.UnicodeText),
                  Column('visits', types.UnicodeText),
                  Column('toplevel', types.Boolean, default=False),
                  Column('subpublishercount', types.Integer, default=0),
                  Column('parent', types.UnicodeText),
                  )
mapper(GA_Publisher, pub_table)
class GA_ReferralStat(object):

    def __init__(self, **kwargs):
        for k, v in kwargs.items():
            setattr(self, k, v)

referrer_table = Table('ga_referrer', metadata,
                       Column('id', types.UnicodeText, primary_key=True,
                              default=make_uuid),
                       Column('period_name', types.UnicodeText),
                       Column('source', types.UnicodeText),
                       Column('url', types.UnicodeText),
                       Column('count', types.Integer),
                       )
mapper(GA_ReferralStat, referrer_table)
def init_tables():
    metadata.create_all(model.meta.engine)


cached_tables = {}


def get_table(name):
    if name not in cached_tables:
        meta = MetaData()
        meta.reflect(bind=model.meta.engine)
        table = meta.tables[name]
        cached_tables[name] = table
    return cached_tables[name]
def _normalize_url(url):
    '''Strip off the hostname etc. Do this before storing it.

    >>> _normalize_url('http://data.gov.uk/dataset/weekly_fuel_prices')
    '/dataset/weekly_fuel_prices'
    '''
    return '/' + '/'.join(url.split('/')[3:])
def _get_package_and_publisher(url):
    # e.g. /dataset/fuel_prices
    # e.g. /dataset/fuel_prices/resource/e63380d4
    dataset_match = re.match('/dataset/([^/]+)(/.*)?', url)
    if dataset_match:
        dataset_ref = dataset_match.groups()[0]
        dataset = model.Package.get(dataset_ref)
        if dataset:
            publisher_groups = dataset.get_groups('publisher')
            if publisher_groups:
                return dataset_ref, publisher_groups[0].name
        return dataset_ref, None
    else:
        publisher_match = re.match('/publisher/([^/]+)(/.*)?', url)
        if publisher_match:
            return None, publisher_match.groups()[0]
    return None, None
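
The URL classification above can be exercised standalone. This sketch covers only the regex part — the dataset and publisher-group lookups need a live CKAN model, so they are omitted, and `match_dataset` is a name introduced here for illustration:

```python
import re


def match_dataset(url):
    # Same pattern as _get_package_and_publisher: the first path segment
    # after /dataset/ is the dataset name or id; anything after it is ignored.
    m = re.match('/dataset/([^/]+)(/.*)?', url)
    return m.groups()[0] if m else None


assert match_dataset('/dataset/fuel_prices') == 'fuel_prices'
assert match_dataset('/dataset/fuel_prices/resource/e63380d4') == 'fuel_prices'
assert match_dataset('/publisher/cabinet-office') is None
```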
def update_sitewide_stats(period_name, stat_name, data):
    for k, v in data.iteritems():
        item = model.Session.query(GA_Stat).\
            filter(GA_Stat.period_name==period_name).\
            filter(GA_Stat.key==k).\
            filter(GA_Stat.stat_name==stat_name).first()
        if item:
            item.period_name = period_name
            item.key = k
            item.value = v
            model.Session.add(item)
        else:
            # create the row
            values = {'id': make_uuid(),
                      'period_name': period_name,
                      'key': k,
                      'value': v,
                      'stat_name': stat_name
                      }
            model.Session.add(GA_Stat(**values))
        model.Session.commit()
def pre_update_url_stats(period_name):
    model.Session.query(GA_Url).\
        filter(GA_Url.period_name==period_name).delete()
    model.Session.query(GA_Url).\
        filter(GA_Url.period_name=='All').delete()
def update_url_stats(period_name, period_complete_day, url_data):
    '''
    Given a list of urls and number of hits for each during a given period,
    stores them in GA_Url under the period and recalculates the totals for
    the 'All' period.
    '''
    for url, views, visits in url_data:
        package, publisher = _get_package_and_publisher(url)

        item = model.Session.query(GA_Url).\
            filter(GA_Url.period_name==period_name).\
            filter(GA_Url.url==url).first()
        if item:
            item.pageviews = item.pageviews + views
            item.visits = item.visits + visits
            if not item.package_id:
                item.package_id = package
            if not item.department_id:
                item.department_id = publisher
            model.Session.add(item)
        else:
            values = {'id': make_uuid(),
                      'period_name': period_name,
                      'period_complete_day': period_complete_day,
                      'url': url,
                      'pageviews': views,
                      'visits': visits,
                      'department_id': publisher,
                      'package_id': package
                      }
            model.Session.add(GA_Url(**values))
        model.Session.commit()

        if package:
            old_pageviews, old_visits = 0, 0
            old = model.Session.query(GA_Url).\
                filter(GA_Url.period_name=='All').\
                filter(GA_Url.url==url).all()
            old_pageviews = sum([int(o.pageviews) for o in old])
            old_visits = sum([int(o.visits) for o in old])

            entries = model.Session.query(GA_Url).\
                filter(GA_Url.period_name!='All').\
                filter(GA_Url.url==url).all()
            values = {'id': make_uuid(),
                      'period_name': 'All',
                      'period_complete_day': 0,
                      'url': url,
                      'pageviews': sum([int(e.pageviews) for e in entries]) + old_pageviews,
                      'visits': sum([int(e.visits or 0) for e in entries]) + old_visits,
                      'department_id': publisher,
                      'package_id': package
                      }

            model.Session.add(GA_Url(**values))
            model.Session.commit()
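
The 'All' totals above sum counts that the schema stores as text (the pageviews and visits columns are UnicodeText), and a row with no value comes back as None. A standalone sketch of why the `or 0` guard matters (the `total` helper is introduced here for illustration, it is not part of the extension):

```python
def total(counts):
    # Counts arrive as text from UnicodeText columns; None means the row
    # had no value, and `or 0` turns it into a zero instead of a TypeError.
    return sum(int(c or 0) for c in counts)


assert total([u'3', None, u'4']) == 7
assert total([]) == 0
```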
def update_social(period_name, data):
    # Clean up first.
    model.Session.query(GA_ReferralStat).\
        filter(GA_ReferralStat.period_name==period_name).delete()

    for url, data in data.iteritems():
        for entry in data:
            source = entry[0]
            count = entry[1]

            item = model.Session.query(GA_ReferralStat).\
                filter(GA_ReferralStat.period_name==period_name).\
                filter(GA_ReferralStat.source==source).\
                filter(GA_ReferralStat.url==url).first()
            if item:
                item.count = item.count + count
                model.Session.add(item)
            else:
                # create the row
                values = {'id': make_uuid(),
                          'period_name': period_name,
                          'source': source,
                          'url': url,
                          'count': count,
                          }
                model.Session.add(GA_ReferralStat(**values))
            model.Session.commit()
def update_publisher_stats(period_name):
    """
    Updates the publisher stats from the data retrieved for /dataset/*
    and /publisher/*. Runs against each dataset and generates the
    totals for the entire tree beneath each publisher.
    """
    toplevel = get_top_level()
    publishers = model.Session.query(model.Group).\
        filter(model.Group.type=='publisher').\
        filter(model.Group.state=='active').all()
    for publisher in publishers:
        views, visits, subpub = update_publisher(period_name, publisher, publisher.name)
        parent, parents = '', publisher.get_groups('publisher')
        if parents:
            parent = parents[0].name

        item = model.Session.query(GA_Publisher).\
            filter(GA_Publisher.period_name==period_name).\
            filter(GA_Publisher.publisher_name==publisher.name).first()
        if item:
            item.views = views
            item.visits = visits
            item.publisher_name = publisher.name
            item.toplevel = publisher in toplevel
            item.subpublishercount = subpub
            item.parent = parent
            model.Session.add(item)
        else:
            # create the row
            values = {'id': make_uuid(),
                      'period_name': period_name,
                      'publisher_name': publisher.name,
                      'views': views,
                      'visits': visits,
                      'toplevel': publisher in toplevel,
                      'subpublishercount': subpub,
                      'parent': parent
                      }
            model.Session.add(GA_Publisher(**values))
        model.Session.commit()
def update_publisher(period_name, pub, part=''):
    views, visits, subpub = 0, 0, 0
    for publisher in go_down_tree(pub):
        subpub = subpub + 1
        items = model.Session.query(GA_Url).\
            filter(GA_Url.period_name==period_name).\
            filter(GA_Url.department_id==publisher.name).all()
        for item in items:
            views = views + int(item.pageviews)
            visits = visits + int(item.visits)

    return views, visits, (subpub - 1)
def get_top_level():
    '''Returns the top level publishers.'''
    # NB the join condition must be built with sqlalchemy's and_(), not
    # Python's `and`, which would silently discard all but one clause.
    from sqlalchemy import and_
    return model.Session.query(model.Group).\
           outerjoin(model.Member,
                     and_(model.Member.table_id == model.Group.id,
                          model.Member.table_name == 'group',
                          model.Member.state == 'active')).\
           filter(model.Member.id==None).\
           filter(model.Group.type=='publisher').\
           order_by(model.Group.name).all()
def get_children(publisher): | def get_children(publisher): |
'''Finds child publishers for the given publisher (object). (Not recursive)''' | '''Finds child publishers for the given publisher (object). (Not recursive)''' |
from ckan.model.group import HIERARCHY_CTE | from ckan.model.group import HIERARCHY_CTE |
return model.Session.query(model.Group).\ | return model.Session.query(model.Group).\ |
from_statement(HIERARCHY_CTE).params(id=publisher.id, type='publisher').\ | from_statement(HIERARCHY_CTE).params(id=publisher.id, type='publisher').\ |
all() | all() |
def go_down_tree(publisher): | def go_down_tree(publisher): |
'''Provided with a publisher object, it walks down the hierarchy and yields each publisher, | '''Provided with a publisher object, it walks down the hierarchy and yields each publisher, |
including the one you supply.''' | including the one you supply.''' |
yield publisher | yield publisher |
for child in get_children(publisher): | for child in get_children(publisher): |
for grandchild in go_down_tree(child): | for grandchild in go_down_tree(child): |
yield grandchild | yield grandchild |
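The recursion above can be tried in isolation. A minimal, self-contained sketch of the same depth-first walk over a plain dict of child-name lists rather than CKAN group objects (the function and the example names are hypothetical):

```python
def walk_down_tree(children, name):
    '''Yields `name` and every descendant of it, depth-first, mirroring
    the go_down_tree() generator but without needing a CKAN model.'''
    yield name
    for child in children.get(name, []):
        for descendant in walk_down_tree(children, child):
            yield descendant
```

With `children = {'defra': ['ea']}`, `walk_down_tree(children, 'defra')` yields `'defra'` then `'ea'`.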
def delete(period_name): | def delete(period_name): |
''' | ''' |
Deletes table data for the specified period, or specify 'all' | Deletes table data for the specified period, or specify 'all' |
for all periods. | for all periods. |
''' | ''' |
for object_type in (GA_Url, GA_Stat, GA_Publisher, GA_ReferralStat): | for object_type in (GA_Url, GA_Stat, GA_Publisher, GA_ReferralStat): |
q = model.Session.query(object_type) | q = model.Session.query(object_type) |
if period_name != 'all': | if period_name != 'all': |
q = q.filter_by(period_name=period_name) | q = q.filter_by(period_name=period_name) |
q.delete() | q.delete() |
model.Session.commit() | model.Session.commit() |
def get_score_for_dataset(dataset_name):
    '''
    Returns a "current popularity" score for a dataset,
    based on how many views it has had recently.
    '''
    import datetime
    now = datetime.datetime.now()
    last_month = now - datetime.timedelta(days=30)
    period_names = ['%s-%02d' % (last_month.year, last_month.month),
                    '%s-%02d' % (now.year, now.month),
                    ]

    score = 0
    for period_name in period_names:
        score /= 2 # previous periods are discounted by 50%
        entry = model.Session.query(GA_Url)\
                .filter(GA_Url.period_name==period_name)\
                .filter(GA_Url.package_id==dataset_name).first()
        if entry:
            views = float(entry.pageviews)
            if entry.period_complete_day:
                views_per_day = views / entry.period_complete_day
            else:
                views_per_day = views / 15 # guess
            score += views_per_day

    score = int(score * 100)
    log.debug('Popularity %s: %s', score, dataset_name)
    return score
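The 50% discounting used by the popularity score can be separated from the database lookups. A sketch of just that arithmetic, assuming per-period views-per-day figures are supplied oldest first (the function name is hypothetical):

```python
def discounted_score(views_per_day_by_period):
    '''Folds per-period views-per-day figures (oldest first) into a single
    score, halving what has accumulated before each newer period is added.'''
    score = 0.0
    for views_per_day in views_per_day_by_period:
        score /= 2  # previous periods are discounted by 50%
        score += views_per_day
    return int(score * 100)
```

For example, `discounted_score([10.0, 4.0])` is `int((10.0 / 2 + 4.0) * 100)`, i.e. 900.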
import logging | import logging |
import operator | import operator |
import ckan.lib.base as base | import ckan.lib.base as base |
import ckan.model as model | import ckan.model as model |
from ckan.logic import get_action | from ckan.logic import get_action |
from ckanext.ga_report.ga_model import GA_Url, GA_Publisher | from ckanext.ga_report.ga_model import GA_Url, GA_Publisher |
from ckanext.ga_report.controller import _get_publishers | from ckanext.ga_report.controller import _get_publishers |
_log = logging.getLogger(__name__) | _log = logging.getLogger(__name__) |
def popular_datasets(count=10): | def popular_datasets(count=10): |
import random | import random |
    publisher = None
    datasets = None
publishers = _get_publishers(30) | publishers = _get_publishers(30) |
total = len(publishers) | total = len(publishers) |
while not publisher or not datasets: | while not publisher or not datasets: |
rand = random.randrange(0, total) | rand = random.randrange(0, total) |
publisher = publishers[rand][0] | publisher = publishers[rand][0] |
if not publisher.state == 'active': | if not publisher.state == 'active': |
publisher = None | publisher = None |
continue | continue |
datasets = _datasets_for_publisher(publisher, 10)[:count] | datasets = _datasets_for_publisher(publisher, 10)[:count] |
ctx = { | ctx = { |
'datasets': datasets, | 'datasets': datasets, |
'publisher': publisher | 'publisher': publisher |
} | } |
return base.render_snippet('ga_report/ga_popular_datasets.html', **ctx) | return base.render_snippet('ga_report/ga_popular_datasets.html', **ctx) |
def single_popular_dataset(top=20): | def single_popular_dataset(top=20): |
'''Returns a random dataset from the most popular ones. | '''Returns a random dataset from the most popular ones. |
:param top: the number of top datasets to select from | :param top: the number of top datasets to select from |
''' | ''' |
import random | import random |
top_datasets = model.Session.query(GA_Url).\ | top_datasets = model.Session.query(GA_Url).\ |
filter(GA_Url.url.like('/dataset/%')).\ | filter(GA_Url.url.like('/dataset/%')).\ |
order_by('ga_url.pageviews::int desc') | order_by('ga_url.pageviews::int desc') |
num_top_datasets = top_datasets.count() | num_top_datasets = top_datasets.count() |
dataset = None | dataset = None |
if num_top_datasets: | if num_top_datasets: |
count = 0 | count = 0 |
while not dataset: | while not dataset: |
rand = random.randrange(0, min(top, num_top_datasets)) | rand = random.randrange(0, min(top, num_top_datasets)) |
ga_url = top_datasets[rand] | ga_url = top_datasets[rand] |
dataset = model.Package.get(ga_url.url[len('/dataset/'):]) | dataset = model.Package.get(ga_url.url[len('/dataset/'):]) |
if dataset and not dataset.state == 'active': | if dataset and not dataset.state == 'active': |
dataset = None | dataset = None |
            # When testing, it is possible that top datasets are not available
            # so only go round this loop a few times before falling back on
            # a random dataset.
count += 1 | |
if count > 10: | |
break | |
if not dataset: | if not dataset: |
# fallback | # fallback |
dataset = model.Session.query(model.Package)\ | dataset = model.Session.query(model.Package)\ |
.filter_by(state='active').first() | .filter_by(state='active').first() |
if not dataset: | if not dataset: |
return None | return None |
dataset_dict = get_action('package_show')({'model': model, | dataset_dict = get_action('package_show')({'model': model, |
'session': model.Session, | 'session': model.Session, |
'validate': False}, | 'validate': False}, |
{'id':dataset.id}) | {'id':dataset.id}) |
return dataset_dict | return dataset_dict |
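The loop in single_popular_dataset() is a bounded retry: pick at random, reject inactive results, and give up after a fixed number of attempts. The same pattern in isolation (all names here are hypothetical):

```python
import random

def pick_active(candidates, is_active, max_tries=10):
    '''Returns a randomly chosen candidate for which is_active() is true,
    or None once max_tries random picks have all been rejected.'''
    if not candidates:
        return None
    for _ in range(max_tries):
        choice = random.choice(candidates)
        if is_active(choice):
            return choice
    return None
```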
def single_popular_dataset_html(top=20): | def single_popular_dataset_html(top=20): |
dataset_dict = single_popular_dataset(top) | dataset_dict = single_popular_dataset(top) |
    if not dataset_dict:
        return ''
    groups = dataset_dict.get('groups', [])
publishers = [ g for g in groups if g.get('type') == 'publisher' ] | publishers = [ g for g in groups if g.get('type') == 'publisher' ] |
publisher = publishers[0] if publishers else {'name':'', 'title': ''} | publisher = publishers[0] if publishers else {'name':'', 'title': ''} |
context = { | context = { |
'dataset': dataset_dict, | 'dataset': dataset_dict, |
        'publisher': publisher
} | } |
return base.render_snippet('ga_report/ga_popular_single.html', **context) | return base.render_snippet('ga_report/ga_popular_single.html', **context) |
def most_popular_datasets(publisher, count=20): | def most_popular_datasets(publisher, count=20): |
if not publisher: | if not publisher: |
_log.error("No valid publisher passed to 'most_popular_datasets'") | _log.error("No valid publisher passed to 'most_popular_datasets'") |
return "" | return "" |
results = _datasets_for_publisher(publisher, count) | results = _datasets_for_publisher(publisher, count) |
ctx = { | ctx = { |
'dataset_count': len(results), | 'dataset_count': len(results), |
'datasets': results, | 'datasets': results, |
'publisher': publisher | 'publisher': publisher |
} | } |
return base.render_snippet('ga_report/publisher/popular.html', **ctx) | return base.render_snippet('ga_report/publisher/popular.html', **ctx) |
def _datasets_for_publisher(publisher, count): | def _datasets_for_publisher(publisher, count): |
datasets = {} | datasets = {} |
entries = model.Session.query(GA_Url).\ | entries = model.Session.query(GA_Url).\ |
filter(GA_Url.department_id==publisher.name).\ | filter(GA_Url.department_id==publisher.name).\ |
filter(GA_Url.url.like('/dataset/%')).\ | filter(GA_Url.url.like('/dataset/%')).\ |
order_by('ga_url.pageviews::int desc').all() | order_by('ga_url.pageviews::int desc').all() |
for entry in entries: | for entry in entries: |
if len(datasets) < count: | if len(datasets) < count: |
            p = model.Package.get(entry.url[len('/dataset/'):])
            if not p:
                continue
            if p not in datasets:
                datasets[p] = {'views': 0, 'visits': 0}
datasets[p]['views'] = datasets[p]['views'] + int(entry.pageviews) | datasets[p]['views'] = datasets[p]['views'] + int(entry.pageviews) |
datasets[p]['visits'] = datasets[p]['visits'] + int(entry.visits) | datasets[p]['visits'] = datasets[p]['visits'] + int(entry.visits) |
results = [] | results = [] |
for k, v in datasets.iteritems(): | for k, v in datasets.iteritems(): |
results.append((k,v['views'],v['visits'])) | results.append((k,v['views'],v['visits'])) |
return sorted(results, key=operator.itemgetter(1), reverse=True) | return sorted(results, key=operator.itemgetter(1), reverse=True) |
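_datasets_for_publisher() follows a plain aggregate-then-sort shape: sum views and visits per dataset, then order by views. That shape in miniature, with tuples standing in for GA_Url rows (all names hypothetical):

```python
import operator

def rank_by_views(rows):
    '''rows are (dataset, pageviews, visits) tuples, possibly repeated per
    dataset; returns (dataset, views, visits) sorted by views, descending.'''
    totals = {}
    for dataset, views, visits in rows:
        stats = totals.setdefault(dataset, {'views': 0, 'visits': 0})
        stats['views'] += int(views)
        stats['visits'] += int(visits)
    results = [(d, v['views'], v['visits']) for d, v in totals.items()]
    return sorted(results, key=operator.itemgetter(1), reverse=True)
```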