ckanext-ga-report | ckanext-ga-report |
================= | ================= |
**Status:** Development | **Status:** Development |
**CKAN Version:** 1.7.1+ | **CKAN Version:** 1.7.1+ |
Overview | Overview |
-------- | -------- |
For creating detailed reports of CKAN analytics, including totals per group. | For creating detailed reports of CKAN analytics, including totals per group. |
Whereas ckanext-googleanalytics focuses on providing page view stats for a recent period and for all time (aimed at end users), ckanext-ga-report is more interested in building regular periodic reports (more for site managers to monitor).
Contents of this extension: | Contents of this extension: |
* Use the CLI tool to download Google Analytics data for each time period into this extension's database tables | * Use the CLI tool to download Google Analytics data for each time period into this extension's database tables |
* Users can view the data as web page reports | * Users can view the data as web page reports |
Installation | Installation |
------------ | ------------ |
1. Activate your CKAN python environment and install this extension's software::
    $ . pyenv/bin/activate
$ pip install -e git+https://github.com/datagovuk/ckanext-ga-report.git#egg=ckanext-ga-report | $ pip install -e git+https://github.com/datagovuk/ckanext-ga-report.git#egg=ckanext-ga-report |
2. Ensure your development.ini (or similar) contains the info about your Google Analytics account and configuration::
googleanalytics.id = UA-1010101-1 | googleanalytics.id = UA-1010101-1 |
    googleanalytics.account = Account name (e.g. data.gov.uk, see top level item at https://www.google.com/analytics)
ga-report.period = monthly | ga-report.period = monthly |
ga-report.bounce_url = /data | |
The ``ga-report.bounce_url`` option specifies the path to use when calculating bounces. For DGU this is /data,
but you may want to set this to /.

Note that your credentials will be readable by system administrators on your server. Rather than use sensitive account details, it is suggested you give access to the GA account to a new Google account that you create just for this purpose.
3. Set up this extension's database tables using a paster command. (Ensure your CKAN pyenv is still activated, run the command from ``src/ckanext-ga-report``, alter the ``--config`` option to point to your site config file):: | 3. Set up this extension's database tables using a paster command. (Ensure your CKAN pyenv is still activated, run the command from ``src/ckanext-ga-report``, alter the ``--config`` option to point to your site config file):: |
$ paster initdb --config=../ckan/development.ini | $ paster initdb --config=../ckan/development.ini |
4. Enable the extension in your CKAN config file by adding it to ``ckan.plugins``:: | 4. Enable the extension in your CKAN config file by adding it to ``ckan.plugins``:: |
ckan.plugins = ga-report | ckan.plugins = ga-report |
Troubleshooting
---------------
* ``(ProgrammingError) relation "ga_url" does not exist`` | |
This means that the ``paster initdb`` step has not been run successfully. Refer to the installation instructions for this extension. | |
Authorization | Authorization |
-------------- | -------------- |
Before you can access the data, you need to set up the OAuth details, which you can do by following the `instructions <https://developers.google.com/analytics/resources/tutorials/hello-analytics-api>`_. The outcome is a file called credentials.json, which should look like credentials.json.template with the relevant fields completed. The steps are repeated below for convenience:
1. Visit the `Google APIs Console <https://code.google.com/apis/console>`_ | 1. Visit the `Google APIs Console <https://code.google.com/apis/console>`_ |
2. Sign-in and create a project or use an existing project. | 2. Sign-in and create a project or use an existing project. |
3. In the `Services pane <https://code.google.com/apis/console#:services>`_ , activate Analytics API for your project. If prompted, read and accept the terms of service. | 3. In the `Services pane <https://code.google.com/apis/console#:services>`_ , activate Analytics API for your project. If prompted, read and accept the terms of service. |
4. Go to the `API Access pane <https://code.google.com/apis/console/#:access>`_ | 4. Go to the `API Access pane <https://code.google.com/apis/console/#:access>`_ |
5. Click Create an OAuth 2.0 client ID.... | 5. Click Create an OAuth 2.0 client ID.... |
6. Fill out the Branding Information fields and click Next. | 6. Fill out the Branding Information fields and click Next. |
7. In Client ID Settings, set Application type to Installed application. | 7. In Client ID Settings, set Application type to Installed application. |
8. Click Create client ID.
9. The details you need below are the Client ID, Client secret, and Redirect URIs.
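The credentials.json.template file itself is not reproduced here, but a Google "client secrets" file for an installed application typically has roughly the following shape (field names follow Google's client-secrets format; all values shown are placeholders, not real credentials):

```json
{
  "installed": {
    "client_id": "YOUR-CLIENT-ID.apps.googleusercontent.com",
    "client_secret": "YOUR-CLIENT-SECRET",
    "redirect_uris": ["urn:ietf:wg:oauth:2.0:oob", "http://localhost"],
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://accounts.google.com/o/oauth2/token"
  }
}
```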
Once you have set up your credentials.json file, you can generate an OAuth token file with the
following command. Once you have finished granting permission in the browser, your OAuth token
is stored in a file called token.dat::
$ paster getauthtoken --config=../ckan/development.ini | $ paster getauthtoken --config=../ckan/development.ini |
Tutorial | Tutorial |
-------- | -------- |
Download some GA data and store it in CKAN's database. (Ensure your CKAN pyenv is still activated, run the command from ``src/ckanext-ga-report``, and alter the ``--config`` option to point to your site config file.) Specify the name of your auth file (token.dat by default) from the previous step::
$ paster loadanalytics token.dat latest --config=../ckan/development.ini | $ paster loadanalytics token.dat latest --config=../ckan/development.ini |
The value after the token file specifies how much data to retrieve; it can be:
* **all** - data for all time (since 2010) | * **all** - data for all time (since 2010) |
* **latest** - (default) just the 'latest' data | * **latest** - (default) just the 'latest' data |
* **YYYY-MM** - just data for the specified month
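The command's handling of this argument can be sketched as follows (``parse_time_period`` is an illustrative helper, not part of the extension; it mirrors the paster command's dispatch, which passes 'all' and 'latest' through and parses anything else with strptime):

```python
import datetime

def parse_time_period(arg):
    # 'all' and 'latest' are passed through as-is; anything else must be
    # a YYYY-MM month string, parsed the same way the paster command does.
    if arg in ('all', 'latest'):
        return arg
    return datetime.datetime.strptime(arg, '%Y-%m')

print(parse_time_period('2012-09'))  # first of the named month
```

Any other value (e.g. a full YYYY-MM-DD date) will raise a ValueError from strptime.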
Software Licence | Software Licence |
================ | ================ |
This software is developed by Cabinet Office. It is Crown Copyright and opened up under the Open Government Licence (OGL) (which is compatible with the Creative Commons Attribution License).
OGL terms: http://www.nationalarchives.gov.uk/doc/open-government-licence/ | OGL terms: http://www.nationalarchives.gov.uk/doc/open-government-licence/ |
import logging | import logging |
import datetime | import datetime |
from ckan.lib.cli import CkanCommand | from ckan.lib.cli import CkanCommand |
# No other CKAN imports allowed until _load_config is run, | # No other CKAN imports allowed until _load_config is run, |
# or logging is disabled | # or logging is disabled |
class InitDB(CkanCommand): | class InitDB(CkanCommand): |
"""Initialise the extension's database tables | """Initialise the extension's database tables |
""" | """ |
summary = __doc__.split('\n')[0] | summary = __doc__.split('\n')[0] |
usage = __doc__ | usage = __doc__ |
max_args = 0 | max_args = 0 |
min_args = 0 | min_args = 0 |
def command(self): | def command(self): |
self._load_config() | self._load_config() |
import ckan.model as model | import ckan.model as model |
model.Session.remove() | model.Session.remove() |
model.Session.configure(bind=model.meta.engine) | model.Session.configure(bind=model.meta.engine) |
log = logging.getLogger('ckanext.ga-report') | log = logging.getLogger('ckanext.ga-report') |
import ga_model | import ga_model |
ga_model.init_tables() | ga_model.init_tables() |
log.info("DB tables are setup") | log.info("DB tables are setup") |
class GetAuthToken(CkanCommand): | class GetAuthToken(CkanCommand): |
""" Get's the Google auth token | """ Get's the Google auth token |
Usage: paster getauthtoken <credentials_file> | Usage: paster getauthtoken <credentials_file> |
Where <credentials_file> is the file name containing the details | Where <credentials_file> is the file name containing the details |
for the service (obtained from https://code.google.com/apis/console). | for the service (obtained from https://code.google.com/apis/console). |
By default this is set to credentials.json | By default this is set to credentials.json |
""" | """ |
summary = __doc__.split('\n')[0] | summary = __doc__.split('\n')[0] |
usage = __doc__ | usage = __doc__ |
max_args = 0 | max_args = 0 |
min_args = 0 | min_args = 0 |
def command(self): | def command(self): |
""" | """ |
In this case we don't want a valid service, but rather just to | In this case we don't want a valid service, but rather just to |
force the user through the auth flow. We allow this to complete to | force the user through the auth flow. We allow this to complete to |
act as a form of verification instead of just getting the token and | act as a form of verification instead of just getting the token and |
assuming it is correct. | assuming it is correct. |
""" | """ |
from ga_auth import init_service | from ga_auth import init_service |
init_service('token.dat', | init_service('token.dat', |
self.args[0] if self.args | self.args[0] if self.args |
else 'credentials.json') | else 'credentials.json') |
class LoadAnalytics(CkanCommand): | class LoadAnalytics(CkanCommand): |
"""Get data from Google Analytics API and save it | """Get data from Google Analytics API and save it |
in the ga_model | in the ga_model |
Usage: paster loadanalytics <tokenfile> <time-period> | Usage: paster loadanalytics <tokenfile> <time-period> |
Where <tokenfile> is the name of the auth token file from | Where <tokenfile> is the name of the auth token file from |
the getauthtoken step. | the getauthtoken step. |
And where <time-period> is: | And where <time-period> is: |
all - data for all time | all - data for all time |
latest - (default) just the 'latest' data | latest - (default) just the 'latest' data |
YYYY-MM - just data for the specific month | YYYY-MM - just data for the specific month |
""" | """ |
summary = __doc__.split('\n')[0] | summary = __doc__.split('\n')[0] |
usage = __doc__ | usage = __doc__ |
max_args = 2 | max_args = 2 |
min_args = 1 | min_args = 1 |
def __init__(self, name): | |
super(LoadAnalytics, self).__init__(name) | |
self.parser.add_option('-d', '--delete-first', | |
action='store_true', | |
default=False, | |
dest='delete_first', | |
help='Delete data for the period first') | |
def command(self): | def command(self): |
self._load_config() | self._load_config() |
from download_analytics import DownloadAnalytics | from download_analytics import DownloadAnalytics |
from ga_auth import (init_service, get_profile_id) | from ga_auth import (init_service, get_profile_id) |
try: | try: |
svc = init_service(self.args[0], None) | svc = init_service(self.args[0], None) |
except TypeError: | except TypeError: |
            print ('Have you correctly run the getauthtoken task and '
                   'specified the correct token file?')
return | return |
        downloader = DownloadAnalytics(svc, profile_id=get_profile_id(svc),
                                       delete_first=self.options.delete_first)
time_period = self.args[1] if self.args and len(self.args) > 1 \ | time_period = self.args[1] if self.args and len(self.args) > 1 \ |
else 'latest' | else 'latest' |
if time_period == 'all': | if time_period == 'all': |
downloader.all_() | downloader.all_() |
elif time_period == 'latest': | elif time_period == 'latest': |
downloader.latest() | downloader.latest() |
else: | else: |
# The month to use | # The month to use |
for_date = datetime.datetime.strptime(time_period, '%Y-%m') | for_date = datetime.datetime.strptime(time_period, '%Y-%m') |
downloader.specific_month(for_date) | downloader.specific_month(for_date) |
import re | |
import csv | |
import sys | |
import logging | import logging |
import operator | import operator |
import collections

from ckan.lib.base import (BaseController, c, g, render, request, response, abort)
import sqlalchemy | import sqlalchemy |
from sqlalchemy import func, cast, Integer | from sqlalchemy import func, cast, Integer |
import ckan.model as model | import ckan.model as model |
from ga_model import GA_Url, GA_Stat, GA_ReferralStat
log = logging.getLogger('ckanext.ga-report') | log = logging.getLogger('ckanext.ga-report') |
def _get_month_name(strdate): | def _get_month_name(strdate): |
import calendar | import calendar |
from time import strptime | from time import strptime |
d = strptime(strdate, '%Y-%m') | d = strptime(strdate, '%Y-%m') |
return '%s %s' % (calendar.month_name[d.tm_mon], d.tm_year) | return '%s %s' % (calendar.month_name[d.tm_mon], d.tm_year) |
def _month_details(cls): | def _month_details(cls): |
'''Returns a list of all the month names''' | |
months = [] | months = [] |
    vals = model.Session.query(cls.period_name).filter(cls.period_name!='All').distinct().all()
for m in vals: | for m in vals: |
months.append( (m[0], _get_month_name(m[0]))) | months.append( (m[0], _get_month_name(m[0]))) |
return sorted(months, key=operator.itemgetter(0), reverse=True) | return sorted(months, key=operator.itemgetter(0), reverse=True) |
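As a stand-alone illustration of the formatting and ordering above (assuming the stored period names are YYYY-MM strings, as elsewhere in this extension):

```python
import calendar
import operator
from time import strptime

def month_name(strdate):
    # Same conversion as _get_month_name: '2012-10' -> 'October 2012'
    d = strptime(strdate, '%Y-%m')
    return '%s %s' % (calendar.month_name[d.tm_mon], d.tm_year)

months = [(m, month_name(m)) for m in ['2012-01', '2012-10']]
# Newest period first, as _month_details returns them
months = sorted(months, key=operator.itemgetter(0), reverse=True)
print(months)  # [('2012-10', 'October 2012'), ('2012-01', 'January 2012')]
```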
class GaReport(BaseController): | class GaReport(BaseController): |
def csv(self, month): | def csv(self, month): |
import csv | import csv |
        q = model.Session.query(GA_Stat)
        if month != 'all':
            q = q.filter(GA_Stat.period_name==month)
        entries = q.order_by('GA_Stat.period_name, GA_Stat.stat_name, GA_Stat.key').all()
response.headers['Content-Type'] = "text/csv; charset=utf-8" | response.headers['Content-Type'] = "text/csv; charset=utf-8" |
response.headers['Content-Disposition'] = str('attachment; filename=stats_%s.csv' % (month,)) | |
writer = csv.writer(response) | writer = csv.writer(response) |
writer.writerow(["Period", "Statistic", "Key", "Value"]) | writer.writerow(["Period", "Statistic", "Key", "Value"]) |
for entry in entries: | for entry in entries: |
writer.writerow([entry.period_name.encode('utf-8'), | writer.writerow([entry.period_name.encode('utf-8'), |
entry.stat_name.encode('utf-8'), | entry.stat_name.encode('utf-8'), |
entry.key.encode('utf-8'), | entry.key.encode('utf-8'), |
entry.value.encode('utf-8')]) | entry.value.encode('utf-8')]) |
def index(self): | def index(self): |
# Get the month details by fetching distinct values and determining the | # Get the month details by fetching distinct values and determining the |
# month names from the values. | # month names from the values. |
c.months = _month_details(GA_Stat) | c.months = _month_details(GA_Stat) |
# Work out which month to show, based on query params of the first item | # Work out which month to show, based on query params of the first item |
        c.month_desc = 'all months'
        c.month = request.params.get('month', '')
        if c.month:
            c.month_desc = ''.join([m[1] for m in c.months if m[0]==c.month])

        q = model.Session.query(GA_Stat).\
            filter(GA_Stat.stat_name=='Totals')
        if c.month:
            q = q.filter(GA_Stat.period_name==c.month)
        entries = q.order_by('ga_stat.key').all()

        def clean_key(key, val):
            if key in ['Average time on site', 'Pages per visit', 'New visits', 'Bounces']:
                val = "%.2f" % round(float(val), 2)
                if key == 'Average time on site':
                    mins, secs = divmod(float(val), 60)
                    hours, mins = divmod(mins, 60)
                    val = '%02d:%02d:%02d (%s seconds) ' % (hours, mins, secs, val)
                if key in ['New visits', 'Bounces']:
                    val = "%s%%" % val
            if key in ['Total page views', 'Total visits']:
                val = int(val)
            return key, val

        c.global_totals = []
        if c.month:
            for e in entries:
                key, val = clean_key(e.key, e.value)
                c.global_totals.append((key, val))
        else:
            d = collections.defaultdict(list)
            for e in entries:
                d[e.key].append(float(e.value))
            for k, v in d.iteritems():
                if k in ['Total page views', 'Total visits']:
                    v = sum(v)
                else:
                    v = float(sum(v))/len(v)
                key, val = clean_key(k, v)
                c.global_totals.append((key, val))
        c.global_totals = sorted(c.global_totals, key=operator.itemgetter(0))
keys = { | keys = { |
            'Browser versions': 'browser_versions',
            'Browsers': 'browsers',
            'Operating Systems versions': 'os_versions',
            'Operating Systems': 'os',
'Social sources': 'social_networks', | 'Social sources': 'social_networks', |
'Languages': 'languages', | 'Languages': 'languages', |
'Country': 'country' | 'Country': 'country' |
} | } |
        def shorten_name(name, length=60):
            return (name[:length] + '..') if len(name) > length else name
def fill_out_url(url): | |
import urlparse | |
return urlparse.urljoin(g.site_url, url) | |
c.social_referrer_totals, c.social_referrers = [], [] | |
q = model.Session.query(GA_ReferralStat) | |
q = q.filter(GA_ReferralStat.period_name==c.month) if c.month else q | |
q = q.order_by('ga_referrer.count::int desc') | |
for entry in q.all(): | |
c.social_referrers.append((shorten_name(entry.url), fill_out_url(entry.url), | |
entry.source,entry.count)) | |
q = model.Session.query(GA_ReferralStat.url, | |
func.sum(GA_ReferralStat.count).label('count')) | |
q = q.filter(GA_ReferralStat.period_name==c.month) if c.month else q | |
q = q.order_by('count desc').group_by(GA_ReferralStat.url) | |
for entry in q.all(): | |
c.social_referrer_totals.append((shorten_name(entry[0]), fill_out_url(entry[0]),'', | |
entry[1])) | |
        for k, v in keys.iteritems():
            q = model.Session.query(GA_Stat).\
                filter(GA_Stat.stat_name==k)
            if c.month:
                q = q.filter(GA_Stat.period_name==c.month).\
                    order_by('ga_stat.value::int desc')

            d = collections.defaultdict(int)
            for e in q.all():
                d[e.key] += int(e.value)
            entries = []
            for key, val in d.iteritems():
                entries.append((key, val,))
            entries = sorted(entries, key=operator.itemgetter(1), reverse=True)

            # Get the total for each set of values and then set the value as
            # a percentage of the total
            if k == 'Social sources':
                total = sum([x for n, x in c.global_totals if n == 'Total visits'])
            else:
                total = sum([num for _, num in entries])
            setattr(c, v, [(k, _percent(v, total)) for k, v in entries])
return render('ga_report/site/index.html') | return render('ga_report/site/index.html') |
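The all-months aggregation used for the totals in index() can be sketched stand-alone: count-like stats ('Total page views', 'Total visits') are summed across months, while rate-like stats are averaged. The sample rows below are made up for illustration:

```python
import collections

rows = [('Total visits', '10'), ('Total visits', '30'),
        ('Pages per visit', '2.0'), ('Pages per visit', '4.0')]

# Group the per-month values by stat key
d = collections.defaultdict(list)
for key, val in rows:
    d[key].append(float(val))

totals = {}
for k, v in d.items():
    if k in ['Total page views', 'Total visits']:
        totals[k] = sum(v)           # counts are summed
    else:
        totals[k] = sum(v) / len(v)  # rates are averaged
print(totals)  # {'Total visits': 40.0, 'Pages per visit': 3.0}
```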
class GaDatasetReport(BaseController):
    """
    Displays the pageview and visit count for datasets,
    with options to filter by publisher and time period.
""" | """ |
    def publisher_csv(self, month):
        '''
        Returns a CSV of each publisher with the total number of dataset
        views & visits.
        '''
c.month = month if not month == 'all' else '' | |
response.headers['Content-Type'] = "text/csv; charset=utf-8" | |
response.headers['Content-Disposition'] = str('attachment; filename=publishers_%s.csv' % (month,)) | |
writer = csv.writer(response) | |
writer.writerow(["Publisher Title", "Publisher Name", "Views", "Visits", "Period Name"]) | |
for publisher,view,visit in _get_top_publishers(None): | |
writer.writerow([publisher.title.encode('utf-8'), | |
publisher.name.encode('utf-8'), | |
view, | |
visit, | |
month]) | |
def dataset_csv(self, id='all', month='all'): | |
''' | |
Returns a CSV with the number of views & visits for each dataset. | |
:param id: A Publisher ID or None if you want for all | |
:param month: The time period, or 'all' | |
''' | |
        c.month = month if not month == 'all' else ''
        c.publisher = None
        c.publisher_name = id
        if id != 'all':
            c.publisher = model.Group.get(id)
            if not c.publisher:
                abort(404, 'A publisher with that name could not be found')
            c.publisher_name = c.publisher.name
        packages = self._get_packages(c.publisher)

        response.headers['Content-Type'] = "text/csv; charset=utf-8"
        response.headers['Content-Disposition'] = \
            str('attachment; filename=datasets_%s_%s.csv' % (c.publisher_name, month,))
writer = csv.writer(response) | |
writer.writerow(["Dataset Title", "Dataset Name", "Views", "Visits", "Period Name"]) | |
for package,view,visit in packages: | |
writer.writerow([package.title.encode('utf-8'), | |
package.name.encode('utf-8'), | |
view, | |
visit, | |
month]) | |
def publishers(self): | |
'''A list of publishers and the number of views/visits for each''' | |
# Get the month details by fetching distinct values and determining the | # Get the month details by fetching distinct values and determining the |
# month names from the values. | # month names from the values. |
c.months = _month_details(GA_Url) | c.months = _month_details(GA_Url) |
# Work out which month to show, based on query params of the first item | # Work out which month to show, based on query params of the first item |
        c.month = request.params.get('month', '')
        c.month_desc = 'all months'
        if c.month:
            c.month_desc = ''.join([m[1] for m in c.months if m[0]==c.month])

        c.top_publishers = _get_top_publishers()
return render('ga_report/publisher/index.html') | return render('ga_report/publisher/index.html') |
def _get_packages(self, publisher=None, count=-1): | |
        '''Returns the datasets in order of visits'''
        if count == -1:
            count = sys.maxint

        month = c.month or 'All'
q = model.Session.query(GA_Url,model.Package)\ | |
.filter(model.Package.name==GA_Url.package_id)\ | |
.filter(GA_Url.url.like('/dataset/%')) | |
if publisher: | |
q = q.filter(GA_Url.department_id==publisher.name) | |
q = q.filter(GA_Url.period_name==month) | |
q = q.order_by('ga_url.visitors::int desc') | |
top_packages = [] | |
for entry,package in q.limit(count): | |
if package: | |
top_packages.append((package, entry.pageviews, entry.visitors)) | |
else: | |
                log.warning('Could not find the package associated with %r', entry.url)
return top_packages | |
def read(self): | |
''' | |
Lists the most popular datasets across all publishers | |
''' | |
return self.read_publisher(None) | |
def read_publisher(self, id): | |
''' | |
Lists the most popular datasets for a publisher (or across all publishers) | |
''' | |
count = 20 | |
c.publishers = _get_publishers() | |
id = request.params.get('publisher', id) | |
if id and id != 'all': | |
c.publisher = model.Group.get(id) | |
if not c.publisher: | |
abort(404, 'A publisher with that name could not be found') | |
c.publisher_name = c.publisher.name | |
c.top_packages = [] # package, dataset_views in c.top_packages | c.top_packages = [] # package, dataset_views in c.top_packages |
# Get the month details by fetching distinct values and determining the | # Get the month details by fetching distinct values and determining the |
# month names from the values. | # month names from the values. |
c.months = _month_details(GA_Url) | c.months = _month_details(GA_Url) |
# Work out which month to show, based on query params of the first item | # Work out which month to show, based on query params of the first item |
        c.month = request.params.get('month', '')
        if not c.month:
            c.month_desc = 'all months'
        else:
            c.month_desc = ''.join([m[1] for m in c.months if m[0]==c.month])

        month = c.month or 'All'
        c.publisher_page_views = 0
        q = model.Session.query(GA_Url).\
            filter(GA_Url.url=='/publisher/%s' % c.publisher_name)
        entry = q.filter(GA_Url.period_name==month).first()
        c.publisher_page_views = entry.pageviews if entry else 0

        c.top_packages = self._get_packages(c.publisher, 20)
return render('ga_report/publisher/read.html') | return render('ga_report/publisher/read.html') |
def _get_top_publishers(limit=20): | |
''' | |
Returns a list of the top 20 publishers by dataset visits. | |
(The number to show can be varied with 'limit') | |
''' | |
month = c.month or 'All' | |
connection = model.Session.connection() | |
q = """ | |
select department_id, sum(pageviews::int) views, sum(visitors::int) visits | |
from ga_url | |
where department_id <> '' | |
and period_name=%s | |
group by department_id order by visits desc | |
""" | |
if limit: | |
q = q + " limit %s;" % (limit) | |
top_publishers = [] | |
res = connection.execute(q, month) | |
for row in res: | |
g = model.Group.get(row[0]) | |
if g: | |
top_publishers.append((g, row[1], row[2])) | |
return top_publishers | |
def _get_publishers(): | |
''' | |
Returns a list of all publishers. Each item is a tuple: | |
      (name, title)
''' | |
publishers = [] | |
for pub in model.Session.query(model.Group).\ | |
filter(model.Group.type=='publisher').\ | |
filter(model.Group.state=='active').\ | |
order_by(model.Group.name): | |
publishers.append((pub.name, pub.title)) | |
return publishers | |
def _percent(num, total): | |
p = 100 * float(num)/float(total) | |
return "%.2f%%" % round(p, 2) | |
import os | import os |
import logging | import logging |
import datetime | import datetime |
import collections | |
from pylons import config | from pylons import config |
from ga_model import _normalize_url | |
import ga_model | import ga_model |
#from ga_client import GA | #from ga_client import GA |
log = logging.getLogger('ckanext.ga-report') | log = logging.getLogger('ckanext.ga-report') |
FORMAT_MONTH = '%Y-%m' | FORMAT_MONTH = '%Y-%m' |
MIN_VIEWS = 50 | |
MIN_VISITS = 20 | |
class DownloadAnalytics(object): | class DownloadAnalytics(object): |
'''Downloads and stores analytics info''' | '''Downloads and stores analytics info''' |
    def __init__(self, service=None, profile_id=None, delete_first=False):
self.period = config['ga-report.period'] | self.period = config['ga-report.period'] |
self.service = service | self.service = service |
self.profile_id = profile_id | self.profile_id = profile_id |
self.delete_first = delete_first | |
def specific_month(self, date): | def specific_month(self, date): |
import calendar | import calendar |
first_of_this_month = datetime.datetime(date.year, date.month, 1) | first_of_this_month = datetime.datetime(date.year, date.month, 1) |
_, last_day_of_month = calendar.monthrange(int(date.year), int(date.month)) | _, last_day_of_month = calendar.monthrange(int(date.year), int(date.month)) |
last_of_this_month = datetime.datetime(date.year, date.month, last_day_of_month) | last_of_this_month = datetime.datetime(date.year, date.month, last_day_of_month) |
periods = ((date.strftime(FORMAT_MONTH), | periods = ((date.strftime(FORMAT_MONTH), |
last_day_of_month, | last_day_of_month, |
first_of_this_month, last_of_this_month),) | first_of_this_month, last_of_this_month),) |
self.download_and_store(periods) | self.download_and_store(periods) |
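Stand-alone, the period tuple that specific_month builds for e.g. September 2012 looks like this (using calendar.monthrange to find the month's last day):

```python
import calendar
import datetime

date = datetime.datetime(2012, 9, 1)
first = datetime.datetime(date.year, date.month, 1)
_, last_day = calendar.monthrange(date.year, date.month)
last = datetime.datetime(date.year, date.month, last_day)

# (period_name, period_complete_day, start_date, end_date)
period = (date.strftime('%Y-%m'), last_day, first, last)
print(period[0], period[1])  # 2012-09 30
```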
    def latest(self):
        if self.period == 'monthly':
            # from first of this month to today
            now = datetime.datetime.now()
            first_of_this_month = datetime.datetime(now.year, now.month, 1)
            periods = ((now.strftime(FORMAT_MONTH),
                        now.day,
                        first_of_this_month, now),)
        else:
            raise NotImplementedError
        self.download_and_store(periods)
    def for_date(self, for_date):
        assert isinstance(for_date, datetime.datetime)
        periods = []  # (period_name, period_complete_day, start_date, end_date)
        if self.period == 'monthly':
            year = for_date.year
            month = for_date.month
            now = datetime.datetime.now()
            first_of_this_month = datetime.datetime(now.year, now.month, 1)
            while True:
                first_of_the_month = datetime.datetime(year, month, 1)
                if first_of_the_month == first_of_this_month:
                    # current month - incomplete, up to today
                    periods.append((now.strftime(FORMAT_MONTH),
                                    now.day,
                                    first_of_this_month, now))
                    break
                elif first_of_the_month < first_of_this_month:
                    # a complete month in the past
                    in_the_next_month = first_of_the_month + datetime.timedelta(40)
                    last_of_the_month = datetime.datetime(in_the_next_month.year,
                                                          in_the_next_month.month, 1)\
                                                          - datetime.timedelta(1)
                    periods.append((first_of_the_month.strftime(FORMAT_MONTH), 0,
                                    first_of_the_month, last_of_the_month))
                else:
                    # first_of_the_month has got to the future somehow
                    break
                month += 1
                if month > 12:
                    year += 1
                    month = 1
        else:
            raise NotImplementedError
        self.download_and_store(periods)
    @staticmethod
    def get_full_period_name(period_name, period_complete_day):
        if period_complete_day:
            return period_name + ' (up to %ith)' % period_complete_day
        else:
            return period_name
    def download_and_store(self, periods):
        for period_name, period_complete_day, start_date, end_date in periods:
            log.info('Period "%s" (%s - %s)',
                     self.get_full_period_name(period_name, period_complete_day),
                     start_date.strftime('%Y-%m-%d'),
                     end_date.strftime('%Y-%m-%d'))
            if self.delete_first:
                log.info('Deleting existing Analytics for this period "%s"',
                         period_name)
                ga_model.delete(period_name)
            # Clean up the entries before we run this
            ga_model.pre_update_url_stats(period_name)

            accountName = config.get('googleanalytics.account')

            log.info('Downloading analytics for dataset views')
            data = self.download(start_date, end_date, '~/%s/dataset/[a-z0-9-_]+' % accountName)
            log.info('Storing dataset views (%i rows)', len(data.get('url')))
            self.store(period_name, period_complete_day, data)

            log.info('Downloading analytics for publisher views')
            data = self.download(start_date, end_date, '~/%s/publisher/[a-z0-9-_]+' % accountName)
            log.info('Storing publisher views (%i rows)', len(data.get('url')))
            self.store(period_name, period_complete_day, data)

            log.info('Aggregating datasets by publisher')
            ga_model.update_publisher_stats(period_name)  # about 30 seconds.

            log.info('Downloading and storing analytics for site-wide stats')
            self.sitewide_stats(period_name)

            log.info('Downloading and storing analytics for social networks')
            self.update_social_info(period_name, start_date, end_date)
    def update_social_info(self, period_name, start_date, end_date):
        start_date = start_date.strftime('%Y-%m-%d')
        end_date = end_date.strftime('%Y-%m-%d')
        query = 'ga:hasSocialSourceReferral=~Yes$'
        metrics = 'ga:entrances'
        sort = '-ga:entrances'

        # Supported query params at
        # https://developers.google.com/analytics/devguides/reporting/core/v3/reference
        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            filters=query,
            start_date=start_date,
            metrics=metrics,
            sort=sort,
            dimensions="ga:landingPagePath,ga:socialNetwork",
            max_results=10000,
            end_date=end_date).execute()

        data = collections.defaultdict(list)
        rows = results.get('rows', [])
        for row in rows:
            data[_normalize_url(row[0])].append((row[1], int(row[2])))
        ga_model.update_social(period_name, data)
    def download(self, start_date, end_date, path=None):
        '''Get data from GA for a given time period'''
        start_date = start_date.strftime('%Y-%m-%d')
        end_date = end_date.strftime('%Y-%m-%d')
        query = 'ga:pagePath=%s$' % path
        metrics = 'ga:uniquePageviews, ga:visits'
        sort = '-ga:uniquePageviews'

        # Supported query params at
        # https://developers.google.com/analytics/devguides/reporting/core/v3/reference
        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            filters=query,
            start_date=start_date,
            metrics=metrics,
            sort=sort,
            dimensions="ga:pagePath",
            max_results=10000,
            end_date=end_date).execute()

        if os.getenv('DEBUG'):
            import pprint
            pprint.pprint(results)
            print 'Total results: %s' % results.get('totalResults')

        packages = []
        for entry in results.get('rows'):
            (loc, pageviews, visits) = entry
            url = _normalize_url('http:/' + loc)
            if not url.startswith('/dataset/') and not url.startswith('/publisher/'):
                continue
            packages.append((url, pageviews, visits))  # Temporary hack
        return dict(url=packages)
    def store(self, period_name, period_complete_day, data):
        if 'url' in data:
            ga_model.update_url_stats(period_name, period_complete_day, data['url'])
    def sitewide_stats(self, period_name):
        import calendar
        year, month = period_name.split('-')
        _, last_day_of_month = calendar.monthrange(int(year), int(month))

        start_date = '%s-01' % period_name
        end_date = '%s-%s' % (period_name, last_day_of_month)
        funcs = ['_totals_stats', '_social_stats', '_os_stats',
                 '_locale_stats', '_browser_stats', '_mobile_stats']
        for f in funcs:
            log.info('Downloading analytics for %s', f.split('_')[1])
            getattr(self, f)(start_date, end_date, period_name)
    @staticmethod
    def _get_results(result_data, f):
        data = {}
        for result in result_data:
            key = f(result)
            data[key] = data.get(key, 0) + result[1]
        return data
    def _totals_stats(self, start_date, end_date, period_name):
        """ Fetches distinct totals, total pageviews etc """
        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            start_date=start_date,
            metrics='ga:uniquePageviews',
            sort='-ga:uniquePageviews',
            max_results=10000,
            end_date=end_date).execute()
        result_data = results.get('rows')
        ga_model.update_sitewide_stats(period_name, "Totals", {'Total page views': result_data[0][0]})

        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            start_date=start_date,
            metrics='ga:pageviewsPerVisit,ga:avgTimeOnSite,ga:percentNewVisits,ga:visits',
            max_results=10000,
            end_date=end_date).execute()
        result_data = results.get('rows')
        data = {
            'Pages per visit': result_data[0][0],
            'Average time on site': result_data[0][1],
            'New visits': result_data[0][2],
            'Total visits': result_data[0][3],
        }
        ga_model.update_sitewide_stats(period_name, "Totals", data)

        # Bounces from /data. This url is specified in configuration because
        # for DGU we don't want /.
        path = config.get('ga-report.bounce_url', '/')
        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            filters='ga:pagePath=~%s$' % (path,),
            start_date=start_date,
            metrics='ga:bounces,ga:uniquePageviews',
            dimensions='ga:pagePath',
            max_results=10000,
            end_date=end_date).execute()
        result_data = results.get('rows')
        for result in result_data:
            if result[0] == path:
                bounce, total = [float(x) for x in result[1:]]
                pct = 100 * bounce / total
                log.info('%d bounces from %d total == %s', bounce, total, pct)
                ga_model.update_sitewide_stats(period_name, "Totals", {'Bounces': pct})
    def _locale_stats(self, start_date, end_date, period_name):
        """ Fetches stats about language and country """
        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            start_date=start_date,
            metrics='ga:uniquePageviews',
            sort='-ga:uniquePageviews',
            dimensions="ga:language,ga:country",
            max_results=10000,
            end_date=end_date).execute()
        result_data = results.get('rows')
        data = {}
        for result in result_data:
            data[result[0]] = data.get(result[0], 0) + int(result[2])
        self._filter_out_long_tail(data, MIN_VIEWS)
        ga_model.update_sitewide_stats(period_name, "Languages", data)

        data = {}
        for result in result_data:
            data[result[1]] = data.get(result[1], 0) + int(result[2])
        self._filter_out_long_tail(data, MIN_VIEWS)
        ga_model.update_sitewide_stats(period_name, "Country", data)
    def _social_stats(self, start_date, end_date, period_name):
        """ Finds out which social sites people are referred from """
        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            start_date=start_date,
            metrics='ga:uniquePageviews',
            sort='-ga:uniquePageviews',
            dimensions="ga:socialNetwork,ga:referralPath",
            max_results=10000,
            end_date=end_date).execute()
        result_data = results.get('rows')
        data = {}
        for result in result_data:
            if not result[0] == '(not set)':
                data[result[0]] = data.get(result[0], 0) + int(result[2])
        self._filter_out_long_tail(data, 3)
        ga_model.update_sitewide_stats(period_name, "Social sources", data)
    def _os_stats(self, start_date, end_date, period_name):
        """ Operating system stats """
        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            start_date=start_date,
            metrics='ga:uniquePageviews',
            sort='-ga:uniquePageviews',
            dimensions="ga:operatingSystem,ga:operatingSystemVersion",
            max_results=10000,
            end_date=end_date).execute()
        result_data = results.get('rows')
        data = {}
        for result in result_data:
            data[result[0]] = data.get(result[0], 0) + int(result[2])
        self._filter_out_long_tail(data, MIN_VIEWS)
        ga_model.update_sitewide_stats(period_name, "Operating Systems", data)

        data = {}
        for result in result_data:
            if int(result[2]) >= MIN_VIEWS:
                key = "%s %s" % (result[0], result[1])
                data[key] = result[2]
        ga_model.update_sitewide_stats(period_name, "Operating Systems versions", data)
    def _browser_stats(self, start_date, end_date, period_name):
        """ Information about browsers and browser versions """
        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            start_date=start_date,
            metrics='ga:uniquePageviews',
            sort='-ga:uniquePageviews',
            dimensions="ga:browser,ga:browserVersion",
            max_results=10000,
            end_date=end_date).execute()
        result_data = results.get('rows')
        # e.g. [u'Firefox', u'19.0', u'20']
        data = {}
        for result in result_data:
            data[result[0]] = data.get(result[0], 0) + int(result[2])
        self._filter_out_long_tail(data, MIN_VIEWS)
        ga_model.update_sitewide_stats(period_name, "Browsers", data)

        data = {}
        for result in result_data:
            key = "%s %s" % (result[0], self._filter_browser_version(result[0], result[1]))
            data[key] = data.get(key, 0) + int(result[2])
        self._filter_out_long_tail(data, MIN_VIEWS)
        ga_model.update_sitewide_stats(period_name, "Browser versions", data)
    @classmethod
    def _filter_browser_version(cls, browser, version_str):
        '''
        Simplifies a browser version string if it is detailed.
        i.e. groups together Firefox 3.5.1 and 3.5.2 to be just 3.
        This is helpful when viewing stats and good to protect privacy.
        '''
        ver = version_str
        parts = ver.split('.')
        if len(parts) > 1:
            # Dotted version - just keep the major version number
            ver = parts[0]
        # Special case complex version nums
        if browser in ['Safari', 'Android Browser']:
            ver = parts[0]
            if len(ver) > 2:
                num_hidden_digits = len(ver) - 2
                ver = ver[0] + ver[1] + 'X' * num_hidden_digits
        return ver
    def _mobile_stats(self, start_date, end_date, period_name):
        """ Info about mobile devices """
        results = self.service.data().ga().get(
            ids='ga:' + self.profile_id,
            start_date=start_date,
            metrics='ga:uniquePageviews',
            sort='-ga:uniquePageviews',
            dimensions="ga:mobileDeviceBranding, ga:mobileDeviceInfo",
            max_results=10000,
            end_date=end_date).execute()
        result_data = results.get('rows')
        data = {}
        for result in result_data:
            data[result[0]] = data.get(result[0], 0) + int(result[2])
        self._filter_out_long_tail(data, MIN_VIEWS)
        ga_model.update_sitewide_stats(period_name, "Mobile brands", data)

        data = {}
        for result in result_data:
            data[result[1]] = data.get(result[1], 0) + int(result[2])
        self._filter_out_long_tail(data, MIN_VIEWS)
        ga_model.update_sitewide_stats(period_name, "Mobile devices", data)
    @classmethod
    def _filter_out_long_tail(cls, data, threshold=10):
        '''
        Given data which is a frequency distribution, filter out
        results which are below a threshold count. This is good to protect
        privacy.
        '''
        for key, value in data.items():
            if value < threshold:
                del data[key]
import os

import httplib2
from apiclient.discovery import build
from oauth2client.client import flow_from_clientsecrets
from oauth2client.file import Storage
from oauth2client.tools import run
from pylons import config
def _prepare_credentials(token_filename, credentials_filename):
    """
    Either returns the user's oauth credentials or uses the credentials
    file to generate a token (by forcing the user to login in the browser)
    """
    storage = Storage(token_filename)
    credentials = storage.get()
    if credentials is None or credentials.invalid:
        flow = flow_from_clientsecrets(credentials_filename,
                                       scope='https://www.googleapis.com/auth/analytics.readonly',
                                       message="Can't find the credentials file")
        credentials = run(flow, storage)
    return credentials
def init_service(token_file, credentials_file):
    """
    Given a file containing the user's oauth token (and another with
    credentials in case we need to generate the token) will return a
    service object representing the analytics API.
    """
    http = httplib2.Http()
    credentials = _prepare_credentials(token_file, credentials_file)
    http = credentials.authorize(http)  # authorize the http object
    return build('analytics', 'v3', http=http)
def get_profile_id(service):
    """
    Get the profile ID for this user and the service specified by the
    'googleanalytics.id' configuration option. This function iterates
    over all of the accounts available to the user who invoked the
    service to find one where the account name matches (in case the
    user has several).
    """
    accounts = service.management().accounts().list().execute()
    if not accounts.get('items'):
        return None

    accountName = config.get('googleanalytics.account')
    if not accountName:
        raise Exception('googleanalytics.account needs to be configured')
    webPropertyId = config.get('googleanalytics.id')
    if not webPropertyId:
        raise Exception('googleanalytics.id needs to be configured')
    for acc in accounts.get('items'):
        if acc.get('name') == accountName:
            accountId = acc.get('id')
            profiles = service.management().profiles().list(
                accountId=accountId, webPropertyId=webPropertyId).execute()
            if profiles.get('items'):
                return profiles.get('items')[0].get('id')
    return None
import re
import uuid

from sqlalchemy import Table, Column, MetaData, ForeignKey
from sqlalchemy import types
from sqlalchemy.sql import select
from sqlalchemy.orm import mapper, relation
from sqlalchemy import func

import ckan.model as model
from ckan.lib.base import *


def make_uuid():
    return unicode(uuid.uuid4())
metadata = MetaData()


class GA_Url(object):

    def __init__(self, **kwargs):
        for k, v in kwargs.items():
            setattr(self, k, v)

url_table = Table('ga_url', metadata,
                  Column('id', types.UnicodeText, primary_key=True,
                         default=make_uuid),
                  Column('period_name', types.UnicodeText),
                  Column('period_complete_day', types.Integer),
                  Column('pageviews', types.UnicodeText),
                  Column('visitors', types.UnicodeText),
                  Column('url', types.UnicodeText),
                  Column('department_id', types.UnicodeText),
                  Column('package_id', types.UnicodeText),
                  )
mapper(GA_Url, url_table)


class GA_Stat(object):

    def __init__(self, **kwargs):
        for k, v in kwargs.items():
            setattr(self, k, v)

stat_table = Table('ga_stat', metadata,
                   Column('id', types.UnicodeText, primary_key=True,
                          default=make_uuid),
                   Column('period_name', types.UnicodeText),
                   Column('stat_name', types.UnicodeText),
                   Column('key', types.UnicodeText),
                   Column('value', types.UnicodeText),
                   )
mapper(GA_Stat, stat_table)


class GA_Publisher(object):

    def __init__(self, **kwargs):
        for k, v in kwargs.items():
            setattr(self, k, v)

pub_table = Table('ga_publisher', metadata,
                  Column('id', types.UnicodeText, primary_key=True,
                         default=make_uuid),
                  Column('period_name', types.UnicodeText),
                  Column('publisher_name', types.UnicodeText),
                  Column('views', types.UnicodeText),
                  Column('visitors', types.UnicodeText),
                  Column('toplevel', types.Boolean, default=False),
                  Column('subpublishercount', types.Integer, default=0),
                  Column('parent', types.UnicodeText),
                  )
mapper(GA_Publisher, pub_table)


class GA_ReferralStat(object):

    def __init__(self, **kwargs):
        for k, v in kwargs.items():
            setattr(self, k, v)

referrer_table = Table('ga_referrer', metadata,
                       Column('id', types.UnicodeText, primary_key=True,
                              default=make_uuid),
                       Column('period_name', types.UnicodeText),
                       Column('source', types.UnicodeText),
                       Column('url', types.UnicodeText),
                       Column('count', types.Integer),
                       )
mapper(GA_ReferralStat, referrer_table)
def init_tables():
    metadata.create_all(model.meta.engine)


cached_tables = {}


def get_table(name):
    if name not in cached_tables:
        meta = MetaData()
        meta.reflect(bind=model.meta.engine)
        table = meta.tables[name]
        cached_tables[name] = table
    return cached_tables[name]
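`get_table` reflects the database schema only on the first request for a given table and serves later calls from the module-level cache. The same pattern in miniature, with a counter standing in for the (expensive) `MetaData.reflect()` call — all names here are illustrative:

```python
cached = {}
reflect_calls = []

def expensive_reflect(name):
    # Stand-in for MetaData.reflect(); records how often it actually runs.
    reflect_calls.append(name)
    return name.upper()

def get_cached(name):
    if name not in cached:
        cached[name] = expensive_reflect(name)
    return cached[name]

get_cached('ga_url')
get_cached('ga_url')
# expensive_reflect ran only once; both calls return 'GA_URL'
```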
def _normalize_url(url):
    '''Strip off the hostname etc. Do this before storing it.

    >>> _normalize_url('http://data.gov.uk/dataset/weekly_fuel_prices')
    '/dataset/weekly_fuel_prices'
    '''
    # Deliberately leaving a /
    url = url.replace('http:/', '')
    return '/' + '/'.join(url.split('/')[2:])
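Since `download()` prefixes GA page paths with `'http:/'` and the filtered page paths begin with the account hostname, `_normalize_url` reduces either form to a bare site-relative path. A standalone sketch (the URLs are illustrative):

```python
def normalize_url(url):
    # Drop the scheme, then discard the leading empty segment and the
    # hostname segment, keeping a path that starts with '/'.
    url = url.replace('http:/', '')
    return '/' + '/'.join(url.split('/')[2:])

normalize_url('http://data.gov.uk/dataset/weekly_fuel_prices')
# -> '/dataset/weekly_fuel_prices'
normalize_url('http:/' + '/data.gov.uk/publisher/environment-agency')
# -> '/publisher/environment-agency'
```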
def _get_department_id_of_url(url):
    # e.g. /dataset/fuel_prices
    # e.g. /dataset/fuel_prices/resource/e63380d4
    dataset_match = re.match('/dataset/([^/]+)(/.*)?', url)
    if dataset_match:
        dataset_ref = dataset_match.groups()[0]
        dataset = model.Package.get(dataset_ref)
        if dataset:
            publisher_groups = dataset.get_groups('publisher')
            if publisher_groups:
                return publisher_groups[0].name
    else:
        publisher_match = re.match('/publisher/([^/]+)(/.*)?', url)
        if publisher_match:
            return publisher_match.groups()[0]
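# The /publisher/ branch above can be exercised without a CKAN database; this
# hedged sketch reproduces only that regex (the /dataset/ branch needs
# model.Package, so it is not reproduced here):

```python
import re


def publisher_from_url_example(url):
    # Mirrors the '/publisher/([^/]+)(/.*)?' match used above.
    m = re.match('/publisher/([^/]+)(/.*)?', url)
    return m.group(1) if m else None


# publisher_from_url_example('/publisher/cabinet-office/some-page')
# -> 'cabinet-office'
```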
def update_sitewide_stats(period_name, stat_name, data):
    for k, v in data.iteritems():
        item = model.Session.query(GA_Stat).\
            filter(GA_Stat.period_name==period_name).\
            filter(GA_Stat.key==k).\
            filter(GA_Stat.stat_name==stat_name).first()
        if item:
            item.period_name = period_name
            item.key = k
            item.value = v
            model.Session.add(item)
        else:
            # create the row
            values = {'id': make_uuid(),
                      'period_name': period_name,
                      'key': k,
                      'value': v,
                      'stat_name': stat_name
                      }
            model.Session.add(GA_Stat(**values))
        model.Session.commit()
def pre_update_url_stats(period_name):
    model.Session.query(GA_Url).\
        filter(GA_Url.period_name==period_name).delete()
    model.Session.query(GA_Url).\
        filter(GA_Url.period_name=='All').delete()
def update_url_stats(period_name, period_complete_day, url_data):
    '''
    Given a list of urls and number of hits for each during a given period,
    stores them in GA_Url under the period and recalculates the totals for
    the 'All' period.
    '''
    for url, views, visitors in url_data:
        url = _normalize_url(url)
        department_id = _get_department_id_of_url(url)

        package = None
        if url.startswith('/dataset/'):
            package = url[len('/dataset/'):]

        # Existing rows for this period (and the stale 'All' rows) were
        # deleted by pre_update_url_stats, so just insert the new row.
        values = {'id': make_uuid(),
                  'period_name': period_name,
                  'period_complete_day': period_complete_day,
                  'url': url,
                  'pageviews': views,
                  'visitors': visitors,
                  'department_id': department_id,
                  'package_id': package
                  }
        model.Session.add(GA_Url(**values))
        model.Session.commit()

        if package:
            # Recalculate the 'All' total for this url from the
            # individual period rows now in the table.
            entries = model.Session.query(GA_Url).\
                filter(GA_Url.period_name!='All').\
                filter(GA_Url.url==url).all()
            values = {'id': make_uuid(),
                      'period_name': 'All',
                      'period_complete_day': 0,
                      'url': url,
                      'pageviews': sum([int(e.pageviews) for e in entries]),
                      'visitors': sum([int(e.visitors) for e in entries]),
                      'department_id': department_id,
                      'package_id': package
                      }
            model.Session.add(GA_Url(**values))
            model.Session.commit()
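# The 'All' recalculation above is just a sum over the per-period rows. This
# hedged sketch shows the arithmetic with plain tuples standing in for GA_Url
# rows (only period_name, pageviews, visitors matter here):

```python
def all_period_totals_example(rows):
    # rows: (period_name, pageviews, visitors) tuples; pageviews/visitors
    # are stored as strings in GA_Url, hence the int() casts.
    periods = [r for r in rows if r[0] != 'All']
    pageviews = sum(int(r[1]) for r in periods)
    visitors = sum(int(r[2]) for r in periods)
    return pageviews, visitors


# all_period_totals_example([('2012-10', '10', '5'),
#                            ('2012-11', '7', '3'),
#                            ('All', '999', '999')])
# -> (17, 8)
```

Any pre-existing 'All' row is excluded from the sum, which is why pre_update_url_stats deletes the stale 'All' rows first.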
def update_social(period_name, data):
    # Clean up first.
    model.Session.query(GA_ReferralStat).\
        filter(GA_ReferralStat.period_name==period_name).delete()

    # NB: use a distinct name for the per-url entries so the `data`
    # argument is not shadowed inside the loop.
    for url, entries in data.iteritems():
        for source, count in entries:
            item = model.Session.query(GA_ReferralStat).\
                filter(GA_ReferralStat.period_name==period_name).\
                filter(GA_ReferralStat.source==source).\
                filter(GA_ReferralStat.url==url).first()
            if item:
                item.count = item.count + count
                model.Session.add(item)
            else:
                # create the row
                values = {'id': make_uuid(),
                          'period_name': period_name,
                          'source': source,
                          'url': url,
                          'count': count,
                          }
                model.Session.add(GA_ReferralStat(**values))
            model.Session.commit()
def update_publisher_stats(period_name):
    """
    Updates the publisher stats from the data retrieved for /dataset/*
    and /publisher/*. Runs against each publisher and generates the
    totals for the entire tree beneath it.
    """
    toplevel = get_top_level()
    publishers = model.Session.query(model.Group).\
        filter(model.Group.type=='publisher').\
        filter(model.Group.state=='active').all()
    for publisher in publishers:
        views, visitors, subpub = update_publisher(period_name, publisher, publisher.name)
        parent, parents = '', publisher.get_groups('publisher')
        if parents:
            parent = parents[0].name

        item = model.Session.query(GA_Publisher).\
            filter(GA_Publisher.period_name==period_name).\
            filter(GA_Publisher.publisher_name==publisher.name).first()
        if item:
            item.views = views
            item.visitors = visitors
            item.publisher_name = publisher.name
            item.toplevel = publisher in toplevel
            item.subpublishercount = subpub
            item.parent = parent
            model.Session.add(item)
        else:
            # create the row
            values = {'id': make_uuid(),
                      'period_name': period_name,
                      'publisher_name': publisher.name,
                      'views': views,
                      'visitors': visitors,
                      'toplevel': publisher in toplevel,
                      'subpublishercount': subpub,
                      'parent': parent
                      }
            model.Session.add(GA_Publisher(**values))
        model.Session.commit()
def update_publisher(period_name, pub, part=''):
    views, visitors, subpub = 0, 0, 0
    for publisher in go_down_tree(pub):
        subpub = subpub + 1
        items = model.Session.query(GA_Url).\
            filter(GA_Url.period_name==period_name).\
            filter(GA_Url.department_id==publisher.name).all()
        for item in items:
            views = views + int(item.pageviews)
            visitors = visitors + int(item.visitors)
    # go_down_tree yields the publisher itself, so subtract it from
    # the sub-publisher count
    return views, visitors, (subpub - 1)
def get_top_level():
    '''Returns the top level publishers.'''
    # NB: the join condition must use sqlalchemy.and_; Python's `and`
    # would silently reduce the condition to its last clause.
    from sqlalchemy import and_
    return model.Session.query(model.Group).\
        outerjoin(model.Member,
                  and_(model.Member.table_id == model.Group.id,
                       model.Member.table_name == 'group',
                       model.Member.state == 'active')).\
        filter(model.Member.id==None).\
        filter(model.Group.type=='publisher').\
        order_by(model.Group.name).all()
def get_children(publisher):
    '''Finds child publishers for the given publisher (object). (Not recursive)'''
    from ckan.model.group import HIERARCHY_CTE
    return model.Session.query(model.Group).\
        from_statement(HIERARCHY_CTE).params(id=publisher.id, type='publisher').\
        all()
def go_down_tree(publisher):
    '''Provided with a publisher object, it walks down the hierarchy and
    yields each publisher, including the one you supply.'''
    yield publisher
    for child in get_children(publisher):
        for grandchild in go_down_tree(child):
            yield grandchild
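# The recursive-generator walk above can be demonstrated without a live CKAN
# database; this hedged sketch uses a toy node class in place of model.Group
# and get_children:

```python
class ToyPublisher(object):
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []


def toy_go_down_tree(publisher):
    # Same shape as go_down_tree: yield the node, then recurse into children.
    yield publisher
    for child in publisher.children:
        for grandchild in toy_go_down_tree(child):
            yield grandchild


# For root -> [a -> [b], c], the walk yields root, a, b, c (depth-first,
# parent before children).
root = ToyPublisher('root', [ToyPublisher('a', [ToyPublisher('b')]),
                             ToyPublisher('c')])
names = [p.name for p in toy_go_down_tree(root)]
```

This depth-first, parent-first ordering is why update_publisher counts `subpub - 1`: the starting publisher itself is always the first node yielded.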
def delete(period_name):
    '''
    Deletes table data for the specified period, or specify 'all'
    for all periods.
    '''
    for object_type in (GA_Url, GA_Stat, GA_Publisher, GA_ReferralStat):
        q = model.Session.query(object_type)
        if period_name != 'all':
            q = q.filter_by(period_name=period_name)
        q.delete()
    model.Session.commit()
import logging
import operator

import ckan.lib.base as base
import ckan.model as model
from ckan.logic import get_action

from ckanext.ga_report.ga_model import GA_Url, GA_Publisher
from ckanext.ga_report.controller import _get_publishers

_log = logging.getLogger(__name__)


def popular_datasets(count=10):