Change the entry for the current month to include the date that reports are available to.

- Requires a migration:
    alter table ga_stat add column period_complete_day text;
- Makes sure that GA_Stat entries have the period_complete_day set
- Cleans up the template and moves the drop-down to the util template
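
To apply the migration by hand, something like the following should work (the
database name "ckan_default" is only an example; use your own CKAN database):

    psql ckan_default -c "alter table ga_stat add column period_complete_day text;"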

--- a/.gitignore
+++ b/.gitignore
@@ -15,6 +15,10 @@
 develop-eggs
 .installed.cfg
 
+# Private info
+credentials.json
+token.dat
+
 # Installer logs
 pip-log.txt
 

--- a/README.rst
+++ b/README.rst
@@ -26,16 +26,17 @@
 1. Activate you CKAN python environment and install this extension's software::
 
     $ pyenv/bin/activate
-    $ pip install -e  git+https://github.com/okfn/ckanext-ga-report.git#egg=ckanext-ga-report
+    $ pip install -e  git+https://github.com/datagovuk/ckanext-ga-report.git#egg=ckanext-ga-report
 
 2. Ensure you development.ini (or similar) contains the info about your Google Analytics account and configuration::
 
       googleanalytics.id = UA-1010101-1
-      googleanalytics.username = googleaccount@gmail.com
-      googleanalytics.password = googlepassword
+      googleanalytics.account = Account name (e.g. data.gov.uk, see top level item at https://www.google.com/analytics)
+      googleanalytics.token.filepath = ~/pyenv/token.dat
       ga-report.period = monthly
+      ga-report.bounce_url = /
 
-   Note that your password will be readable by system administrators on your server. Rather than use sensitive account details, it is suggested you give access to the GA account to a new Google account that you create just for this purpose.
+   The ``ga-report.bounce_url`` setting specifies the path for which the bounce rate is recorded; typically this is ``/`` (the home page).
 
 3. Set up this extension's database tables using a paster command. (Ensure your CKAN pyenv is still activated, run the command from ``src/ckanext-ga-report``, alter the ``--config`` option to point to your site config file)::
 
@@ -45,13 +46,63 @@
 
     ckan.plugins = ga-report
 
+Troubleshooting
+---------------
+
+* ``(ProgrammingError) relation "ga_url" does not exist``
+  This means that the ``paster initdb`` step has not been run successfully. Refer to the installation instructions for this extension.
+
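+  In that case, re-run the database initialisation (assuming the same config
+  path used during installation; adjust it to point at your own site config)::
+
+      $ paster initdb --config=../ckan/development.ini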
+
+Authorization
+--------------
+
+Before you can access the data, you need to set up OAuth credentials by following these `instructions <https://developers.google.com/analytics/resources/tutorials/hello-analytics-api>`_. The outcome is a file called credentials.json, which should look like credentials.json.template with the relevant fields completed. The steps are repeated below for convenience:
+
+1. Visit the `Google APIs Console <https://code.google.com/apis/console>`_
+
+2. Sign in and create a project or use an existing project.
+
+3. In the `Services pane <https://code.google.com/apis/console#:services>`_, activate the Analytics API for your project. If prompted, read and accept the terms of service.
+
+4. Go to the `API Access pane <https://code.google.com/apis/console/#:access>`_
+
+5. Click Create an OAuth 2.0 client ID....
+
+6. Fill out the Branding Information fields and click Next.
+
+7. In Client ID Settings, set Application type to Installed application.
+
+8. Click Create client ID.
+
+9. The details you need below are the Client ID, Client secret, and Redirect URIs.
+
+
+Once you have set up your credentials.json file, you can generate an OAuth token file
+using the following command. Your OAuth token will be stored in a file called token.dat
+once you have finished granting permission in the browser::
+
+    $ paster getauthtoken --config=../ckan/development.ini
+
+Now ensure you reference the correct path to your token.dat in your CKAN config file (e.g. development.ini)::
+
+    googleanalytics.token.filepath = ~/pyenv/token.dat
+
 
 Tutorial
 --------
 
-Download some GA data and store it in CKAN's db. (Ensure your CKAN pyenv is still activated, run the command from ``src/ckanext-ga-report``, alter the ``--config`` option to point to your site config file)::
+Download some GA data and store it in CKAN's database. (Ensure your CKAN pyenv is still activated, run the command from ``src/ckanext-ga-report``, and alter the ``--config`` option to point to your site config file.) The auth token file created in the previous step is located via the ``googleanalytics.token.filepath`` setting in that config::
 
     $ paster loadanalytics latest --config=../ckan/development.ini
+
+The time-period argument (``latest`` in the example above) specifies how much data you want to retrieve. It can be:
+
+* **all**         - data for all time (since 2010)
+
+* **latest**      - (default) just the 'latest' data
+
+* **YYYY-MM**     - just data for the specified month
+
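+For example, to load the data for October 2012 (adjust the ``--config`` path to
+point to your own site config)::
+
+    $ paster loadanalytics 2012-10 --config=../ckan/development.ini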
 
 
 Software Licence

--- a/ckanext/ga_report/command.py
+++ b/ckanext/ga_report/command.py
@@ -1,7 +1,13 @@
 import logging
+import datetime
+import os
+
+from pylons import config
 
 from ckan.lib.cli import CkanCommand
-# No other CKAN imports allowed until _load_config is run, or logging is disabled
+# No other CKAN imports allowed until _load_config is run,
+# or logging is disabled
+
 
 class InitDB(CkanCommand):
     """Initialise the extension's database tables
@@ -23,6 +29,34 @@
         ga_model.init_tables()
         log.info("DB tables are setup")
 
+
+class GetAuthToken(CkanCommand):
+    """ Get's the Google auth token
+
+    Usage: paster getauthtoken <credentials_file>
+
+    Where <credentials_file> is the file name containing the details
+    for the service (obtained from https://code.google.com/apis/console).
+    By default this is set to credentials.json
+    """
+    summary = __doc__.split('\n')[0]
+    usage = __doc__
+    max_args = 1
+    min_args = 0
+
+    def command(self):
+        """
+        In this case we don't want a valid service, but rather just to
+        force the user through the auth flow. We allow this to complete to
+        act as a form of verification instead of just getting the token and
+        assuming it is correct.
+        """
+        from ga_auth import init_service
+        init_service('token.dat',
+                      self.args[0] if self.args
+                                   else 'credentials.json')
+
+
 class LoadAnalytics(CkanCommand):
     """Get data from Google Analytics API and save it
     in the ga_model
@@ -32,27 +66,57 @@
     Where <time-period> is:
         all         - data for all time
         latest      - (default) just the 'latest' data
-        YYYY-MM-DD  - just data for all time periods going
-                      back to (and including) this date
+        YYYY-MM     - just data for the specific month
     """
     summary = __doc__.split('\n')[0]
     usage = __doc__
     max_args = 1
     min_args = 0
 
+    def __init__(self, name):
+        super(LoadAnalytics, self).__init__(name)
+        self.parser.add_option('-d', '--delete-first',
+                               action='store_true',
+                               default=False,
+                               dest='delete_first',
+                               help='Delete data for the period first')
+        self.parser.add_option('-s', '--skip_url_stats',
+                               action='store_true',
+                               default=False,
+                               dest='skip_url_stats',
+                               help='Skip the download of URL data - just do site-wide stats')
+
     def command(self):
         self._load_config()
 
         from download_analytics import DownloadAnalytics
-        downloader = DownloadAnalytics()
-        
+        from ga_auth import (init_service, get_profile_id)
+
+        ga_token_filepath = os.path.expanduser(config.get('googleanalytics.token.filepath', ''))
+        if not ga_token_filepath:
+            print 'ERROR: In the CKAN config you need to specify the filepath of the ' \
+                  'Google Analytics token file under key: googleanalytics.token.filepath'
+            return
+
+        try:
+            svc = init_service(ga_token_filepath, None)
+        except TypeError:
+            print ('Have you correctly run the getauthtoken task and '
+                   'specified the correct token file in the CKAN config under '
+                   '"googleanalytics.token.filepath"?')
+            return
+
+        downloader = DownloadAnalytics(svc, profile_id=get_profile_id(svc),
+                                       delete_first=self.options.delete_first,
+                                       skip_url_stats=self.options.skip_url_stats)
+
         time_period = self.args[0] if self.args else 'latest'
         if time_period == 'all':
             downloader.all_()
         elif time_period == 'latest':
             downloader.latest()
         else:
-            since_date = datetime.datetime.strptime(time_period, '%Y-%m-%d')
-            downloader.since_date(since_date)
+            # The month to use
+            for_date = datetime.datetime.strptime(time_period, '%Y-%m')
+            downloader.specific_month(for_date)
 
-

--- a/ckanext/ga_report/controller.py
+++ b/ckanext/ga_report/controller.py
@@ -1,10 +1,362 @@
+import re
+import csv
+import sys
 import logging
-from ckan.lib.base import BaseController, c, render
-import report_model
+import operator
+import collections
+from ckan.lib.base import (BaseController, c, g, render, request, response, abort)
+
+import sqlalchemy
+from sqlalchemy import func, cast, Integer
+import ckan.model as model
+from ga_model import GA_Url, GA_Stat, GA_ReferralStat, GA_Publisher
 
 log = logging.getLogger('ckanext.ga-report')
 
+
+def _get_month_name(strdate):
+    import calendar
+    from time import strptime
+    d = strptime(strdate, '%Y-%m')
+    return '%s %s' % (calendar.month_name[d.tm_mon], d.tm_year)
+
+
+def _month_details(cls):
+    '''
+    Returns a list of all the periods for which we have data. Unfortunately it
+    knows too much about the type of the cls being passed, as GA_Url has a
+    more complex query.
+
+    This may need extending if we add a period_name to the stats
+    '''
+    months = []
+    day = None
+
+    vals = model.Session.query(cls.period_name,cls.period_complete_day)\
+        .filter(cls.period_name!='All').distinct(cls.period_name)\
+        .order_by("period_name desc").all()
+    if vals and vals[0][1]:
+        day = int(vals[0][1])
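+        # Add an ordinal suffix to the day, e.g. 1 -> '1st', 2 -> '2nd', 11-13 -> '11th'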
+        ordinal = 'th' if 11 <= day <= 13 \
+            else {1:'st',2:'nd',3:'rd'}.get(day % 10, 'th')
+        day = "{day}{ordinal}".format(day=day, ordinal=ordinal)
+
+    for m in vals:
+        months.append( (m[0], _get_month_name(m[0])))
+
+    return months, day
+
+
 class GaReport(BaseController):
+
+    def csv(self, month):
+        import csv
+
+        q = model.Session.query(GA_Stat)
+        if month != 'all':
+            q = q.filter(GA_Stat.period_name==month)
+        entries = q.order_by('GA_Stat.period_name, GA_Stat.stat_name, GA_Stat.key').all()
+
+        response.headers['Content-Type'] = "text/csv; charset=utf-8"
+        response.headers['Content-Disposition'] = str('attachment; filename=stats_%s.csv' % (month,))
+
+        writer = csv.writer(response)
+        writer.writerow(["Period", "Statistic", "Key", "Value"])
+
+        for entry in entries:
+            writer.writerow([entry.period_name.encode('utf-8'),
+                             entry.stat_name.encode('utf-8'),
+                             entry.key.encode('utf-8'),
+                             entry.value.encode('utf-8')])
+
     def index(self):
-        return render('index.html')
-
+
+        # Get the month details by fetching distinct values and determining the
+        # month names from the values.
+        c.months, c.day = _month_details(GA_Stat)
+
+        # Work out which month to show, based on the 'month' query parameter
+        c.month_desc = 'all months'
+        c.month = request.params.get('month', '')
+        if c.month:
+            c.month_desc = ''.join([m[1] for m in c.months if m[0]==c.month])
+
+        q = model.Session.query(GA_Stat).\
+            filter(GA_Stat.stat_name=='Totals')
+        if c.month:
+            q = q.filter(GA_Stat.period_name==c.month)
+        entries = q.order_by('ga_stat.key').all()
+
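+        # Tidy the raw stat values for display: round floats, show
+        # 'Average time on site' as HH:MM:SS, add '%' to rates and cast totals to int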
+        def clean_key(key, val):
+            if key in ['Average time on site', 'Pages per visit', 'New visits', 'Bounce rate (home page)']:
+                val =  "%.2f" % round(float(val), 2)
+                if key == 'Average time on site':
+                    mins, secs = divmod(float(val), 60)
+                    hours, mins = divmod(mins, 60)
+                    val = '%02d:%02d:%02d (%s seconds) ' % (hours, mins, secs, val)
+                if key in ['New visits','Bounce rate (home page)']:
+                    val = "%s%%" % val
+            if key in ['Total page views', 'Total visits']:
+                val = int(val)
+
+            return key, val
+
+        c.global_totals = []
+        if c.month:
+            for e in entries:
+                key, val = clean_key(e.key, e.value)
+                c.global_totals.append((key, val))
+        else:
+            d = collections.defaultdict(list)
+            for e in entries:
+                d[e.key].append(float(e.value))
+            for k, v in d.iteritems():
+                if k in ['Total page views', 'Total visits']:
+                    v = sum(v)
+                else:
+                    v = float(sum(v))/len(v)
+                key, val = clean_key(k,v)
+
+                c.global_totals.append((key, val))
+            c.global_totals = sorted(c.global_totals, key=operator.itemgetter(0))
+
+        keys = {
+            'Browser versions': 'browser_versions',
+            'Browsers': 'browsers',
+            'Operating Systems versions': 'os_versions',
+            'Operating Systems': 'os',
+            'Social sources': 'social_networks',
+            'Languages': 'languages',
+            'Country': 'country'
+        }
+
+        def shorten_name(name, length=60):
+            return (name[:length] + '..') if len(name) > length else name
+
+        def fill_out_url(url):
+            import urlparse
+            return urlparse.urljoin(g.site_url, url)
+
+        c.social_referrer_totals, c.social_referrers = [], []
+        q = model.Session.query(GA_ReferralStat)
+        q = q.filter(GA_ReferralStat.period_name==c.month) if c.month else q
+        q = q.order_by('ga_referrer.count::int desc')
+        for entry in q.all():
+            c.social_referrers.append((shorten_name(entry.url), fill_out_url(entry.url),
+                                       entry.source,entry.count))
+
+        q = model.Session.query(GA_ReferralStat.url,
+                                func.sum(GA_ReferralStat.count).label('count'))
+        q = q.filter(GA_ReferralStat.period_name==c.month) if c.month else q
+        q = q.order_by('count desc').group_by(GA_ReferralStat.url)
+        for entry in q.all():
+            c.social_referrer_totals.append((shorten_name(entry[0]), fill_out_url(entry[0]),'',
+                                            entry[1]))
+
+        for k, v in keys.iteritems():
+            q = model.Session.query(GA_Stat).\
+                filter(GA_Stat.stat_name==k)
+            if c.month:
+                entries = []
+                q = q.filter(GA_Stat.period_name==c.month).\
+                          order_by('ga_stat.value::int desc')
+
+            d = collections.defaultdict(int)
+            for e in q.all():
+                d[e.key] += int(e.value)
+            entries = []
+            for key, val in d.iteritems():
+                entries.append((key,val,))
+            entries = sorted(entries, key=operator.itemgetter(1), reverse=True)
+
+            # Get the total for each set of values and then set the value as
+            # a percentage of the total
+            if k == 'Social sources':
+                total = sum([x for n,x in c.global_totals if n == 'Total visits'])
+            else:
+                total = sum([num for _,num in entries])
+            setattr(c, v, [(k,_percent(v,total)) for k,v in entries ])
+
+        return render('ga_report/site/index.html')
+
+
+class GaDatasetReport(BaseController):
+    """
+    Displays the pageview and visit count for datasets
+    with options to filter by publisher and time period.
+    """
+    def publisher_csv(self, month):
+        '''
+        Returns a CSV of each publisher with the total number of dataset
+        views & visits.
+        '''
+        c.month = month if not month == 'all' else ''
+        response.headers['Content-Type'] = "text/csv; charset=utf-8"
+        response.headers['Content-Disposition'] = str('attachment; filename=publishers_%s.csv' % (month,))
+
+        writer = csv.writer(response)
+        writer.writerow(["Publisher Title", "Publisher Name", "Views", "Visits", "Period Name"])
+
+        for publisher,view,visit in _get_top_publishers(None):
+            writer.writerow([publisher.title.encode('utf-8'),
+                             publisher.name.encode('utf-8'),
+                             view,
+                             visit,
+                             month])
+
+    def dataset_csv(self, id='all', month='all'):
+        '''
+        Returns a CSV with the number of views & visits for each dataset.
+
+        :param id: A publisher ID, or 'all' for all publishers
+        :param month: The time period, or 'all'
+        '''
+        c.month = month if not month == 'all' else ''
+        if id != 'all':
+            c.publisher = model.Group.get(id)
+            if not c.publisher:
+                abort(404, 'A publisher with that name could not be found')
+
+        packages = self._get_packages(c.publisher)
+        response.headers['Content-Type'] = "text/csv; charset=utf-8"
+        response.headers['Content-Disposition'] = \
+            str('attachment; filename=datasets_%s_%s.csv' % (c.publisher_name, month,))
+
+        writer = csv.writer(response)
+        writer.writerow(["Dataset Title", "Dataset Name", "Views", "Visits", "Period Name"])
+
+        for package,view,visit in packages:
+            writer.writerow([package.title.encode('utf-8'),
+                             package.name.encode('utf-8'),
+                             view,
+                             visit,
+                             month])
+
+    def publishers(self):
+        '''A list of publishers and the number of views/visits for each'''
+
+        # Get the month details by fetching distinct values and determining the
+        # month names from the values.
+        c.months, c.day = _month_details(GA_Url)
+
+        # Work out which month to show, based on the 'month' query parameter
+        c.month = request.params.get('month', '')
+        c.month_desc = 'all months'
+        if c.month:
+            c.month_desc = ''.join([m[1] for m in c.months if m[0]==c.month])
+
+        c.top_publishers = _get_top_publishers()
+        return render('ga_report/publisher/index.html')
+
+    def _get_packages(self, publisher=None, count=-1):
+        '''Returns the datasets in order of views'''
+        if count == -1:
+            count = sys.maxint
+
+        month = c.month or 'All'
+
+        q = model.Session.query(GA_Url,model.Package)\
+            .filter(model.Package.name==GA_Url.package_id)\
+            .filter(GA_Url.url.like('/dataset/%'))
+        if publisher:
+            q = q.filter(GA_Url.department_id==publisher.name)
+        q = q.filter(GA_Url.period_name==month)
+        q = q.order_by('ga_url.pageviews::int desc')
+        top_packages = []
+        for entry,package in q.limit(count):
+            if package:
+                top_packages.append((package, entry.pageviews, entry.visits))
+            else:
+                log.warning('Could not find the package associated with this GA_Url entry')
+
+        return top_packages
+
+    def read(self):
+        '''
+        Lists the most popular datasets across all publishers
+        '''
+        return self.read_publisher(None)
+
+    def read_publisher(self, id):
+        '''
+        Lists the most popular datasets for a publisher (or across all publishers)
+        '''
+        count = 20
+
+        c.publishers = _get_publishers()
+
+        id = request.params.get('publisher', id)
+        if id and id != 'all':
+            c.publisher = model.Group.get(id)
+            if not c.publisher:
+                abort(404, 'A publisher with that name could not be found')
+            c.publisher_name = c.publisher.name
+        c.top_packages = [] # package, dataset_views in c.top_packages
+
+        # Get the month details by fetching distinct values and determining the
+        # month names from the values.
+        c.months, c.day = _month_details(GA_Url)
+
+        # Work out which month to show, based on the 'month' query parameter
+        c.month = request.params.get('month', '')
+        if not c.month:
+            c.month_desc = 'all months'
+        else:
+            c.month_desc = ''.join([m[1] for m in c.months if m[0]==c.month])
+
+        month = c.month or 'All'
+        c.publisher_page_views = 0
+        q = model.Session.query(GA_Url).\
+            filter(GA_Url.url=='/publisher/%s' % c.publisher_name)
+        entry = q.filter(GA_Url.period_name==month).first()
+        c.publisher_page_views = entry.pageviews if entry else 0
+
+        c.top_packages = self._get_packages(c.publisher, count)
+
+        return render('ga_report/publisher/read.html')
+
+def _get_top_publishers(limit=20):
+    '''
+    Returns a list of the top 20 publishers by dataset views.
+    (The number to show can be varied with 'limit')
+    '''
+    month = c.month or 'All'
+    connection = model.Session.connection()
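+    # 'month' is bound to the %s placeholder by the driver; '%%' is the
+    # escaped literal '%' in the LIKE pattern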
+    q = """
+        select department_id, sum(pageviews::int) views, sum(visits::int) visits
+        from ga_url
+        where department_id <> ''
+          and package_id <> ''
+          and url like '/dataset/%%'
+          and period_name=%s
+        group by department_id order by views desc
+        """
+    if limit:
+        q = q + " limit %s;" % (limit)
+
+    top_publishers = []
+    res = connection.execute(q, month)
+    for row in res:
+        g = model.Group.get(row[0])
+        if g:
+            top_publishers.append((g, row[1], row[2]))
+    return top_publishers
+
+
+def _get_publishers():
+    '''
+    Returns a list of all publishers. Each item is a tuple:
+      (name, title)
+    '''
+    publishers = []
+    for pub in model.Session.query(model.Group).\
+               filter(model.Group.type=='publisher').\
+               filter(model.Group.state=='active').\
+               order_by(model.Group.name):
+        publishers.append((pub.name, pub.title))
+    return publishers
+
+def _percent(num, total):
+    p = 100 * float(num)/float(total)
+    return "%.2f%%" % round(p, 2)
+

--- a/ckanext/ga_report/download_analytics.py
+++ b/ckanext/ga_report/download_analytics.py
@@ -1,23 +1,42 @@
+import os
 import logging
 import datetime
-
+import collections
 from pylons import config
-
+from ga_model import _normalize_url
 import ga_model
-from ga_client import GA
+
+#from ga_client import GA
 
 log = logging.getLogger('ckanext.ga-report')
 
 FORMAT_MONTH = '%Y-%m'
+MIN_VIEWS = 50
+MIN_VISITS = 20
 
 class DownloadAnalytics(object):
     '''Downloads and stores analytics info'''
-    def __init__(self):
+
+    def __init__(self, service=None, profile_id=None, delete_first=False,
+                 skip_url_stats=False):
         self.period = config['ga-report.period']
-    
-    def all_(self):
-        pass
-    
+        self.service = service
+        self.profile_id = profile_id
+        self.delete_first = delete_first
+        self.skip_url_stats = skip_url_stats
+
+    def specific_month(self, date):
+        import calendar
+
+        first_of_this_month = datetime.datetime(date.year, date.month, 1)
+        _, last_day_of_month = calendar.monthrange(int(date.year), int(date.month))
+        last_of_this_month =  datetime.datetime(date.year, date.month, last_day_of_month)
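+        # A single period tuple: (period_name, period_complete_day, start_date, end_date)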
+        periods = ((date.strftime(FORMAT_MONTH),
+                    last_day_of_month,
+                    first_of_this_month, last_of_this_month),)
+        self.download_and_store(periods)
+
+
     def latest(self):
         if self.period == 'monthly':
             # from first of this month to today
@@ -31,13 +50,13 @@
         self.download_and_store(periods)
 
 
-    def since_date(self, since_date):
+    def for_date(self, for_date):
-        assert isinstance(since_date, datetime.datetime)
+        assert isinstance(for_date, datetime.datetime)
         periods = [] # (period_name, period_complete_day, start_date, end_date)
         if self.period == 'monthly':
             first_of_the_months_until_now = []
-            year = since_date.year
-            month = since_date.month
+            year = for_date.year
+            month = for_date.month
             now = datetime.datetime.now()
             first_of_this_month = datetime.datetime(now.year, now.month, 1)
             while True:
@@ -49,8 +68,8 @@
                     break
                 elif first_of_the_month < first_of_this_month:
                     in_the_next_month = first_of_the_month + datetime.timedelta(40)
-                    last_of_the_month == datetime.datetime(in_the_next_month.year,
-                                                           in_the_next_month.month, a)\
+                    last_of_the_month = datetime.datetime(in_the_next_month.year,
+                                                           in_the_next_month.month, 1)\
                                                            - datetime.timedelta(1)
                     periods.append((now.strftime(FORMAT_MONTH), 0,
                                     first_of_the_month, last_of_the_month))
@@ -71,46 +90,331 @@
             return period_name + ' (up to %ith)' % period_complete_day
         else:
             return period_name
-        
+
 
     def download_and_store(self, periods):
         for period_name, period_complete_day, start_date, end_date in periods:
-            log.info('Downloading Analytics for period "%s" (%s - %s)',
+            log.info('Period "%s" (%s - %s)',
                      self.get_full_period_name(period_name, period_complete_day),
-                     start_date.strftime('%Y %m %d'),
-                     end_date.strftime('%Y %m %d'))
-            data = self.download(start_date, end_date)
-            log.info('Storing Analytics for period "%s"',
-                     self.get_full_period_name(period_name, period_complete_day))
-            self.store(period_name, period_complete_day, data)
-
-    @classmethod
-    def download(cls, start_date, end_date):
+                     start_date.strftime('%Y-%m-%d'),
+                     end_date.strftime('%Y-%m-%d'))
+
+            if self.delete_first:
+                log.info('Deleting existing Analytics for this period "%s"',
+                         period_name)
+                ga_model.delete(period_name)
+
+            if not self.skip_url_stats:
+                # Clean out old url data before storing the new
+                ga_model.pre_update_url_stats(period_name)
+
+                accountName = config.get('googleanalytics.account')
+
+                log.info('Downloading analytics for dataset views')
+                data = self.download(start_date, end_date, '~/%s/dataset/[a-z0-9-_]+' % accountName)
+
+                log.info('Storing dataset views (%i rows)', len(data.get('url')))
+                self.store(period_name, period_complete_day, data, )
+
+                log.info('Downloading analytics for publisher views')
+                data = self.download(start_date, end_date, '~/%s/publisher/[a-z0-9-_]+' % accountName)
+
+                log.info('Storing publisher views (%i rows)', len(data.get('url')))
+                self.store(period_name, period_complete_day, data,)
+
+                log.info('Aggregating datasets by publisher')
+                ga_model.update_publisher_stats(period_name) # about 30 seconds.
+
+            log.info('Downloading and storing analytics for site-wide stats')
+            self.sitewide_stats( period_name, period_complete_day )
+
+            log.info('Downloading and storing analytics for social networks')
+            self.update_social_info(period_name, start_date, end_date)
+
+
+    def update_social_info(self, period_name, start_date, end_date):
+        start_date = start_date.strftime('%Y-%m-%d')
+        end_date = end_date.strftime('%Y-%m-%d')
+        query = 'ga:hasSocialSourceReferral=~Yes$'
+        metrics = 'ga:entrances'
+        sort = '-ga:entrances'
+
+        # Supported query params at
+        # https://developers.google.com/analytics/devguides/reporting/core/v3/reference
+        results = self.service.data().ga().get(
+                                 ids='ga:' + self.profile_id,
+                                 filters=query,
+                                 start_date=start_date,
+                                 metrics=metrics,
+                                 sort=sort,
+                                 dimensions="ga:landingPagePath,ga:socialNetwork",
+                                 max_results=10000,
+                                 end_date=end_date).execute()
+        data = collections.defaultdict(list)
+        rows = results.get('rows',[])
+        for row in rows:
+            data[_normalize_url(row[0])].append( (row[1], int(row[2]),) )
+        ga_model.update_social(period_name, data)
+
+
+    def download(self, start_date, end_date, path=None):
         '''Get data from GA for a given time period'''
         start_date = start_date.strftime('%Y-%m-%d')
         end_date = end_date.strftime('%Y-%m-%d')
-        # url
-        #query = 'ga:pagePath=~^%s,ga:pagePath=~^%s' % \
-        #        (PACKAGE_URL, self.resource_url_tag)
-        query = 'ga:pagePath=~^/dataset/'
-        metrics = 'ga:uniquePageviews'
-        sort = '-ga:uniquePageviews'
-        for entry in GA.ga_query(query_filter=query,
-                                 from_date=start_date,
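+        # 'path' is passed in starting with '~', the GA regex-match operator,
+        # so this builds a filter such as 'ga:pagePath=~/<site>/dataset/...$'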
+        query = 'ga:pagePath=%s$' % path
+        metrics = 'ga:pageviews, ga:visits'
+        sort = '-ga:pageviews'
+
+        # Supported query params at
+        # https://developers.google.com/analytics/devguides/reporting/core/v3/reference
+        results = self.service.data().ga().get(
+                                 ids='ga:' + self.profile_id,
+                                 filters=query,
+                                 start_date=start_date,
                                  metrics=metrics,
                                  sort=sort,
-                                 to_date=end_date):
-            print entry
-            import pdb; pdb.set_trace()
-            for dim in entry.dimension:
-                if dim.name == "ga:pagePath":
-                    package = dim.value
-                    count = entry.get_metric(
-                        'ga:uniquePageviews').value or 0
-                    packages[package] = int(count)
-        return packages
+                                 dimensions="ga:pagePath",
+                                 max_results=10000,
+                                 end_date=end_date).execute()
+
+        packages = []
+        for entry in results.get('rows'):
+            (loc,pageviews,visits) = entry
+            url = _normalize_url('http:/' + loc) # strips off domain e.g. www.data.gov.uk or data.gov.uk