Loading data from different sources using Backends
This set of examples shows how you can load data from different
sources such as Google Docs or the DataHub using Recline
Backends connect Recline Datasets to data from a specific ‘Backend’ data source.
They provide methods for loading and saving Datasets and individual
Documents, as well as for bulk loading via a query API and performing bulk transforms
on the backend.
Backends come in 2 flavours:
Loader backends – only implement the fetch method. The data is then cached in a
Memory.Store on the Dataset and interacted with there. This is best for
sources which just allow you to load data, or where you want to load the data
once and work with it locally.
Store backends – these support fetch, query and, if write-enabled, save.
These are suited to cases where the source datastore contains a lot of data
(infeasible to load locally – for example, a million rows) or where the
backend has, for example, query capabilities you want to take advantage of.
Instantiation and Use
You can use a backend directly e.g.
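For instance, a minimal sketch of direct use (assuming the GDocs backend script has been loaded; the spreadsheet URL is a placeholder):

```javascript
// Direct use of a backend module: call its fetch method yourself.
var dataset = new recline.Model.Dataset({
  url: 'https://docs.google.com/spreadsheet/ccc?key=your-spreadsheet-key'
});
recline.Backend.GDocs.fetch(dataset).done(function(result) {
  // result is a plain object containing the retrieved fields and records
  console.log(result.fields);
  console.log(result.records);
});
```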
But more usually the backend will be created or loaded for you by Recline and
all you need is provide the identifier for that Backend e.g.
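A sketch of this more common approach (again, the spreadsheet URL is a placeholder):

```javascript
// Recline looks the backend up by its identifier ('gdocs' here)
// and handles fetching for you.
var dataset = new recline.Model.Dataset({
  url: 'https://docs.google.com/spreadsheet/ccc?key=your-spreadsheet-key',
  backend: 'gdocs'
});
dataset.fetch();
```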
How do you know the backend identifier for a given Backend? It's just the name
of the 'class' in the recline.Backend module (but case-insensitive). E.g.
recline.Backend.ElasticSearch can be identified as 'ElasticSearch' or
'elasticsearch'.
Is the Backend you’d like to see not available? It’s easy to write your own
– see the Backend reference docs for details
of the required API.
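As a rough illustration, a loader backend can be as simple as an object exposing a fetch method that returns a promise of field and record data. The names below are hypothetical – check the reference docs for the exact contract:

```javascript
// A hypothetical minimal loader backend (sketch only).
recline.Backend.MyBackend = {
  fetch: function(dataset) {
    var deferred = $.Deferred();
    // ... retrieve data for dataset.get('url') from your source ...
    deferred.resolve({
      fields: [{id: 'x'}, {id: 'y'}],   // column definitions
      records: [{x: 1, y: 2}],          // row data
      useMemoryStore: true              // cache results in a Memory.Store
    });
    return deferred.promise();
  }
};
```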
Preparing your app
This is as per the quickstart, but the set of files is
much more limited if you are just using a Backend.
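The exact file list depends on your Recline checkout, but it will look something like the following (the paths are illustrative):

```html
<!-- the usual dependencies -->
<script type="text/javascript" src="vendor/jquery/jquery.js"></script>
<script type="text/javascript" src="vendor/underscore/underscore.js"></script>
<script type="text/javascript" src="vendor/backbone/backbone.js"></script>
<!-- just the Dataset model plus the backend(s) you need -->
<script type="text/javascript" src="src/model.js"></script>
<script type="text/javascript" src="src/backend.memory.js"></script>
<script type="text/javascript" src="src/backend.gdocs.js"></script>
```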
Loading Data from Google Docs
We will be using the following Google Spreadsheet.
For Recline to be able to access a Google Spreadsheet it must have been
‘Published to the Web’ (enabled via File -> Publish to the Web menu).
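Once the spreadsheet is published, loading it looks something like this (the spreadsheet key is a placeholder):

```javascript
var dataset = new recline.Model.Dataset({
  url: 'https://docs.google.com/spreadsheet/ccc?key=your-spreadsheet-key',
  backend: 'gdocs'
});
dataset.fetch().done(function() {
  // the records are now cached locally in the Dataset's Memory.Store
  console.log(dataset.records.toJSON());
});
```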
Loading Data from ElasticSearch
Recline supports ElasticSearch as a full read/write/query backend via the
ElasticSearch.js library. See the library for examples.
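A sketch of connecting to an index (the host and index/type names are hypothetical):

```javascript
var dataset = new recline.Model.Dataset({
  // url points at an ElasticSearch index type endpoint
  url: 'http://localhost:9200/my-index/my-type',
  backend: 'elasticsearch'
});
dataset.fetch();
// queries run on the backend rather than against a local cache
dataset.query({q: 'some search text', size: 10});
```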
Loading data from CSV files
For loading data from CSV files there are 3 cases:
CSV is online but on the same domain, or on a domain supporting CORS (S3 and Google Storage support CORS!) – we can then load it using AJAX (no problems with the same-origin policy)
CSV is on local disk – if your browser supports HTML5 File API we can load the CSV file off disk
CSV is online but not on same domain – use DataProxy (see below)
In all cases we’ll need to have loaded the Recline CSV backend (for your own
app you’ll probably want this locally):
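For example (the path is illustrative – adjust it to wherever your copy of Recline lives):

```html
<script type="text/javascript" src="src/backend.csv.js"></script>
```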
Local online CSV file
Let’s start with the first case: loading a “local” online CSV file. We’ll be using this example file.
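A sketch of the load (the URL below is a placeholder for the example file):

```javascript
var dataset = new recline.Model.Dataset({
  url: 'data/sample.csv',   // a same-domain (or CORS-enabled) CSV file
  backend: 'csv'
});
dataset.fetch().done(function() {
  console.log(dataset.records.toJSON());
});
```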
CSV file on disk
This requires your browser to support the HTML5 file API. Suppose we have a file input like:
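For example (the id is arbitrary):

```html
<input type="file" id="file-input" />
```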
Then we can load the file into a Recline Dataset as follows:
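A sketch, assuming a file input with id 'file-input' as above:

```javascript
$('#file-input').change(function(e) {
  // an HTML5 File object from the file input
  var file = e.target.files[0];
  var dataset = new recline.Model.Dataset({
    file: file,
    backend: 'csv'
  });
  dataset.fetch().done(function() {
    console.log(dataset.records.toJSON());
  });
});
```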
Try it out!
Try it out by clicking on the file input above, selecting a CSV file and seeing what happens.
Loading data from CSV and Excel files online using DataProxy
The DataProxy is a web-service run by the Open Knowledge Foundation that converts CSV and Excel files to JSON. It has a convenient JSONP-able API, which means we can use it to load data from online CSV and Excel files into Recline Datasets.
Recline ships with a simple DataProxy “backend” that takes care of fetching data from the DataProxy source.
The main limitations of the DataProxy are that it can only handle Excel files up to a certain size (5MB) and that, since we must use JSONP to access it, error information can be limited.
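Usage mirrors the other backends – point the Dataset at the remote file and name the backend (the URL is hypothetical):

```javascript
var dataset = new recline.Model.Dataset({
  url: 'http://example.com/data/my-data.xls',
  backend: 'dataproxy'
});
dataset.fetch();
```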
Customizing the timeout
As we must use JSONP in this backend, an error from the DataProxy (e.g. a 500) won’t be picked up. To deal with this, and to prevent the case where the request never finishes, we have a timeout on the request, after which the Backend returns an error stating that the request timed out.
You can customize the length of this timeout by setting the following constant:
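For example (the value is in milliseconds; the attribute name follows the DataProxy backend source, so double-check it against your version of Recline):

```javascript
recline.Backend.DataProxy.timeout = 10000; // wait up to 10 seconds
```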