
Esri Polymer

Posted by JMilneresriuk-esridist Employee Oct 19, 2015



This is a quick post to talk a little bit about Esri Polymer! It's a project I've been working on for a little while (very intermittently) but had never put on GeoNet, so I thought I would write this post!


Before I begin talking about the project I would like to point out what Polymer is. Polymer is a library from Google for building web components. It allows developers to create custom, compartmentalised HTML elements. In a way you can think of a web component as similar to a 'widget'. Some contrived examples might be an image carousel, a YouTube video, or a PayPal checkout; or rather an <image-carousel>, a <youtube-video> and a <paypal-checkout>.


Polymer provides a light wrapper around some native/polyfilled web standards that make up web components, namely:


  • Custom Elements - for defining a unique custom element
  • HTML Imports - for importing HTML
  • Shadow DOM - for encapsulating sub-trees of DOM
  • HTML Template - for reusable inert pieces of DOM


The necessary polyfills come courtesy of webcomponents.js, which allows web components to run on all modern evergreen browsers.


Great stuff James, but why would I ever want to use web components?


The major reason is that it allows us to write markup and code at an abstracted level. You can import a web component into your page and use it in your HTML just like you would a div, a span, or any other element, without having to worry about the underlying implementation.


This works great with things like basemaps, web maps, feature layers, and markers. Each one of those becomes its own element, and we can make feature layers child elements of a map to add them to it.


For example:


<esri-map basemap="dark-gray" centerLng="-0.122" centerLat="51.514" zoom="7">

  <esri-marker lng="-0.5" lat="51.3">
    <esri-marker-title>Hello World</esri-marker-title>
    <esri-marker-content>Some Content</esri-marker-content>
  </esri-marker>

</esri-map>
will produce the following once rendered: a dark gray basemap centred on London, with a marker at (51.3, -0.5) that pops up "Hello World" when clicked.


There are a few caveats to bear in mind:
  • Web components are only supported with polyfills on modern browsers
  • Polymer is a relatively early stage project
  • Some argue that you shouldn't use web components in production


If you can live with these things then they are great fun to experiment with!




You can find the project here: JamesMilnerUK/esri-polymer · GitHub


The Latest Version


I've just updated the project, hopefully with some niceties:


  • Polymer 1.1.5
  • ArcGIS JS 3.14
  • Cleaner code
  • Fewer hacks for element lifecycles and interacting with parent and child elements

It's been a while since I've posted anything to GeoNet, but I thought I'd share some samples I was working on this weekend at Hackference to showcase some basic functions in ArcGIS. The repo has examples of first steps with the API, with code samples such as:


  • Creating a map
  • Creating a map from a Web Map ID
  • Adding markers (AKA Graphics, Points) to the map
  • Adding a Feature Layer to a map
  • Adding a CSV file to a map
  • Adding a KML file to a map
  • Creating a basic heatmap


These samples are meant to complement some of the samples available from the JavaScript samples page. Their focus is on simplicity, commenting, and the bare minimum amount of code needed to meet a requirement ('add a point to a map'). This is mostly useful for things such as teaching, hackathons and coding sessions with people completely new to the ArcGIS JavaScript API.


You can see the repo on GitHub here:


JamesMilnerUK/ArcGISHelloWorld · GitHub

The Zip File


Got some simple, commented, single-function examples to add? Why not make a pull request!

Recently I came across an article that had used the GitHub API to scrape information on the number of users in major US cities. The article gave me the idea to take this a little further and see if we could map out the number of users in each city, or, perhaps more importantly, the percentage of people in each city with a GitHub account. Before we begin, let me point out the obvious flaws in the methodology of this application:


  • Populations are estimations (plus the UK census is now 4 years old)
  • Populations for cities can be difficult to define (city, urban, metro area)
  • GitHub accounts can be owned by companies as well as people
  • Not everyone gives their location on their GitHub account, and people may lie or not update it


Having said this, it's still interesting to explore the available data and try to see or explain any patterns. Plus it's fun!


Data Scraping


Firstly, to get the data into a format that could be mapped, it was necessary to put together a list of the cities I was interested in and assign each of them a population (I used Wikipedia).


Then, using Python and the GitHub API, I scraped the number of accounts that matched each town name. Here it was necessary to try multiple different matches to get accurate data. For example, with London it was necessary to try "London, England", "London, Great Britain", "London, United Kingdom" and "London, UK", as these are all valid locations representing the same place. You will need a GitHub account and a token to avoid rate limiting.
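As an illustrative sketch (the original scraper was written in Python; this JavaScript version just shows the idea, and the helper function is my own), the per-city search queries could be assembled like so, with the total_count field of each response summed to give the city's figure:

```javascript
// Build one GitHub user-search URL per alias of a city. The /search/users
// endpoint and the location: qualifier are real GitHub API features.
function cityQueryUrls(aliases) {
  return aliases.map(function (alias) {
    return 'https://api.github.com/search/users?q=' +
      encodeURIComponent('location:"' + alias + '"');
  });
}

// All of these are valid locations representing the same place
var londonAliases = ['London, England', 'London, Great Britain',
                     'London, United Kingdom', 'London, UK'];
var urls = cityQueryUrls(londonAliases);
// Fetch each URL (with an auth token to avoid rate limiting) and sum the
// total_count fields of the responses to get the count for the city.
```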


The Results


City | GitHub Accounts | City Population | Rate
Cambridge, England | 1313 | 128515 | 1.022
Brighton, England | 588 | 163000 | 0.361
Oxford, England | 551 | 171380 | 0.322
Bath, England | 231 | 88859 | 0.260
Reading, England | 291 | 160825 | 0.181
Durham, England | 68 | 48069 | 0.141
Bristol, England | 837 | 617000 | 0.136
York, England | 260 | 204439 | 0.127
Norwich, England | 165 | 140452 | 0.117
Edinburgh, Scotland | 801 | 782000 | 0.102
London, England | 9291 | 9787426 | 0.095
Glasgow, Scotland | 558 | 589900 | 0.095
Dundee, Scotland | 133 | 153990 | 0.086
Exeter, England | 98 | 121800 | 0.080
Belfast, Northern Ireland | 216 | 276705 | 0.078
Bangor, Wales | 11 | 16358 | 0.067
Aberdeen, Scotland | 125 | 189120 | 0.066
Cardiff, Wales | 283 | 447287 | 0.063
Bournemouth, England | 116 | 183491 | 0.063
Sheffield, England | 362 | 640720 | 0.056
Nottingham, England | 389 | 729977 | 0.053
Liverpool, England | 236 | 466415 | 0.051
Manchester, England | 1291 | 2553379 | 0.051
Plymouth, England | 122 | 256600 | 0.048
Swansea, Wales | 101 | 239023 | 0.042
Newcastle, England | 351 | 879996 | 0.040
Southampton, England | 312 | 855569 | 0.036
Inverness, Scotland | 21 | 57960 | 0.036
Leicester, England | 143 | 509000 | 0.028
Leeds, England | 496 | 1777934 | 0.028
Gloucester, England | 29 | 125649 | 0.023
Warwick, England | 29 | 139396 | 0.021
Birmingham, England | 503 | 2440986 | 0.021
Newport, Wales | 26 | 145700 | 0.018
Derry, Northern Ireland | 6 | 83652 | 0.007
Aylesbury, England | 13 | 184560 | 0.007
Lisburn, Northern Ireland | 4 | 71403 | 0.006




We can see that highest on the list is Cambridge, with over 1% of the population having a GitHub account. Lowest on the list was Lisburn with 0.006%, closely followed by Aylesbury (where I live) and Derry with 0.007%. To put this into perspective, the original Hirily analysis found 3% of San Francisco's population had a GitHub account!
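For clarity, the Rate column is just accounts divided by population, expressed as a percentage; a quick sanity check against the table above (rounding to three decimal places is my assumption about how the figures were produced):

```javascript
// Percentage of a city's population with a GitHub account, to 3 decimal places
function accountRate(accounts, population) {
  return Math.round((accounts / population) * 100 * 1000) / 1000;
}

accountRate(1313, 128515); // Cambridge → 1.022
accountRate(4, 71403);     // Lisburn → 0.006
```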


Making the Map


The script outputs a CSV, which was then uploaded to the ArcGIS Online content pane using a developer account. When uploading the CSV we can set the city column to be geocoded. This allows us to take the address of the city and turn it into a latitude and longitude, in turn allowing us to map the data.




The process asks if you want to review the results (probably worthwhile, as some points can end up astray). Once this was done, I had a Feature Service of the data (a REST endpoint we can get our data from). From here I took this into an Esri Leaflet map (one of Esri's GitHub projects!). The main bulk of the mapping is outlined in the JavaScript code below:


    var map = L.map('map').setView([54.514, -2.122], 6);

    var ukGitHub = '...'; // the Feature Service URL (elided in the original post)
    var gh = L.esri.featureLayer(ukGitHub, {
        pointToLayer: function (geojson, latlng) {
            var rate = geojson.properties.Rate; // the Rate column from the CSV
            var size;

            if (rate >= 0.361 && rate < 1.2) {
                size = [65, 63];
            } else if (rate >= 0.181 && rate < 0.361) {
                size = [55, 53];
            } else if (rate >= 0.095 && rate < 0.181) {
                size = [45, 43];
            } else if (rate >= 0.046 && rate < 0.095) {
                size = [35, 33];
            } else if (rate >= 0 && rate < 0.046) {
                size = [25, 23];
            }

            return L.marker(latlng, {
                icon: L.icon({
                    iconUrl: 'imgs/github4.png',
                    iconSize: size,
                    iconAnchor: [size[0] / 2, size[1] / 2],
                    popupAnchor: [0, -11]
                })
            });
        }
    }).addTo(map);



Screenshot and Live Demo


A screenshot of the map can be seen below; a live demo can be seen here.




Where's the code?


You can find the code on my GitHub account: JamesMilnerUK/github-mapping · GitHub 


What is it?


A simple map that allows users to search for tweets referring to a specific keyword in a given view extent. Currently the Twitter Search API only allows you to pull back a maximum of 100 tweets, but it gives you a flavour of what you might be able to do if you had access to the Twitter firehose! The application also demonstrates how to do common tasks you might normally reach to jQuery for via Dojo instead; for example DOM querying, styling, and AJAX using the dojo/request module. This means we as developers don't have to import an extra library whilst using the ArcGIS JavaScript API, reducing page weight.


What does it look like?



Live Demo


You can see the live demo by following the link here. The Twitter API is rate limited, so there is some possibility that the auth credentials used may hit their allocated limit.


GitHub Repository


The project is available from:


The Twitter Search API


The Twitter search API is defined here. The API allows users to get hold of tweets although it is important to note:


"[the] Twitter’s search service and, by extension, the Search API is not meant to be an exhaustive source of Tweets. Not all Tweets will be indexed or made available via the search interface. "


So, not suitable if you are looking for a full picture, but still useful for a bit of fun mapping tweets. An alternative if you need an exhaustive list of tweets may be to examine companies with Twitter Firehose access; for example Gnip (recently bought by Twitter) and DataSift (although their access appears to be ending in August 2015).


The Twitter Search API requires authentication. You can register a new app which will provide you with the authentication credentials required at Twitter Application Management. Once you have registered you can get hold of four things:


  • Consumer Key (API Key)
  • Consumer Secret (API Secret)
  • Access Token
  • Access Token Secret


From here I found the easiest way to get going with the API without having to worry about authentication implementation details was to use a helper library. In this case I've used TwitterAPIExchange (PHP) by J7mbo, but it appears there is a node.js module and also a Ruby Gem depending on how you roll.


The Twitter Search API has a few parameters that we are specifically interested in; in this case, geocode and count. geocode represents a latitude, longitude (note the order!) and radius that we are interested in returning tweets from (km or mi are accepted as units). In the case of this application we use the centre point of the map, and then take the width of the extent in kilometres divided by two as the radius. For the count we keep this capped at 100, as this is the maximum!
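To make the parameter shapes concrete, here is a sketch (the helper function and values are illustrative; q, geocode and count are the real Search API parameter names):

```javascript
// Assemble the Twitter Search API parameters for a map view.
// geocode is "latitude,longitude,radius" - note latitude comes first!
function buildSearchParams(keyword, centerLat, centerLng, extentWidthKm) {
  return {
    q: keyword,
    geocode: centerLat + ',' + centerLng + ',' + (extentWidthKm / 2) + 'km',
    count: 100 // the maximum the Search API allows
  };
}

var params = buildSearchParams('coffee', 51.514, -0.122, 40);
// params.geocode → "51.514,-0.122,20km"
```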


Frontend and ArcGIS JavaScript API


We firstly check if HTML5 geolocation is available; if so, we instantiate a map at that part of the world using the provided coordinates. If not, we instantiate a map near the middle of the Earth (0, 30) at a reasonable zoom level (8). We also provide the user with a box for the keyword they are looking for. We then make an AJAX request to the PHP script 'gettweets.php'. As mentioned, we pass it the latitude, longitude, radius and the Twitter keyword we're interested in. The tweets are returned back to our map, and we create a graphic for each one at its geotagged location (from the geo object of the Twitter API JSON payload). We do this using a PictureMarkerSymbol and a suitable InfoTemplate with the user's avatar, the content of the tweet and the date (again from the payload). The layout uses Bootstrap to allow for a responsive design that is suitable for most devices. We also make use of the Search widget, to allow for simple location finding for users.


Where could this be taken?


You (or I, for that matter!) could go on to compare how different tweets compare geographically, or use a heatmap renderer or cluster layer to explore the distribution better (maybe a little redundant with the 100-tweet limit, however). Alternatively, if you have Twitter firehose access it might be easy to switch out the API.


Other things that might interest you


ComeFlyWithMe is an application built by two university students, Max Maybury and Darren Gilbert. It allows you to see what's directly underneath a flight at any given point in time, using a combination of the ArcGIS JavaScript API and the FlightAware API. The project started at SpaceApps London, held at Inmarsat.




Max:"We were provided with a list of challenges that we could pick from, and we chose one where the challenge was to provide the user with a view from a plane. We had a few ideas to use Google Earth, but found the API didn't really exist, so we just decided to integrate Google Maps, and move the map to represent the movement of the plane. We use the altitude of the plane to affect the zoom of the map, so that the map becomes more zoomed in as the plane begins its descent."


But they didn't feel this application was enough to wow the judges, so they began to look deeper. They started thinking about how they could make the app more realistic and visually appealing, looking to the plugin-free web graphics library WebGL.


Max: "[we started to] look into the weather, and how we could represent that on the screen. We use WebGL to dynamically generate cloud coverage, based on a percentage we get from a weather API. We also add a filter for flights which are flying through the night."




After the event, we realised that the maps that Esri provide are better quality, so we decided to integrate them into the system. We had a few problems which we had overcome here, trying to find out if the zoom levels of the Esri maps differ from Google Maps, and how we could render the map more quickly. We removed the pan duration, pan rate, zoom duration, and zoom rate, when we initially set the centre point of the map. This meant that the map loaded a lot quicker when a flight was entered.


We used the FlightAware API to get more accurate location information for the flights. We didn't want to call the API too much, as this would slow the server down a lot, so we wrote our own calculations for the flight path based on the bearing of the plane and the speed it was travelling at. We called the flight API every minute or so, so that it could fix its position. The website uses Node.js in the backend, with WebGL and the Esri maps API in the front end."


Max and Darren won the People's Choice Award at SpaceApps London, with the judges citing that "Come Fly with me was the best example of a fully working demo after 24 hours and something we could see airlines being interested to show where you are in flight in as well as friends / family interested in where loved ones are currently mid flight."




They also exhibited the app at the Inmarsat booth during the Esri UK Annual Conference on May 19th at the QEII Centre in Westminster, London, where Inmarsat was showing off innovative use of flight location data. In the future they see opportunities to feature the flights on in-flight entertainment systems, so that users can find out more about where they are travelling over. They state that this could also lead to all sorts of additional exciting data being overlaid onto the surface imagery.


If you are interested in the code you can check out the project's GitHub repo here:

Like most starters with the ArcGIS JavaScript API, I had to get to grips with Dojo. I'll make a bit of a concession in saying I've put off learning Dojo, but I have been feeling recently that it is a necessity in order to use the JS API to its full potential and avoid shipping redundant code. I'll be going back to remove jQuery from some of my examples over the coming months. Apart from being unnecessary alongside Dojo, jQuery also adds page weight, which wastes bandwidth and increases page load times. The vast majority of things jQuery does, Dojo can also do. As I've been digging into Dojo (their docs are now beautiful!) I've realised it is nowhere near as 'hard' or 'verbose' as people were making it out to be. In fact, when it comes to DOM manipulation and traversal, it's more or less identical to jQuery in most scenarios.


Let's assume we have the following set of very basic HTML in our web page:


<div class="box red">
    <p id="one">1</p>
    <p id="two">2</p>
    <p id="three">3</p>
</div>
<div class="box blue">
    <p class="letter">a</p>
    <p class="letter">b</p>
    <p class="letter">c</p>
</div>


Here's how we might go about selecting elements using jQuery:


     var jQueryOne = $("#one");
     var jQueryRed = $(".red");
     var jQueryRedAndBlue = $(".red, .blue");


But realistically this is also trivial using Dojo:


// We're using dojo/query
require(["dojo/query", "dojo/domReady!"], function(query){

    // query methods
    var dojoOne = query("#one");
    var dojoRed = query(".red");
    var dojoRedAndBlue = query(".red, .blue");
    var dojoQueryChildren = query(".red > *"); // This will return children of .red
});



Notice that there is essentially no difference in the code structure (minus AMD module loading; we load 'query' for DOM selection and domReady to wait until the DOM has loaded). Dojo also returns a NodeList, which is essentially a JavaScript Array of the selected DOM elements, decorated with helper functions.


But what if we want to begin to traverse the DOM? In jQuery you may be used to doing something like:


    var jQueryChildren = $(".red").children();
    var jQueryParent = $("#one").parent();
    var jQueryParents = $("#one").parents();
    var jQuerySiblings = $("#one").siblings();
    var jQueryNext =  $("#one").next();


And so forth. Again, using Dojo this is actually very similar and simple:


require(["dojo/query", "dojo/NodeList-traverse", "dojo/domReady!"], function(query){
    // NodeList-traverse methods
    var dojoChildren = query(".red").children();
    var dojoParent = query("#one").parent();
    var dojoParents = query("#one").parents();
    var dojoSiblings = query("#one").siblings();
    var dojoNext = query("#one").next();
});


Again notice that the implementations are very similar. This time we loaded in NodeList-traverse as well as query.


Another aspect of dealing with the DOM is wanting to hide, show, style and destroy DOM elements (and such things). In jQuery we may do something like this:


    $("#one").css("color", "orange"); // Change the css color to orange
    $("#two").remove(); // Remove the element from the DOM
    $(".blue").removeClass("blue"); // Will remove the blue background color
    $("#one").hide(); // Hides the element
    $("#one").show(); // Shows the element again


You can achieve the same effect using Dojo (loading dojo/NodeList-dom for the style and class helpers):


    require(["dojo/query", "dojo/NodeList-dom", "dojo/domReady!"], function(query){
        query("#one").style("color", "orange");
        query(".blue").removeClass("blue"); // Remove the blue background color
        query("#one").style("visibility", "hidden"); // Hide the element
        query("#one").style("visibility", "visible"); // Show it again
    });


If you ever want to iterate over a returned list of DOM nodes in Dojo you can use dojo.forEach (which also gives you ES5's forEach functionality in older browsers). In jQuery you can use $.each. Another solution is just to use a native for loop:


for (var i = 0; i < someNodes.length; i++) {
    console.log(someNodes[i]); // Do something with each node
}


You can see the full example here on this gist: jQuery to Dojo.html


Hopefully this post has evidenced how simple it is to switch out your jQuery without any real detriment to your code or its readability! I'll be doing another couple of posts on how to convert your jQuery code; one on AJAX calls and another on animations.




Alongside the Twilio API, I've recently been exploring a few other interesting APIs to make use of in projects. One of these is the Soundcloud API. For those of you who are unfamiliar, Soundcloud is a platform for people to share music they've made with other users.


The application allows users to search for a genre of music (i.e. Dance) and then sifts through tracks pulled through the API with this genre attributed, attempting to geocode their locations. It does this by looking at the city and country information provided by Soundcloud and passing that into the ArcGIS geocoding service. Once geocoded, they are placed onto the map. The Soundcloud tracks API provides information relating to the tracks (artist name etc.) enlisted with that genre, and also a nicely embeddable iframe encapsulating a Soundcloud music player for that track. The app takes this and puts the music player within the popup when the user selects one of the pins on the map.


The page makes use of the Esri Leaflet API, as I wanted to explore this library, especially as this was a slightly more lightweight application. The app also makes use of the spiderfy Leaflet plugin, which helps deal with the issue of overlapping pins at the same location. The plugin allows you to click on a cluster of pins and it spins them out from their original centre so you can click them individually. This is useful in this scenario, as we only get city-level granularity for track locations.


What does it look like?



Remind me, how did you map the songs again?


Soundcloud's API does not provide a latitude and longitude for tracks. However, they do provide a textual location for each track. Via the ArcGIS geocoding service it was possible to geocode these addresses. Some checking is done to avoid geocoding empty strings, "Global", or "Worldwide".
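As a sketch of that checking step (the function name is mine; the find endpoint shown is the ArcGIS World geocoding service's, as used at the time of writing):

```javascript
// Return a geocoding request URL for a track's textual location,
// or null for locations we know are not geocodable.
function geocodeUrl(location) {
  var unusable = ['', 'Global', 'Worldwide'];
  if (!location || unusable.indexOf(location) !== -1) {
    return null;
  }
  return 'http://geocode.arcgis.com/arcgis/rest/services/World/GeocodeServer/find' +
         '?f=json&text=' + encodeURIComponent(location);
}

geocodeUrl('Worldwide');        // → null, we skip this track
geocodeUrl('Berlin, Germany');  // → a find request for "Berlin, Germany"
```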


Can I have a play? Where is the code?


The live demo is available here. You can also find the code on GitHub at this url.

For a long while I've been wanting to make an app that implements both the Twilio and ArcGIS APIs. Thankfully I've had a little more time recently, and over the past couple of weeks I have been working on On My Way. On My Way is a basic web application that highlights the power of the ArcGIS routing REST API and the Twilio SMS API. If you haven't tried it yet, the routing API allows you to route between two given locations as a service. It has a host of parameters to try out, including method of transport (walking, driving, trucking), directions and barriers (places you don't want to route through). The Twilio SMS API provides developers a simple interface for sending texts (SMSs) to numbers of their choosing, and also has a selection of helper libraries in various languages. You can check out Twilio's website for more information on their services.


OK cool, what does it do?


A basic outline of the functionality of 'On My Way' is as follows:


  1. User A inputs a friend's postcode (ZIP code?!) and the friend's mobile number. The user then selects whether they are driving or walking.
  2. User A is then presented with a screen informing them their friend has been texted with their mode of transport and how long it will take them. They are also given a map demonstrating the suggested route.
  3. User B is delivered a text with this information.
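As a rough sketch of the routing call behind step 2 (the Route_World solve endpoint is Esri's world routing service; it also needs a token parameter, which I've omitted, and the helper here is illustrative rather than the app's actual PHP):

```javascript
// Build the ArcGIS routing REST request for a route between two points.
// Stops are given as "x,y; x,y", i.e. longitude,latitude pairs.
function routeRequestUrl(fromLngLat, toLngLat) {
  var stops = fromLngLat.join(',') + '; ' + toLngLat.join(',');
  return 'https://route.arcgis.com/arcgis/rest/services/World/Route/' +
         'NAServer/Route_World/solve?f=json&stops=' + encodeURIComponent(stops);
}

// e.g. routing from central London to a friend's geocoded postcode
var url = routeRequestUrl([-0.122, 51.514], [-0.178, 51.487]);
```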




How was it Implemented?


Front End

The front end made use of some really nice input effects (built by Codrops; see here for the GitHub repo and here for the article). I went with the 'Yoko' style, which gives this cool 'popup'-like user input with the label text underneath. The app also made use of some Google Fonts; Pacifico for the title and Raleway for the input labels / main text. The background image was an open image from


Once User A has submitted the details (User B's phone number and postcode) they receive a map with the suggested route to User B on it. The map takes the route JSON passed back from the REST request made in the PHP script via AJAX.


Back End

The back end used PHP (boo, hiss), making ArcGIS REST requests using cURL and outputting the results both to an SMS via Twilio and back to the client via AJAX. The Twilio SMS API was very easily implemented using their PHP SDK. I imagine it would be a similar task to implement the functionality in other languages such as Python or Ruby.


Can I have a play?


I've open sourced the project and left it on my GitHub account for everyone to have a tinker with if they so wish. You can find the link here. You can also see a live demo here.

KML, or Keyhole Markup Language, is an XML data structure for storing geographic information. The format was made popular by Google when they acquired Keyhole in 2004 and used it as their de facto file type for geographic data storage with their mapping products. It is now an OGC standard.


What does KML look like?


Below we can examine a snippet of KML (thanks Wikipedia!). On the first line it is possible to see how the markup starts with a standard XML declaration. The second line defines the markup as being KML. On line 4 we then define a 'Placemark', with child nodes defining things like the place name and description. Google uses Placemarks as very simple markers for positioning a data point on a map; the default symbology is a yellow pin. If we jump to line 7, it is possible to see where we explicitly state that this is a Point geometry, with the coordinates coming as a child node (line 8) of this.


<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
<Document>
<Placemark>
  <name>New York City</name>
  <description>New York City</description>
  <Point>
    <coordinates>-74.006393,40.714172,0</coordinates>
  </Point>
</Placemark>
</Document>
</kml>


How can we use KML in our ArcGIS JavaScript app?


The ArcGIS JavaScript API provides a module for dealing with KML files: KMLLayer. KMLLayer allows us to provide a URL to a public-facing KML file and use that geographic data in our ArcGIS JavaScript app. Indeed, because of the nature of KML the API actually turns the KML into a series of Feature Layers using an ArcGIS service, so it's easier to think of the KMLLayer as a group of layers as opposed to one homogeneous layer.


The KMLLayer converts the KML file into up to three feature layers: one for points, one for lines and one for polygons. For most intents and purposes you can use a KML layer like you would any other Feature Layer; for example, you can query them and use them in geoprocessing tasks.


It is important to note that we must use an explicit absolute path to our KML file for the KMLLayer to work! This is because the file path gets sent to a utility service to process the KML into the returned Feature Layers.



A Hello World Example


require(["esri/map", "esri/layers/KMLLayer", "dojo/domReady!"],
    function(Map, KMLLayer) {

        var map = new Map("map", {
           basemap: "gray",
           center: [0, 30], // longitude, latitude
           zoom: 4
        });

        // A KML Layer: We must explicitly state the full URL (relative URLs will throw errors!)
        var layer = new KMLLayer("", {}); // the KML URL was elided in the original post
        layer.on("load", function() {
            var layers = layer.getLayers(); // the generated sub layers
            console.log(layers);
        });

        map.addLayer(layer); // Add the layer to the map
    });



Here we create a KMLLayer and then wait for it to return the relevant Feature Layers from the utility service. We use the .on method to wait for it to load, and then console log its layers. This demonstrates how we can quite simply take a KML URL and have a layer on the map in a few lines of code. The KMLLayer will also maintain symbology and attributes.


How can I use firewalled KML files?


You will need to use your own utility service which requires Portal for ArcGIS. After this you can configure your JavaScript to use the service:


require(["esri/config"], function(esriConfig) {
  esriConfig.defaults.kmlService = "http://servername.domain.suffix/arcgis/sharing/kml";
});





Following on from my previous post examining GraphicsLayers and FeatureLayers, I now want to look at CSVLayers (We'll look at KMLLayers next time!).




CSVLayers allow you to add CSV data to a map. When we create a new layer, the module will work out the geographic attributes in the provided CSV table and use these to place the point data on the map (see below the code for allowed geographic field names). Let's imagine we have a CSV file of Boris Bike stations (a cycle share scheme) in London, alongside how many available docks there currently are at each station. One of the fields is titled 'long' and another 'lat'. We can simply create a new layer by creating a new CSVLayer object, passing the first argument as a string with the location of the CSV file. In this example, for simplicity, the file is stored on the same web host in a folder called 'csv'.


We must then pass an object with a 'fields' array to express explicitly which field data we wish to pull through from the CSV file. Each field object consists of the key 'name' with the field name as its value, and the key 'type' with a string expressing the type of the field. Allowed types are "Date", "Number" and "String".
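For illustration, the first few rows of a borisbikes.csv laid out this way might look like the following (the station names and values here are made up):

```csv
name,long,lat,empty_docks
"Hyde Park Corner",-0.153,51.503,12
"Waterloo Station",-0.113,51.504,3
"Holborn Circus",-0.105,51.517,7
```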


Let's take a look at a working example, as it will make a lot more sense! Note how the fields we wish to pull through are set up with the appropriate types.


require(["esri/map", "esri/layers/CSVLayer", "esri/symbols/PictureMarkerSymbol",
         "esri/renderers/SimpleRenderer", "dojo/domReady!"],
    function(Map, CSVLayer, PictureMarkerSymbol, SimpleRenderer) {

        var map = new Map("map", {
           basemap: "gray",
           center: [-0.122, 51.514], // longitude, latitude
           zoom: 13
        });

        // CSV Layer created using the fields we want to bring through to the client
        var layer = new CSVLayer("csv/borisbikes.csv", {
          fields: [{name: "name", type: "String"}, {name: "empty_docks", type: "Number"}]
        });

        var logo = new PictureMarkerSymbol("imgs/logo.png", 16, 11); // Define a marker image
        var simpleRenderer = new SimpleRenderer(logo); // Define a new renderer
        layer.setRenderer(simpleRenderer); // Set the simple renderer on the layer
        map.addLayer(layer); // Add the layer to the map
    });



I've tested some longitude and latitude field titles and the following have worked for me:


  •      long, lat
  •      lon, lat
  •      longitude, latitude
  •      Longitude,  Latitude
  •      x, y
  •      X, Y
  •      Mixtures of these appear to work, although I would avoid it for sanity reasons!


It is important to note the key differences between the CSVLayer and the FeatureLayer, which are expressed in the documentation.


  • "edits are applied on the client not posted to the server."
  • "The feature layer generates a unique object id for new features."
  • "[CSVLayer] Does not support queries that need to be performed on the server, e.g. queries with a where clause or non-extent based spatial queries."
  • "The feature layer toJson method returns an object with the same properties as the feature collection. The returned object includes all the features that are in the layer when the method is called. This method can be used to access a serializable representation of the features that can be saved on the server."


With regards to cross-domain issues, if you are using a CSV file that is not from the same domain as the hosting website, you will require a server with CORS enabled, or a proxy.
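If you go the proxy route, the 3.x API can be pointed at your proxy page via esriConfig (a config fragment; the path below is a placeholder - Esri publish ready-made proxy pages for PHP, .NET and Java in their resource-proxy project):

```javascript
require(["esri/config"], function (esriConfig) {
  esriConfig.defaults.io.proxyUrl = "/proxy/proxy.php"; // wherever your proxy page lives
  esriConfig.defaults.io.alwaysUseProxy = false; // only route requests through it when needed
});
```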

Introducing Layers - Part 1


Layers are data sets (generally of the same theme) that can be added to your map; for example you might have a data layer expressing local police stations, or cafes. Your web map might have multiple layers, and in the ArcGIS JavaScript API you can mix and match your layers. Think of the map as your canvas and your layers as your paint. There are several different types of layer available to you, but in this blog post I will cover two of the most used and fundamental layer types: the Graphics Layer and the Feature Layer.


Graphics Layers


Graphics Layers allow you to add arbitrary markers (graphics) to your map. A graphics layer might contain markers with the locations of benches, bins, or trees in your local area, for example. When you initialise a map using the ArcGIS JavaScript API, it comes with a graphics layer initially; however, you can add more layers where necessary.


You can add graphics to a GraphicsLayer. Graphics are constructed using a geometry, a symbol, attributes, and/or an infoTemplate (an infoTemplate is a template for the popup that appears when you click on a graphic on the map).


Attributes are just a JavaScript object with name/value pairs, i.e. { key1: value1, key2: value2 }. You can assign attributes via the graphic.attributes property by assigning a JavaScript object.
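A minimal sketch of this (the graphic below is a plain object standing in for an esri/graphic instance, and the attribute values are made up for illustration):

```javascript
// Attributes are name/value pairs on a plain JavaScript object.
var attributes = {
    NAME: "Radcliffe Camera",
    CITY: "Oxford",
    OPENED: 1749
};

// Stand-in for an esri/graphic instance, purely for illustration:
var graphic = {};
graphic.attributes = attributes;

console.log(graphic.attributes.NAME + ", " + graphic.attributes.CITY);
// "Radcliffe Camera, Oxford"
```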


Geometries can be constructed from coordinates from arbitrary data sources, such as databases or iterated API responses; just make sure that you know which coordinate system such data uses.


You can also create a graphic from a JSON object. Visibility can be turned on and off using the visible property (Boolean).

We can use Multipoint, Point, Polygon, and Polyline geometries, each paired with specific types of symbols.



Text symbols require point geometries to be added to the map, although a good example of getting a point geometry from an arbitrary polygon can be seen here.
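The geometry-to-symbol pairings can be sketched as a simple lookup, useful for validating input before constructing a Graphic (the symbol class names come from the 3.x API; the helper itself is illustrative, not part of the API):

```javascript
// Which symbol types pair with which geometry types (3.x API classes).
var symbolsForGeometry = {
    point:      ["SimpleMarkerSymbol", "PictureMarkerSymbol", "TextSymbol"],
    multipoint: ["SimpleMarkerSymbol", "PictureMarkerSymbol"],
    polyline:   ["SimpleLineSymbol", "CartographicLineSymbol"],
    polygon:    ["SimpleFillSymbol", "PictureFillSymbol"]
};

// Illustrative helper: is this symbol valid for this geometry?
function isValidPairing(geometryType, symbolType) {
    return (symbolsForGeometry[geometryType] || []).indexOf(symbolType) !== -1;
}

console.log(isValidPairing("point", "TextSymbol"));    // true
console.log(isValidPairing("polyline", "TextSymbol")); // false
```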


Here is an example of setting up a graphics layer with a SimpleMarkerSymbol, a custom point geometry, and an infoTemplate:


var map;

require(["esri/map", "esri/graphic", "esri/geometry/Point",
         "esri/symbols/SimpleMarkerSymbol", "esri/InfoTemplate",
         "esri/layers/GraphicsLayer", "dojo/domReady!"],
     function(Map, Graphic, Point, SimpleMarkerSymbol, InfoTemplate, GraphicsLayer) {
         map = new Map("map", {
          basemap: "streets",
          center: [-1.268388, 51.753249], // longitude, latitude
          zoom: 17
         });

         map.on("load", function() { // Makes sure that the map is loaded
          var locationLayer = new GraphicsLayer();
          var point = new Point(-1.268388, 51.753249);
          var symbol = new SimpleMarkerSymbol().setColor("#1036DE").setSize(20);
          var infoTemplate = new InfoTemplate();
          infoTemplate.setTitle("<b>Saïd Business School</b>");
          infoTemplate.setContent("Park End St, Oxford, OX1 1HP <br> <b>Coordinates:</b> 51.753249, -1.268388");
          var graphic = new Graphic(point, symbol, null, infoTemplate);
          locationLayer.add(graphic);
          map.addLayer(locationLayer);
         });
     });



Feature Layer


The FeatureLayer is similar to the GraphicsLayer (it inherits from it); however, it uses a Feature Service to provide its geometry data, attributes, and symbology. You can create feature services from your developer account (Hosted Data -> Create Feature Service), including from CSV files, uploaded Shapefiles, or GeoJSON. You can edit Feature Services through the ArcGIS Online graphical interface. When the feature layer is added to the map, symbols and attribute data are pulled through for the current view extent.


The feature service has four modes that can be used in its constructor:


  • MODE_AUTO: “If the total number of features in a layer are less than maxRecordCount and total vertexes is less than 250,000, snapshot mode is used. Otherwise, on-demand mode is used.”
  • MODE_ONDEMAND: “In on-demand mode, the feature layer retrieves features from the server when needed.”
  • MODE_SELECTION: “In selection mode, features are retrieved from the server only when they are selected.”
  • MODE_SNAPSHOT: “In snapshot mode, the feature layer retrieves all the features from the associated layer resource and displays them as graphics on the client.”


The default mode is MODE_ONDEMAND. We can also use attributes from the feature service in the infoTemplate (the popup on click) using string substitution, e.g. ${FIELD_NAME}.
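The ${field} substitution can be illustrated with a tiny stand-in function (this is not the API's implementation, just a sketch of what the substitution does with a graphic's attributes):

```javascript
// Illustrative only: replace ${FIELD} placeholders with attribute values,
// leaving unknown fields untouched.
function substitute(template, attributes) {
    return template.replace(/\$\{(\w+)\}/g, function(match, field) {
        return field in attributes ? String(attributes[field]) : match;
    });
}

var attributes = { FIELD_NAME: "Hugoton Gas Area", STATE: "Kansas" };
console.log(substitute("${FIELD_NAME} (${STATE})", attributes));
// "Hugoton Gas Area (Kansas)"
```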


Here is an example of using a hosted Feature Service on a map using MODE_ONDEMAND:


var map;
require(["esri/map", "esri/layers/FeatureLayer", "esri/InfoTemplate", "dojo/domReady!"],
  function(Map, FeatureLayer, InfoTemplate) {
     map = new Map("map", {
       basemap: "streets",
       center: [-98.2926121, 38.49233], // longitude, latitude
       zoom: 8
     });
     var infoTemplate = new InfoTemplate("${FIELD_NAME}", "Oil and Gas Field");
     var featureLayer = new FeatureLayer("<your feature service layer URL>", {
          mode: FeatureLayer.MODE_ONDEMAND,
          infoTemplate: infoTemplate,
          outFields: ["*"]
     });
     map.addLayer(featureLayer);
  });


To get arbitrary data from the feature layer you must perform a query. The query API is fairly extensive, but one of its most important features is query.where, which allows for SQL-like querying.


//initialize & execute query
var queryTask = new esri.tasks.QueryTask("<your map service layer URL>");

var query = new esri.tasks.Query();
query.where = "STATE_NAME = 'Washington'";
query.outSpatialReference = {wkid:102100};
query.returnGeometry = true;
query.outFields = ["CITY_NAME"];
queryTask.execute(query, addPointsToMap);

//add points to map and set their symbology + info template
function addPointsToMap(featureSet) {
    dojo.forEach(featureSet.features, function(feature) {
        // set the symbol and info template on each feature here,
        // then add it to the map
        map.graphics.add(feature);
    });
}
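When the where clause comes from user input, it is worth escaping single quotes before building the expression (SQL doubles them). This small helper is hypothetical, not part of the API; the query object below is a plain stand-in for an esri.tasks.Query instance:

```javascript
// Hypothetical helper: build a "field = 'value'" clause, doubling any
// single quotes in the value so the SQL-like expression stays valid.
function buildWhere(field, value) {
    return field + " = '" + String(value).replace(/'/g, "''") + "'";
}

var query = {}; // stands in for an esri.tasks.Query instance here
query.where = buildWhere("STATE_NAME", "Washington");
console.log(query.where); // "STATE_NAME = 'Washington'"
console.log(buildWhere("CITY_NAME", "O'Fallon")); // "CITY_NAME = 'O''Fallon'"
```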


Next in the series I aim to examine CSVLayers, KMLLayers and potentially MapImageLayers.

When using the ArcGIS JavaScript and REST APIs for building an app to target public users (as opposed to ArcGIS named users), it is common to want to use credit-consuming services such as routing, geoenrichment, and batch geocoding. In order to do this you must authenticate your application with the target service. The recommended way of doing this is through OAuth2 and application credentials.



      var map;

      require(["esri/map", "esri/urlUtils", "dojo/domReady!"], function(Map, urlUtils) {
            map = new Map("map", {
                  basemap: "topo",
                  center: [-122.45, 37.75], // longitude, latitude
                  minZoom: 4, // Minimum zoom level
                  maxZoom: 15, // Maximum zoom level
                  sliderStyle: "small", // Small slider, other option is large
                  zoom: 13 // Zoom level
            });

            // Setup the proxy rule
            urlUtils.addProxyRule({
                  proxyUrl: "/proxy/proxy.ashx",
                  urlPrefix: ""
            });
      });



Here you can see a standard ArcGIS JavaScript API ‘Hello World’ map. However, we set it up to use urlUtils to redirect requests through a proxy, which attaches our credentials for authentication. The location of the proxy is given by the proxyUrl value, and the service we want to use it for by the urlPrefix.


In order to authenticate our requests to credit-consuming services (routing, geoenrichment, etc.) we need to register for a developer account and register a new application. You can register for a free developer account on the ArcGIS for Developers site.


Once you have registered your free developer account, you can create a set of application credentials from your control panel, by clicking the applications tab.




When you do this you will receive a Client ID and a Client Secret (and also a Token, although this will expire). We also get some nicely auto-generated code for server-side REST requests using Python, Node.js, Ruby, Go, and cURL. You may also be interested in the ‘Usage’ tab, which gives you a dashboard of credit usage.





As the JavaScript API wraps the REST API, we could potentially append credentials to REST API calls (via AJAX) in our JavaScript. However, this is considered bad practice and a security vulnerability, as someone could easily steal the credentials by looking at your plain-text JavaScript, which is downloaded by the client.


Instead, it’s best to append them to the requests server side, through a proxy. A proxy runs on your web server and routes your traffic through a single proxy file. The proxy can append the Client ID and Client Secret to the URL, which saves you putting the application credentials in your client-side code.
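Conceptually, what the proxy does server side can be sketched as appending a token (obtained by exchanging the Client ID and Secret) to the outgoing request URL. This is an illustrative stand-in, not the actual proxy code:

```javascript
// Conceptual sketch of what a credential-appending proxy does server side.
// The token would come from exchanging the Client ID/Secret with the
// OAuth2 token endpoint; it never reaches the browser.
function appendCredentials(serviceUrl, token) {
    var separator = serviceUrl.indexOf("?") === -1 ? "?" : "&";
    return serviceUrl + separator + "token=" + encodeURIComponent(token);
}

console.log(appendCredentials("https://route.example.com/solve?f=json", "abc123"));
// "https://route.example.com/solve?f=json&token=abc123"
```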


Esri maintains three file proxies for you to use, with fitting documentation for their usage. The file proxies are available in PHP, .Net and Java, all hosted for download on GitHub under an Apache License.


As an example, let’s imagine you are a web developer running IIS as your localhost web server. You can simply put the proxy in your root web folder (normally something like C:\inetpub\wwwroot) and update the proxy to deal with REST requests made from the JavaScript API.


The only file we are directly interested in is proxy.config. The config is an XML file that we can update to choose specific URLs to apply our credentials to. For example (fill in your own clientId and clientSecret):


<serverUrl url="//"
           matchAll="true"
           clientId="YOUR_CLIENT_ID"
           clientSecret="YOUR_CLIENT_SECRET"
           rateLimit="600"
           rateLimitPeriod="60"/>


Notice how we use a protocol-agnostic // instead of https: or http:. There are a couple of parameters we can change, including rate limiting (limiting the number of requests, to prevent overuse), and also “matchAll”, which “when true all requests that begin with the specified URL are forwarded. Otherwise, the URL requested must match exactly”.


From here we can use the proxy in our JavaScript code via the urlUtils module and this code snippet:


                  urlUtils.addProxyRule({
                        proxyUrl: "/proxy/proxy.ashx",
                        urlPrefix: ""
                  });


You can also change your config to always use a proxy for ArcGIS REST endpoints using the esri/config module:

esriConfig.defaults.io.proxyUrl = "/proxy/proxy.ashx";
esriConfig.defaults.io.alwaysUseProxy = true;


Now that we have configured the proxy we can begin to start using credit consuming services in our JavaScript application. In my next post I will explain how we can use routing services using our application credentials.