Maps

VGI Map Progress

The broad goal of this project is to create a scalable, easy to deploy map that enables user input of point, line and polygon features into a DB that supports some level of moderation. In addition, the UI will support layer control, pan and zoom. While these elements are available through the Leaflet JS library (and it only makes sense to use this library), there isn't a "ready to deploy" iteration of this build for someone who cannot code in JS. As such, I think there is a need for such a tool among researchers, or PPGIS geeks like me. Please see below for a stack concept.

VGI Stack Concept

Within this context, I am asking for help from the broader developer community, and specifically those geonerds with the right skills, by way of GitHub. I have created a VGI Project repo on GitHub with some project parameters, wireframes and a base template (shamelessly stolen from CartoDB), and I will be working with Zachary Schoenberger to fill out more details. My plea:

  1. Help with the project parameters and management;
  2. Poke holes and generally make suggestions on what can be better;
  3. Contribute code.

What do you think?

Building a VGI Web Map 1: Goal and workplan

It's clear from my previous two posts that I want to learn how to build a web map. So, this post lays out my roadmap forward.

Outcome

Front end: A beautiful web map with an intuitive UI, layer controls, and the ability for a user to add point, line and polygon features and to describe those features.

Back end: Upload and display data via layers on the map, moderation controls, the ability to make selected data 'private', and download of the spatial data (plus metadata) in a GIS-friendly format such as .shp or .geojson.
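
To make the front-end goal concrete, here is a minimal sketch using Leaflet with the Leaflet.draw plugin for the point/line/polygon input; the plugin choice and the save hand-off are my assumptions, not a settled design.

    // Front-end sketch: Leaflet plus the Leaflet.draw plugin.
    // Assumes leaflet.js and leaflet.draw.js are loaded on the page.
    var map = L.map('map').setView([53.5444, -113.4909], 12); // centred on Edmonton

    // A layer to hold whatever the user draws
    var drawnItems = new L.FeatureGroup();
    map.addLayer(drawnItems);

    // Toolbar with the marker (point), polyline and polygon tools left on
    var drawControl = new L.Control.Draw({
        draw: { circle: false, rectangle: false },
        edit: { featureGroup: drawnItems }
    });
    map.addControl(drawControl);

    // When a feature is drawn, keep it and hand it off for saving
    map.on('draw:created', function (e) {
        drawnItems.addLayer(e.layer);
        var feature = e.layer.toGeoJSON(); // a description form would fill in properties
        // a save call would go here -- see the database sketch further down
    });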

Map Models

I am modeling my outcome on the look, feel and function of two maps:

Calgary Flood Map

Salish Sea Spill Map

The Tech

After some discussion, and a great email from Hugh Stimson in which he explained the components and the range of options he considered to build his Salish Sea Spill Map, I think I have a way forward. My specific project needs include:

Database:  The options for this include CartoDB, Fusion Tables and MySQL or PostGIS.  I think that I have settled on CartoDB as a middle road - perhaps a little harder to learn than Fusion Tables, but with some more functionality (i.e. I understand that the logic of CartoDB can include moderation by 'queuing' the VGI content for approval by a moderator).  I am learning PostGIS as the standard DB, but find it frustrating and will switch over if I get good enough to manage it.
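
To make the moderation idea concrete, here is a rough sketch of how a contribution could be queued for approval through CartoDB's SQL API; the table name, the approved column and the account URL are all my assumptions, and a real deployment would want writes to go through a sanitized proxy rather than a raw client-side INSERT.

    // Moderation sketch: new contributions land with approved = false,
    // and a moderator flips the flag. Table and column names are assumed.
    function queueFeature(geojson, description) {
        var sql = "INSERT INTO vgi_features (the_geom, description, approved) " +
                  "VALUES (ST_SetSRID(ST_GeomFromGeoJSON('" +
                  JSON.stringify(geojson.geometry) + "'), 4326), '" +
                  description.replace(/'/g, "''") + "', false)";

        var xhr = new XMLHttpRequest();
        xhr.open('POST', 'https://YOUR_ACCOUNT.cartodb.com/api/v2/sql'); // placeholder account
        xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
        xhr.send('q=' + encodeURIComponent(sql)); // plus an api_key for authenticated writes
    }

    // The public map would then only pull approved rows:
    //   SELECT * FROM vgi_features WHERE approved = true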

Base Map: I have not settled on this yet, but think that I will use a simple OSM derivative map, something like MapBox Streets or Stamen Toner. The point of this base map is to provide enough geographic detail that (1) the additional layers of data make sense, and (2) a VGI contributor can select the correct geographic area in which to add their data.
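
For what it's worth, wiring either option into Leaflet is nearly a one-liner; the Stamen Toner URL below is the publicly documented tile template, while a MapBox map would swap in a map id from your account.

    // Base map sketch: a simple OSM-derived tile layer in Leaflet.
    var toner = L.tileLayer('http://tile.stamen.com/toner/{z}/{x}/{y}.png', {
        attribution: 'Map tiles by Stamen Design; data (c) OpenStreetMap contributors',
        maxZoom: 18
    });
    toner.addTo(map); // 'map' is the L.map instance from the earlier sketch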

User Interface: There is a range of functions that need to be accommodated, including:

  • Zoom control
  • Layer control
  • Dialog boxes
  • Navigation bar
  • Forms (and perhaps some form validation)
  • VGI Input

To do this, I think we will need to run Bootstrap and Leaflet, but I am not certain and need to delve into this in greater detail.
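
Pulling the earlier snippets together, here is a hedged sketch of the layer control and dialog pieces; the overlay name and the form markup are placeholders, and the Bootstrap styling is left out (Leaflet's zoom control is on by default).

    // UI sketch, continuing from the snippets above: layer control and dialogs.
    // 'map' and 'toner' come from the earlier sketches.
    var contributions = L.layerGroup().addTo(map); // placeholder VGI overlay

    // Layer control: one base map plus the VGI overlay
    L.control.layers({ 'Toner': toner }, { 'Contributions': contributions }).addTo(map);

    // Dialog box: a popup carrying a minimal description form (validation not shown)
    L.marker([53.5444, -113.4909]).addTo(contributions)
        .bindPopup('<form><input name="description" placeholder="Describe this feature"></form>');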

The final configuration will be CartoDB + Leaflet + JS.

The Plan

  1. Consult with my team to ensure my thinking is correct, and to rough out a more detailed requirements list.
  2. Start using CartoDB and Leaflet to build a basic web map as a prototype.
  3. Build this basic map out until I meet the requirements.

The Plan B

Deploy a 'personal' instance of Ushahidi if Plan A takes too long or is simply beyond my skill.

Conclusion

I plan on regular updates to document my progress and to ask for help. If you are reading this and are interested in contributing some skill, time, or simply advice, please let me know by email or Twitter. I can't pay you, but I will buy you coffee, maybe lunch, and, if you build something, will definitely give you credit on the map.

uMap Test Deployment

I am testing uMap to see if it may work for the purpose of crowdsourcing spatial data. Please have a look and add some content.

uMap looked promising. I loved that it supported some map customization (e.g. the Stamen Toner or the OSM monochrome map theme) and that the interface for adding point, line and polygon features was intuitive. I did not see any moderation tools, but was interested in the embed function... so I tried it. Not impressed: with a map embedded in a WP post, I can zoom but not add any VGI. This will not work for my purpose.

NOTE: 30 October 2015. uMap was not connecting quickly to its servers, which was slowing down my blog load time. As such, I removed all links to uMap.

VGI GeoStack - Some Questions

I am working on a mapping project where, once deployed, we will be looking to gather VGI, or crowdsourced geographic information, from a specific group of Edmontonians. Ethical issues aside (I am looking at Matt Wilson's Critical GIS Reading List), I am trying to get my head around the technology that I will need to deploy to enable a customizable, flexible and light stack for displaying a number of map layers, and for collecting point, line and polygon data, as well as a narrative that may accompany these spatial data. I considered a deployment of Ushahidi's Crowdmap, but feel that it does not offer the speed and UI/UX flexibility that we need. The stack I am considering, and would like feedback and suggestions on, is below (a rough wiring sketch follows the list):

  • PostGIS as the database
  • QGIS + TileMill – to process and style the base-map and layers, and to make the MBTiles.
  • PHP Tile Server – To serve the MBTiles onto the internet.
  • Leaflet JS – for the UI/UX design
  • OSM's iD Editor - for the VGI component to manage the contribution of spatial data.
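
To make the wiring concrete, here is a rough client-side sketch of how these pieces might connect; the tile URL template for the PHP Tile Server and the endpoint for handing contributions off toward PostGIS are assumptions about deployment, not settled choices.

    // Stack wiring sketch, client side only:
    // PostGIS -> QGIS/TileMill -> MBTiles -> PHP Tile Server -> Leaflet.
    var map = L.map('map').setView([53.5444, -113.4909], 11);

    // 1. The styled base map and layers, served as tiles by the PHP Tile Server.
    //    The URL template is a guess at a typical deployment path.
    L.tileLayer('http://example.org/tileserver/edmonton/{z}/{x}/{y}.png', {
        attribution: '(c) OpenStreetMap contributors'
    }).addTo(map);

    // 2. VGI capture: drawn features get shipped to the server as GeoJSON,
    //    where a small server-side script would INSERT them into PostGIS.
    map.on('draw:created', function (e) {          // requires a drawing plugin
        var xhr = new XMLHttpRequest();
        xhr.open('POST', '/api/contributions');    // hypothetical endpoint
        xhr.setRequestHeader('Content-Type', 'application/json');
        xhr.send(JSON.stringify(e.layer.toGeoJSON()));
    });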

I have some questions regarding this:

  1. Is this the best / easiest way to go?
  2. Can the iD Editor be linked to PostGIS, or is there a better way in this instance to glean VGI?
  3. What role can GeoJSON and GitHub play in this stack?

I am still getting my head around this and would appreciate any thoughts.

UPDATE 02 December 2013

I received some good discussion and suggestions for a platform via Twitter from Alan McConchie and Hugh Stimson (Hugh pointed to his Salish Sea Spill Map as a map that incorporates VGI - it's an interesting project and cool map):

Twitter chat with Alan McConchie and Hugh Stimson

I plan on breaking down both uMap and MapStory, and will update this post again with my results.

uMap Update on 03 December can be found here.

 

QGIS, TileMill and MapBox, oh my. Or, web mapping tools I am learning to use.

How do you make a web map?  

This is the question I have been exploring for the past while as I try to expand my basic knowledge of GIS beyond the ARC. As a result of this exploration, I have put a few maps on-line, developed a keen passion for map making and an interest in expanding my skills. This post comprises a list of my mapping tools - those I currently use, those I am actively learning, and those on my list to learn. The geo-stack that I am working toward comprises the following:

  • PostGIS - works as the spatial database for storing and serving the data to either QGIS or TileMill
  • QGIS + TileMill - QGIS is a great tool for analyzing and processing data, TileMill makes it look good and allows an export of MBTiles.
  • PHP Tile Server - This serves the MBTiles onto the internet.
  • Leaflet JS - Leaflet provides the user interface allowing someone on-line to interact with the MBTiles.

While I am learning the components of this stack, I use other things, described below.

Web mapping tools I use

Open Data - Open data forms the basis for most of my base map data. OpenStreetMap extracts allow me to build interesting, complete and free base maps, and various open data portals offer data for mashing. My go-to data portals are:

OpenStreetMap - I am a minor contributor to OSM, and mainly use it as a database for urban Edmonton data. For instance, an ongoing project is to classify each building by type (apartment, commercial, etc.) in downtown Edmonton so that I can update my DTYEG map and create an accurate land use map of #yegdt.

Cartographica - I mainly use Cartographica as a desktop geocoder and quick-and-dirty data viz tool. I love how simple it is to dump data into the view window, and how quickly it renders large data sets. It is a light and easy way to quickly get a sense of a dataset, plus it has a 'live' map feed of OpenStreetMap or Bing. It can import or export KML, and complete some lightweight data analysis like heat maps.

QGIS - Where Cartographica is light, QGIS is robust. A free way to get a full GIS on your desktop and, because I run an iMac, the easiest way to do spatial analysis without loading a Windows VM (and much cheaper too, as in free). I love QGIS, but it requires a set of skills comparable to those used in ArcGIS. I am still building this skill set.

TileMill - TileMill is awesome. A super easy to use map style machine by MapBox, TileMill uses CartoCSS (Cartographic Cascading Style Sheets) to code the look of each point, line, polygon and raster within your map. It renders maps fast and beautiful, and exports them in a variety of formats, including MBTiles, which you can then load onto the MapBox site for a fully interactive map experience.

MapBox - MapBox provides two services that I find vital: (1) web hosting and (2) base maps that can be styled. I am not yet skilled enough to take the MBTiles and put them online myself, so I rely on a MapBox subscription to host my maps. I am also not yet skilled at dealing with huge data sets, so if I am working with a large geographic area I use MapBox's OSM-based base map, which can be customized. Also, MapBox provides some great satellite imagery as a base map, and an awesome blog on what is new in mapping.

Web mapping tools I am learning

PostGIS - I learned recently that the cool kids pronounce this as Post-jis, NOT Post G-I-S. Post-jis is hard and I don't really get it - it is an OSS project that adds support for geographic data within a PostgreSQL database. I have been working with a Refractions Research tutorial, and have been able to install PostgreSQL and enable PostGIS, but I am unfamiliar with SQL, so I find it hard even knowing how to compose a command. Lots to learn here. My PostGIS resources include:

CartoDB - I love how CartoDB makes temporal data come alive. Check out this map of '7 Years of Tornado Data', and how you can almost pick out the season by the amount of tornado activity. Apparently you can style the map in CartoCSS (which I can do), combine various data sets, and run SQL queries. Much to learn.

Leaflet JS - "Leaflet is a modern open-source JavaScript library for mobile-friendly interactive maps". It is the UI that enables the user to interact with the MBTiles: MBTiles > PHP Tile Server > Leaflet > User.

Web mapping tools I want to learn

Below I list a bunch of tools that I want to learn, but am still coming to grips with what they do and how they interact. For instance, I can picture the workflow of data from PostGIS to TileMill to MapBox, but I cannot picture the workflow from TileMill (MBTiles output) to PHP Tile Server, or the role that JavaScript and HTML play in the creation of a hosted map (ok, I kinda get it, but not in a concrete, I-have-done-this way). If I get anything wrong here (or anywhere in the post) please let me know - also, if I am missing a great resource, make a note of it in the comments.

PHP Tile Server - PHP Tile Server puts the MBTiles file onto the web, and acts as an interface between the MBTiles and a UI such as Leaflet JS or even Google Maps.

HTML - HTML provides the backbone of every website. Learning to code HTML would simply allow me to create and style the webpage that my MBTiles are displayed upon.

JavaScript - Like HTML, JS is a language that provides some sort of function to a website. Where HTML is static, JS is dynamic, allowing the user to interact with elements of the website. For instance, in a mapping context, JS would allow me to define a set of layers that a user could turn off and on to expose or hide specific types of data on a map. Plan Your Place has a great interactive flood map of Calgary that illustrates this function.

GeoJSON - A JSON derivative (as is TopoJSON) that encodes spatial data such as points, lines and polygons. In the web mapping context it is a much more powerful format than ESRI Shapefiles, as it is lighter (i.e. quicker) and can be integrated directly into the code of the map.
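
As a small illustration of that integration, here is a sketch of a GeoJSON feature living directly in map code; the feature itself is made up.

    // GeoJSON sketch: spatial features as plain JSON, rendered straight into Leaflet.
    var sample = {
        "type": "FeatureCollection",
        "features": [{
            "type": "Feature",
            "geometry": { "type": "Point", "coordinates": [-113.4909, 53.5444] },
            "properties": { "name": "Example feature" } // made-up sample data
        }]
    };

    L.geoJson(sample, {
        onEachFeature: function (feature, layer) {
            layer.bindPopup(feature.properties.name);
        }
    }).addTo(map); // 'map' is an existing L.map instance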

Resources

This is not a complete list - in fact it is barely a list.  Please add a comment to point out what I am missing.

  • Codecademy - A free on-line coding tutorial that is interactive and problem based. They offer tutorials for JavaScript, HTML and PHP, and help you learn how to build web projects. Very cool and free.
  • GitHub Learn GeoJson - GitHub is a place where programmers, especially those working in the OSS space, keep their code for others to download, use and improve upon. Learn GeoJson is made by Lyzi Diamond.
  • Maptime! - An awesome list of mapping resources by Alan McConchie (@almccon) and Matthew McKenna (@mpmckenna8).
  • Spatial Analysis On-line - As I try to remember my GIS courses, this is the on-line text that I reference to help me understand the analysis I want to run.
  • Mapschool - Tom MacWright wrote this as a crash course in mapping for developers.

Colour and Maps

These are the colour palette websites that I reference:

Finally, NASA has a great 6 part series on colour theory called the "Subtleties of Color".

Global Winds

A portrait of global winds. From NASA.

As high quality, high resolution satellite images become ubiquitous, maps such as NASA's global wind patterns will become more common. As you become used to looking at beautiful satellite base maps, don't forget just how special and rare these once were. An indication of the future of satellite imagery can be found on the MapBox blog (and another cool post on the MapBox blog).

Edmonton's Residential Development Permit Process is Broken

  The object in question - expanding the garage 1.5m to the north, and attaching it to the house.

The City of Edmonton's residential development application process is broken. I had a horrible experience that could have been much easier, less time consuming and a lot less costly. Here is my story, the issues I have with the process, and some thoughts on how to improve it.

The project

Our project was to add 10 square meters to the north portion of the garage, and in the process fill the 1.51m gap between the house and the garage.  A window on the south side of the house would have to be moved, as well as a so called 'man door' on the north side of the garage.  It's a relatively simple project, and I thought an easy ask of the City.  Because I have a background in geography (and could not find someone to do the drawings), and have done many to-scale drawings in the past, I decided to do the following myself, which I put together in a 13 page PDF:

  • A rationale and justification for the project
  • A site plan (from the RPR)
  • A floor plan
  • A south and west elevation

The PDF also included 12 photographs of the houses on the blocks to the east and west documenting that an attached garage was normal for this neighbourhood. I also addressed the concerns raised in the City of Edmonton's Mature Neighbourhood Overlay. Specifically:

  • A variance was identified in that the garage was within 3m of the property line (the variance already exists, and we would not change it by building on the north side of the garage);
  • The total footprint of the property was 2% over the allowed ratio (to address this, we offered to remove a small shed in the back that is considered part of the calculation);

The process

I met with a City of Edmonton Applications Officer. We had a nice discussion; he offered some advice that would help with my application, and assured me that all of the elements needed were present and adequate. I left, modified some of the documentation, and emailed him the PDF on 21 March. Then I waited. I emailed 3 times over the next three days to ensure that the applications officer got my file. He did. I emailed several more times, and called regarding the progress of my application - I even went so far as to call 311 to get the development officer's name and phone number. I called several times to no avail.

I did not hear back until 03 May, when I received an email that stated:

I have received your application back from the development officer. They can not make a decision on what has been provided. They would require scaled drawings of the house, floor plans, elevation drawings and a site plan showing setbacks from the property lines.

When I discussed the outcome with the applications officer, I asked if I could phone the development officer. I just wanted to talk with a decision maker to explain what I wanted to do. I was told that '...it was not a good idea...' as the development officer in question was not approachable. I felt like my application was being judged by criteria not mentioned anywhere (i.e. that my drawings were not done by a professional), and that there was no one with whom I could talk about it.

Needless to say, I was disappointed for a few reasons - (1) I was misled into thinking that my application was adequate when it was not, (2) I was not able to talk with the decision maker - I planned on re-applying and wanted to understand the requirements, and (3) I felt like this was a closed and adversarial process.

Over the next weeks I was able to find a person to make the drawings, and was able to reapply with professional, stamped drawings and a block face plan for my street.  I was turned down again, so I immediately appealed and took my development application to the development appeals board.

The appeal

This was an interesting process for a few reasons:

  • The Development Appeal Board is independent from the City of Edmonton, with board members being appointed from the public.
  • Their process and requirements are well documented and were discussed with me at great length by a planner who works with the appeals board.
  • They provided a checklist of requirements, a timeline with the expected date of the hearing, and a phone number to call if I had any questions! This is in stark contrast to my previous experience.
  • They provided a deadline for presentation materials, and what they should consist of.
  • They also suggested that I talk to my neighbors within 60m of the property to see if they had any issues.  I did, and my neighbors had no issues.

In fact, 2/3 of my neighbors have had a similar experience with the city's development process, and signed my sheet on principle. They pitied me for having to go through this 'ridiculous' process, and it worked to my benefit.

I presented to the appeals board, answered a few questions and waited for the answer. For the first time since my first discussion with an applications officer 7 months prior, I felt like I was talking to the right person, like I was being heard and that I would be given a reasonable answer. 20 minutes after my presentation I was told that I would receive a development permit in the mail.

Recommendations and conclusions

In reflecting on the process, some weeks after we received the development approval, I think the most pertinent issues relate to transparency of process, including communication, and providing process support.  My recommendations:

  1. The City of Edmonton should have a step-by-step guide to residential development applications, from where to get drawings done, to all of the official and unofficial criteria for an application. This process should be open to all home owners, not just those who hire professionals to do the drawings and project manage the process. A citizen with a scaled and clear drawing on graph paper should be treated equally to those with professional drawings and a contractor well versed in the process.
  2. Assign a main contact within the development application department who can address any questions related to an application.
  3. Allow the applicant to talk to the decision maker.
  4. In this role, the City is providing a service while enforcing / upholding a set of by-laws.  The application and development officers should have adequate communication and collaboration training.  The application process SHOULD NOT be adversarial.

 

Dot Map of the USA

The Weldon Cooper Center at the University of Virginia put together one of the most impressive maps that I have seen in a while this past June. The Racial Dot Map is:

an American snapshot; it provides an accessible visualization of geographic distribution, population density, and racial diversity of the American people in every neighborhood in the entire country. The map displays 308,745,538 dots, one for each person residing in the United States at the location they were counted during the 2010 Census. Each dot is color-coded by the individual's race and ethnicity.

Racial Dot Map Full Extent

There are a couple of things that I think are very interesting about the map. The methods used are pretty cool: the creator, Dustin Cable, coded the map using Python and a whole set of skills that I do not possess. It highlights the insight that can be gained when skill sets are combined - for instance, I am certain that an urban planner would have a deeper understanding of the extent of racial segregation in a city like Boston (please let me know if I am out to lunch here). This insight is accomplished using a very minimal design (you can toggle names off/on) that is both effective and beautiful. Remember, all of the distinguishable features on this map are a result of dots, with each dot representing a person. Wow.

Racial Dot Map detail of Boston

Finally, the map was inspired by a similar and equally remarkable Population Map of North America created by MIT Media Lab researcher Brandon Martin-Anderson. The NA Population map contains 454 064 098 points - a very big data set!

Dot Map: Population of North America

The complete description of the Racial Dot Map can be found at the Cooper Center's site. 

Foursquare Checkin Video

Foursquare check-ins show the pulse of San Francisco from Foursquare on Vimeo.

I love how this video shows the pulse, the flow of life, of how San Francisco moves and beats over the course of a day. I cannot help but think that these data should be made more freely available (they are currently only available to Foursquare), and that they would be very useful for city planners seeking to understand how 'hot spots' of activity work.

Live Maps

I love the look and feel of live maps that show 'real time' data as it unfolds in the world around us. Two of my favorites - an older wind map, and a newer births and deaths map - are shown below.

Wind speeds are estimates based on data accessed within the hour.

Check out the map - Wind Map

This map's data is accurate (likely) to within a year.

Check out the map - Births and Deaths

The problems with these maps are many - from a cartographic perspective, they do not provide a level of detail that makes them useful. For instance, you could not plan on a weather or wind outcome from the wind map simply because the scale is not adequate to pinpoint where you are relative to the wind.

The births and deaths map likely takes an annual number of births and deaths for each country and then scales that number over a year.  If you ran the simulation for a year, you would see an 'accurate' annual rate for each country, that is reflective of the past.  It is not real time data as the creators have no way of knowing, for certain, that the births and deaths are in fact occurring.
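
If that guess is right, the mechanics are easy to sketch; the rate below is illustrative only.

    // Sketch of how a 'live' map can be driven by an annual total.
    // Illustrative number, not taken from the actual map.
    var annualBirths = 370000; // hypothetical country total
    var msPerBirth = (365 * 24 * 3600 * 1000) / annualBirths; // roughly 85 seconds

    setInterval(function () {
        console.log('simulated birth'); // the map would plot a dot here instead
    }, msPerBirth);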

The power of these maps, despite their drawbacks, is huge. The wind map, for instance, was mesmerizing as Hurricane Sandy made landfall on the Eastern Seaboard. I didn't care that the data was an hour old because the visual impact of the map was profound - it gave me a glimpse into the power of the hurricane. As with the wind map, seeing 'real time' births and deaths puts a scale to an annual number that I might read in the paper. To see all of those births (4.2 per second) and deaths (1.8 per second) occurring in Africa and SE Asia, and relatively few in N America and Europe, provided some insight, still abstract but a little less so, into our global humanity.

yegdt: Land use in downtown Edmonton - preliminary analysis

Edmonton's downtown is a growing and changing entity.  In recent memory Edmonton's core has gone from being mostly dead, to having a few sections of vibrant life with interesting restaurants, bars and a wide range of events.  On an infrastructure level, the core is shifting away from surface lots and empty buildings to new developments including high rise buildings such as the Epcor Tower and the Icons on 104th (thanks in part to the muni being shut down). This gives rise to the question of how downtown land is being used.  So I, of course, made a map.

The area of this map is defined by the City of Edmonton's Downtown Neighbourhood Boundary. The black polygons on this map represent surface parking lots (~15% of the downtown area). The surface lots, in conjunction with roadways (~12% of the downtown area), represent about 27% of the surface area of the downtown core. Buildings occupy 25% and park space 6%. I was astonished to learn that there are over 2800 trees in the downtown, of over 25 species (tree data from data.edmonton.ca). I captured each one. Rather than document each tree in a huge legend, I created a teaser that allows you to mouse over each point to discover the trees in your neighbourhood.

Some things of note about the map: a full 27% of the downtown core is devoted to cars, and this data does not include parking garages or underground parking. More land is dedicated to vehicles than to building footprints, and only 6% to park space. Wow.

There are two main sources of data for this project. One is from an article about parking in Downtown Edmonton published on Spacing Edmonton.  Kyle Witiw, the author, was kind enough to share his hand-drawn parking data with me.  I then downloaded the Edmonton OpenStreetMap data from an OSM metro extracts site, and teased out the various buildings in the core, compared and corrected the OSM parking data with Kyle's data, and mapped it using QGIS + TileMill and MapBox. Kyle pointed out a couple of problems with my data, including:

The surface parking that used to exist on the NW corner of 104 street and 102 ave no longer exists - the Fox One condo development is currently going up on that site.

I have not yet corrected that.

There are some additional issues with these data. The road data is likely an underestimate, as I measured the length of the roads bounded by the neighbourhood boundary, and then multiplied each length by the number of lanes (2) and the width per lane (3m) to get the total area. I did not account for parking lanes or turning lanes, so I think that I have underestimated the total surface area of roadways. Also, several park spaces are not in the data sets - for instance, the area west of Canterra Centre at Jasper and 109 is poorly documented and does not include the park that contains the walking/biking trail. Also near Canterra, Save-On-Foods is missing, as are the condos to the west. The addition of these residential and commercial footprints would slightly increase the overall built-up area in the core.
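
For clarity, that road-area estimate reduces to a simple calculation; the length below is illustrative, not my actual measurement.

    // Road-area estimate: length x lanes x lane width.
    var roadLengthMetres = 40000; // hypothetical total centreline length downtown
    var lanes = 2;                // parking and turning lanes not counted,
    var laneWidthMetres = 3;      // so the result underestimates the true area

    var roadAreaSqM = roadLengthMetres * lanes * laneWidthMetres;
    console.log((roadAreaSqM / 1e6) + ' square km of roadway'); // 0.24 square km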

In addition, the blue buildings have been classified as 'mixed use'. The OpenStreetMap data (the basis for the building and park data) does not define the use of the blue buildings. I plan on updating the OSM data so that I can, in the future, produce a better map (I will also update the OSM data with Kyle's suggestion above). In summary, the data needs work, and the best place to make data improvements is in the OSM database.

Please feel free to let me know what you think.

In The News

I was recently asked to make some maps with data from the Government of Alberta's new Open Data Portal. Here is a video clip that made the local CTV News. A blog post that addresses my experience making the maps and my concerns regarding the portal can be found here. During an open data discussion with various stakeholders and Minister Clement, I said that an effective open data portal is one which causes discomfort for the government hosting it. I don't feel that this GOA data portal is at all challenging, as there are no difficult datasets to be found. For instance, there was a recent slurry pond leak from a coal mine near Obed, AB, that released 'toxins' into the Athabasca River. The data associated with that release, which clearly makes the Government of Alberta uncomfortable judging by their media response, should be on their open data portal. Sadly, the video has no sound. I am working to fix this.

ctv_interview from Matthew Dance on Vimeo.

Alberta Road Map

I was intrigued by the American Road Map that made the rounds on Facebook and Twitter last week, and wondered how hard it would be to make one for Alberta.  Not too hard!  Data from Open Street Map.  GIS from QGIS and Mapbox. Full size map can be found on the Mapbox site.

[mapbox layers='mattdance.Alberta_Roads' api='' options='' lat='53.5472' lon='-113.55669999999999' z='8' width='700' height='600']

Toronto Tweets

There is an emerging narrative that characterizes Twitter as a tool of public engagement that can augment more traditional means of consulting with the public (for instance, see here for Environment Canada's commitment to "...implementing more avenues to facilitate online connections with partners, stakeholders and interested members of the Canadian public"). While there is no doubt that many, many people use Twitter (it is reported that there are over 200 million active Twitter accounts) as a means of conversing with their elected officials, it is important to remember that Twitter does not include all voices within our Canadian cities. This blog post is an attempt to understand who might, in fact, be Tweeting within Toronto, in an effort to understand who might be Tweeting about Toronto.

About this map

A year of geolocated tweets, representing about 5% of the total number of tweets for 2011

[mapbox layers='mattdance.toronto_tweets' api='' options='' lat='43.65969596299056' lon='-79.38002295227051' z='15' width='600' height='400']

Above are Twitter maps of Toronto that represent all of the geolocated tweets for Toronto in 2011, about 1.5% of the total Tweets (in other words, 98.5% of Tweets are not location enabled). The Twitter data were provided by Trendsmap through John Barratt (thank you!). A full sized version of the dynamic map can be found on the MapBox site.

This was a challenging data set to work with, as it is so large. I started in QGIS to understand how the data looked and to pare it down to just tweets within the GTA. I then moved the data into TileMill by MapBox, and layered OpenStreetMap data to provide visual context for the tweets. The 'heat map' effect that I used is described here, as I was not able to make the QGIS heat map plugin work for some reason (please let me know if you can help with this). I plan on learning how to build a PostGIS database on my computer so that I can do this.

The idea for the map and post came from these three beautiful and interactive twitter maps: (1) London, (2) New York and, (3) Melbourne.

Observations and analysis

A closer look at the map reveals very dense Twitter areas and areas that are very sparsely Tweeted. The most densely Tweeted area is bounded by Bloor Street to the North and Lake Ontario to the South, connected by Yonge Street. There is a greater density along the Lake, away from Yonge to the West. There are also a small number of ghost Tweets on the lake North of Toronto Island.

Area of Tweet density - Bloor to Lake along Yonge.

These boundaries are visible in the above image. There are also a couple of identifiable hot spots - the Eaton Centre and Rogers Centre (cut off in the above picture). The area described by this Twitter density also corresponds with tourist and suburban destinations - the areas around Yonge, Bloor and Front Street, including the sports stadiums, are not just neighbourhood destinations, but destinations for those interested in shopping or taking in the sights in Toronto.

In contrast, those areas that are strictly neighbourhoods, such as Hillsdale Avenue (running east from Yonge), do not offer that same density.

The above example shows a middle class neighbourhood within Toronto that does not have a large number of Tweets, other than the cluster at the corner of Yonge Street and Eglinton Avenue in the top left of the image. The Mt Pleasant Cemetery bounds the neighbourhood to the South.

Hillsdale and Forman, Toronto

In addition, poorer neighbourhoods also seem to have a dearth of tweets.  The following image is of the Regent Park area between Dundas and Gerrard.

Regent Park to the East; Yonge and Dundas Square, with a hot spot of Tweets, to the West.

It is clear from the image that many people are Tweeting on Yonge Street. You can even see a hotspot in the Eaton Centre and at the southeast corner of Yonge and Dundas. Further east, nothing. From the Regent Park Wikipedia page:

The average income for Regent Park residents is approximately half the average for other Torontonians. A majority of families in Regent Park are classified as low-income, with 68% of the population living below Statistics Canada's Low-Income Cut-Off Rate in one of its census tracts, and 76% in the other (compared to a Toronto-wide average of just over 20%).

Conclusions

I suspect that most of the Tweets that occur in Toronto are from those who live in the region, but who may be downtown for some shopping, to take in a game, or other recreation. I also suspect that those who live in Toronto account for only a small portion of the overall Tweets in the area between the Lake and Bloor, adjacent to Yonge.

As you move from this area, I suspect that a greater portion of Tweets are made by residents of those neighbourhoods, simply because fewer 'tourists' would travel to these neighbourhoods unless there was an attraction, such as shopping or food. Although I am only exploring the ~1.5% of tweets that are geolocated, I feel that these are the Twitter users who are most likely to engage with an Open311 type of application, to use their smart phones as a means of communicating location details to their municipality. If this is the case, then those poor areas of the city, potentially the most disenfranchised, will become more so (look at Mark Graham's work mapping the digital divide in Francophone Africa).

 

Where do accidents happen?

There is currently some debate in Edmonton regarding bike lanes, and the inconvenience that they will cause drivers. In particular, Mayor Mandel was quoted as saying:

Not that they’re not a good idea, but it just seems someone behind your scenes out there has just decided we’re going to eliminate all vehicles and only have bikes.

In response, I wanted to look at the number of interactions that cars have with bikes or pedestrians in Edmonton. The data.edmonton.ca portal had the appropriate data from 2010 - intersection collisions here, and midblock collisions here. I sorted the data for pedestrian and bicycle collisions with vehicles, and I included all data for bikes and pedestrians, regardless of who was deemed to be at fault. I loaded the CSV data into Cartographica, a lightweight Mac based desktop GIS, and used OpenStreetMap as the base layer for Edmonton. The results are below, captured as an image (I will work with someone more capable than me to check the data and to get it on-line in an interactive format). Legend: red/yellow diamonds are intersection collisions, blue diamonds are midblock collisions. The number adjacent to each diamond represents an aggregate of pedestrian/bicycle interactions with motor vehicles.
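
For anyone repeating this, the sorting step amounts to a simple filter on the collision type field; the column name and values below are assumptions about the portal's CSV layout, not its actual headers.

    // Sketch of the sorting step: keep only pedestrian and bicycle collisions.
    // 'collision_type' and its values are assumed, not the portal's real headers.
    function filterActiveModeCollisions(rows) {
        return rows.filter(function (row) {
            return row.collision_type === 'Pedestrian' ||
                   row.collision_type === 'Bicycle';
        });
    }
    // 'rows' would come from parsing the downloaded CSV, one object per record.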

Where accidents happen between cars and other transport modes in Edmonton.

 

Some things to note:

  • More collisions happen at intersections (90 in total) than midblock (22 in total).
  • The most dangerous intersection in 2010 was Fort Road and 66 Street, with 4 collisions.
  • The most dangerous stretches of road were Gateway BLVD north of 51 Ave, 109 Street north of Whyte, and Calgary Trail north of 34th Ave.
  • Midblock collisions saw 18 pedestrians and 4 cyclists injured.
  • Intersections saw 65 pedestrians and 27 cyclists injured.

Conclusions

There is a greater safety issue at intersections, where advanced pedestrian and cyclist activated lights should be installed in conjunction with bike lanes. Furthermore, if the bike lanes are being considered for safety reasons, residential street speeds should be lowered from 50 km/hr to 30 km/hr. There is convincing data that cars travelling slower inflict less damage than those travelling faster - see this WHO report [PDF].

Finally, if the City of Edmonton were serious here, they could conduct an interesting study to track accident rates with cyclists and pedestrians given the introduction of bike lanes, intersection controls and lower residential speed limits.

Thesis Musings

The Point

I just completed my MA in Human Geography. My research pertained to people's understanding of location and their communication of that understanding through emerging location-based technologies. My entire thesis can be accessed here (be warned, it is a large PDF via Google Drive).

In an effort at transparency, the next series of blog posts will examine some of the data that I collected pertaining to place and space, and the communication of that understanding through the tools of neogeography. These posts will draw upon two sources of data: mental maps that my research cohort drew, in conjunction with descriptions of the places represented in those mental maps. I will start by framing the discussion with place, space and mapping concepts.

Please note: The University of Alberta’s Arts, Science, and Law Research Ethics Board granted permission for me to post sections of interview transcripts to personal blogs, as did each informant.  Any names that are referenced are pseudonyms.

Place and Space

The notion of 'place' is a common descriptor of the world, and is a central theme in the study of geography (Relph 1976) and other social sciences such as sociology and psychology (Gieryn 2000). These various disciplines characterize and describe place in a variety of ways (Harrison 1996); for the purpose of this study, place is defined as comprising three dimensions (after Relph 1976):

  1. The observable activities that occur in relation to the place;
  2. The meanings that are created by a person in that location; and
  3. The physical features that comprise the location's concrete or tangible attributes.

A place is comprised of its physical characteristics, the activities that occur there and the meanings derived thereof  (Devine-Wright 1997). The more familiar a person is with a place, through experience (for instance, through recreational or sporting activities), the greater the meaning that place is likely to have (Lynch 1995).

Maps

Maps are a common metaphor used to describe place (Zook 2007), where simple points on the map represent a much more complex reality. Lynch (1960) views place as a series of connected locations where individuals mentally organize their spatial environment in predictable ways around five elements (see below). Lynch's view of place implies that linkages exist between places based on a person's experience with those locations, creating a tapestry of meaning imposed upon an urban landscape. Similarly, Tuan (1977) differentiates place from space based on the familiarity a person might have with the former; as a space becomes more familiar, more intimately known, it is transformed into a place. 'If space is movement, then place is pause' (Tuan 1977, p. 6).

Maps are one tangible way of conceptualizing and representing place and space. Modern maps, such as atlases and road maps, are mass-produced for a consumer market, and are the result of painstaking work done by experts in the fields of cartography, air photo interpretation, statistics, and other disciplines. One of the goals of a modern map is to communicate an objective representation of place that is accurate, useful and that conveys a sense of that place (Taylor 2009). For instance, a map of a city may convey a sense of place by rendering a collection of place names (buildings, roads, plazas, etc.) overlaid on a road network, on top of topographic features.

While a map-reader may get a sense of that place via those representations, the map does not convey the deeper understanding of place that comes from the everyday experience, meanings and associations that a resident may possess. In fact, maps may distort local knowledge by misrepresenting the meaning of place as reflected by, for instance, place names (Frank 2000). In this instance, expert knowledge of map-making is not enough to produce maps that convey an accurate sense of place. The missing ingredient is the experience of a place that can only be gleaned by visiting, or perhaps by interacting with someone who has intimate knowledge of it.

One outcome of this central and authoritative communication of place, via consumer maps, is that citizens are relegated to the role of consumers to be consulted by experts (i.e. urban planners, academics) in their quest to understand what a place means to people. Tools such as cognitive mapping (Kitchin 1994) and mental maps (Lynch 1960) are traditionally utilized to mine these location data from individuals. Tversky (1993) defines cognitive mapping as the process of mentally acquiring, storing, recalling, and decoding metric information relative to location. Mental mapping is the non-metric capture of spatial relations among elements, allowing reorientation, spatial inference and perspective taking (Downs 1977). The importance of these concepts is that they rely on an individual's non-expert, or lay, understanding of space. Indeed, the academic notion of Naïve Geography (Egenhofer 1995) is predicated on a 'common sense' understanding of geography, where the focus is largely on the non-expert.

The descriptions of place gleaned from the cognitive, mental or naïve are the result of an individual process of understanding. While there are shared map elements between people (e.g. most people will recognize a 'cross' as being the location of a church on a map), these shared elements do not represent a shared experience or the basis of a common understanding within a community. Within this context, there are a variety of ways that an individual or community can understand place, maps being one.

References

Devine-Wright, P. and Lyons, E.: 1997, Remembering Pasts and Representing Places: The Construction of National Identities in Ireland, Journal of Environmental Psychology 17, 33–45.

Downs, R. M. and Stea, D.: 1977, Maps in Minds: Reflections on Cognitive Mapping, Harper and Row.

Egenhofer, M. J. and Mark, D. M.: 1995, Naive Geography, in A. Frank and W. Kuhn (eds), Spatial Information Theory: A Theoretical Basis for GIS, International Conference COSIT ’95, Vol. 988 of Lecture Notes in Computer Science, Springer, Semmering, Austria, pp. 1–15.

Gieryn, T.: 2000, A Space for Place in Sociology, Annual Review of Sociology 26, 463–496.

Harrison, S. and Dourish, P.: 1996, Re-place-ing Space: The Roles of Place and Space in Collaborative Systems, Proceedings of the 1996 ACM conference on Computer Supported Cooperative Work, CSCW ’96, ACM, New York, NY, USA, pp. 67–76.

Kitchin, R. M.: 1994, Cognitive Maps: What They Are and Why Study Them?, Journal of Environmental Psychology 14(1), 1–19.

Lynch, K.: 1960, The Image of the City, The MIT Press.

Lynch, K., Banerjee, T. and Southworth, M.: 1995, City Sense and City Design: Writings and Projects of Kevin Lynch, MIT Press.

Relph, E.: 1976, Place and Placelessness, Pion.

Tuan, Y.: 1977, Space and Place: The Perspective of Experience, University of Minnesota Press.

Tversky, B.: 1993, Cognitive Maps, Cognitive Collages, and Spatial Mental Models, in A. U. Frank and I. Campari (eds), Spatial Information Theory: A Theoretical Basis for GIS, Proceedings COSIT '93, Lecture Notes in Computer Science, Springer, Berlin, pp. 14–24.

Zook, M. A. and Graham, M.: 2007, Mapping Digiplace: Geocoded Internet Data and the Representation of Place, Environment and Planning B 34(3), 466–482.