Drew McLachlan
The Navigator

The Shapers have landed; the war has begun. The Resistance aims to protect humanity from these unknown alien invaders, while the Enlightened, believing the Shapers to be benign, work to foster an alliance with them and uplift our species. Armed only with smartphones, the rival factions duke it out on the streets of our city, claiming portals of power.

Landmarks like the Frank Ney statue, the steampunk orca, and the Nanaimo Museum must all be scanned so they can be brought under the influence of the appropriate team. An intel video can be viewed online, briefing potential recruits on the secret war. “This world around you is not what it seems,” a grizzled veteran warns. “Our future is at stake, and you must choose a side.”

Games like Google’s Ingress have been made possible by augmented reality (AR) technology. AR, simply put, is any program or device that alters or supplements your perception of the outside world. This can include both visual and audio augmentations, ranging from something as simple as scores displayed on screen during a sporting event to a wearable headset that overlays GPS coordinates on your surroundings. AR is now enjoying a renaissance following the advent of the smartphone.

The accessibility and usability of AR programs have drastically increased—their days of being chained to powerful PCs have ended as the HD screens and multicore processors sitting snug in our pockets have liberated them.

AR hasn’t only benefited games, of course. The technology is also being put to work in marketing, medicine, accessibility, education, and more.

“The potential is huge and growing every day for [AR] to be integrated into the classroom,” says Avi Luxenburg, an educator and former VIU instructor who focuses on technology integration.

“Education is probably the area that’s at the tail end of the movement for AR. It’s happening, but it’s happening quite slowly. There’s a publication that comes out every year, called the NMC Horizon Report, and it basically talks about emerging technology in education. For years, the report has put augmented reality down as an emerging technology.

“When a technology is 20 per cent saturated into whatever field it is (education, marketing, etc.), it’s no longer an emerging technology. So you have to consider that they keep putting AR in as emerging: it never quite reaches the 20 per cent saturation needed to be considered ubiquitous in education. It’s still far from it.”

One program that has gained clout with educators is Aurasma. The app was used by an elementary school in Highland Village, Texas last year in a project to create a garden representing the ten ecological zones of the state. With Aurasma, students were able to view the garden through the lens of a tablet, which would prompt informative videos when pointed at certain plants or pictures.

The project was initiated as an alternative to field trips after the school was dealt a budget cut. The heritage garden ended up costing the school around $50. Aurasma has also been used by the University of British Columbia to embed videos and other rich media content into their viewbook, and by the University of Illinois to add similar content to their team’s baseball cards.

“They’re becoming quite popular with the educators who push the boundaries,” says Luxenburg. “You have to remember that education is the last bastion of the industrial revolution. It’s a Neanderthal system, and quite often it doesn’t have a lot to do with learning—it’s not necessarily built around the way we naturally learn. There are educators who push the boundaries doing things with technology like augmented reality, but they are few and far between. Maybe one in every 20 to 25 teachers is trying to push boundaries in terms of technology integration.”

While AR may not be on the curriculum of many classes, plenty of “edutainment” apps are taking advantage of the technology. Guinness World Records released a companion app for its 2014 book that lets readers view a 3D model of the world’s largest dinosaur or photograph themselves alongside the world’s shortest woman. Another app, SkyView, lets amateur astronomers point their phone or tablet at the night sky while it connects the dots between the stars and serves up facts about constellations and planets.

The complexity of AR technology has travelled light years since its inception. The first AR device, “The Sword of Damocles,” was a head-mounted display system developed in 1968 by renowned tech pioneer Ivan Sutherland and his student, Bob Sproull. The device consisted of a thick black bar sporting monitors and cameras that covered the user’s eyes. Three straps secured it to the head, while two black poles with long cords attached, which the user would hold onto, protruded near the ears.

Due to the weight of the device, it was attached to the ceiling by a long metal arm that both alleviated pressure on the user’s neck and tracked head movements. Through the display, the user saw whatever was directly in front of them, only with wireframe cubes or pyramids floating and rotating in space. While it may have looked like something out of a science fiction (or S&M) film and had little practical use, “The Sword of Damocles” spread the silicon curtain for computer scientists and engineers.

“The Ultimate Display,” an essay penned by Sutherland in 1965, introduced the masses to the idea of augmented reality technology. “We live in a physical world whose properties we have come to know well through long familiarity,” wrote Sutherland. “We sense an involvement with this physical world which gives us the ability to predict its properties well. For example, we can predict where objects will fall, how well-known shapes look from other angles, and how much force is required to push objects against friction.

“We lack corresponding familiarity with the forces on charged particles, forces in non-uniform fields, the effects of nonprojective geometric transformations, and high-inertia, low-friction motion. A display connected to a digital computer gives us a chance to gain familiarity with concepts not realizable in the physical world. It is a looking glass into a mathematical wonderland.”

But how deep does the rabbit hole go? Perhaps the biggest use of AR, or at least the one that has garnered the most mainstream attention, is the upcoming Google Glass. While still in the testing phase, the consumer version of Glass is set to launch next year with a price tag of $300–500.

Glass is essentially a smartphone in the form of a pair of eyeglasses. It uses a microphone, camera, and projector to give users many of the tools found on an Android phone or iPhone, only hands-free. Users will be able to take photos, live stream footage of what they see, overlay GPS directions on the road as they drive, and translate phrases into other languages, all with either a spoken command or a tap on the touch sensor installed in the side of the frame.

Google Glass has been the talk of the tech world since its announcement in 2012. Publications like Wired, Techradar, and Ars Technica have written extensively about what Glass will do and how it works, and have even published reviews by staffers who managed to get their hands on the pricey and exclusive developer models. Time Magazine named Google Glass one of the best inventions of 2012, and John Naughton has praised it in The Guardian.

“What endears the Google Glass project to me is that it’s the latest instalment in a long and honourable tradition in computer science,” writes Naughton. “It goes all the way back to one of the great luminaries of the business, Douglas Engelbart, the man who invented the computer mouse. What motivated Engelbart from the outset was a passionate belief that computers had the power to augment, rather than replace, human capabilities. Machines, he believed, should do what machines do best, thereby freeing up humans to do what they do best. And this idea of ‘augmentation’ has inspired a good deal of research in the decades since Engelbart embarked on his mission to change the world.”

Many criticisms have cropped up as well, mostly centred on how Google Glass will affect our privacy. Japan’s National Institute of Informatics is developing “anti-glasses”: goggles equipped with 11 LED lights designed to blind the cameras on Google Glass and similar devices, so as to prevent unwanted photography and facial recognition.

Wired also published a piece by David Kravets and Roberto Baldwin that was highly critical of the terms of service attached to Google Glass. The terms basically state that Google owns your Glass even after purchase, and will deactivate it if you sell, loan, or give away your device. The most vocal resistance, though, has come from Stop the Cyborgs, an online group concerned with the constant corporate surveillance and “extended body control” that devices like Google Glass could lead to. The group urges the wary to ban such devices from their property, become active in a citizen privacy group, ask users to remove their devices, and contact their local MP.

AR technology comes in many forms, but is it all benign? When Google Glass and its competitors land in 2014, will you resist the invasion? Or, like the Enlightened, will you embrace the technology, certain that it will be our uplifting?