Turing envisioned a scenario in which a human and a machine converse (in text only) with a third-party evaluator, who would then attempt to discern which was which. The thought was that a machine could be called intelligent if it could fool a human into thinking that it, too, was human. The Turing test, which is also known as “the imitation game,” was claimed to have been passed for the very first time in 2014 by a program called Eugene Goostman, which successfully simulated a 13-year-old boy (within a time limit).

Based on the results, it seems possible to feign intelligence, so the ensuing question is whether it’s possible to actually create intelligence, and whether humans would be able to recognize it if they saw it.

The umbrella term for these technological developments is artificial intelligence (AI), which usually conjures up thoughts of Data from Star Trek, the Terminator and the lifelike robots in HBO’s Westworld. As defined by A.J. Juliani in The Beginner’s Guide to Artificial Intelligence for Educators, “In computer science, an ideal ‘intelligent’ machine is a flexible, rational agent that perceives its environment and takes actions that maximize its chance of success at some goal.”

AI is more common than may be thought, says Ray DePaul, director of Mount Royal’s Institute for Innovation and Entrepreneurship. With an educational background in math, computer science and business, DePaul spent five years with Research In Motion (now BlackBerry).

“The grammar checker in Microsoft Word was once considered AI. Facial recognition in your phone, Google Now, Siri, those little speakers that sit in your home and you can call out any question and they will answer, that’s all AI.

“AI is everywhere, but we tend to only consider the term in futuristic things rather than what has already become commonplace,” he says.

Programmers — or coders — and mathematicians are the minds behind it all. The application of humanity, or human actions, to software is steadily narrowing the gap between the realm of people (the clever manipulator) and that of the machine (the obedient worker). Developments in the field are happening at a rapid pace, and as the globe continues to automate, humans are facing some of the most important philosophical and legal questions of our time.


Variants of AI

AI is currently being used for high-tech security options such as facial, handwriting and speech recognition; in the health care industry to help doctors better understand pain and offer diagnoses faster; to create highly realistic video games; to predict what will happen in real life situations such as a natural disaster; and, in a wide assortment of GPS devices (nobody can really say they’re lost anymore). In June of last year, Google introduced a lifelike “robot dog” that can clean houses. Robot mail and package deliveries are about to hit U.S. streets. The European Union recently voted to propose granting robots legal status and to categorize them as “electronic persons” so they could be held “responsible for acts or omissions.”

“The biggest tech companies in the world are all investing heavily in artificial intelligence,” says Alan Fedoruk, PhD and chair of the Department of Mathematics and Computing at Mount Royal. “And it’s all being made possible by the intense amount of computing power now available.”

According to Professor Charles Hepler, MRU’s computer science coordinator, “Processing abilities have been doubling every couple of years for the past 60 years or so, and have increased more than a billion-fold in total.

“Single processors now run pretty much at their peak speed, but we are still improving on computing power (for the time being) by running several processors at the same time,” says Hepler. “Most desktop computers now have at least four processors, and the average car dozens.”
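Hepler’s two figures line up with a little arithmetic: doubling roughly every two years for 60 years means about 30 doublings, and 2^30 is just over a billion. A quick sketch of the calculation (illustrative only, not his math):

```python
# Back-of-the-envelope check of Hepler's figures (an illustration, not his calculation):
# doubling roughly every 2 years over 60 years gives about 30 doublings.
years = 60
years_per_doubling = 2
doublings = years // years_per_doubling        # 30
growth = 2 ** doublings                        # 1,073,741,824
print(f"{doublings} doublings -> about {growth:,}x faster")  # just over a billion-fold
```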

As processors are able to manage extremely complicated algorithms and plow through millions of lines of code faster and faster, more and more intricate problems are being solved by machines. Jordan Kidney, professor in the Department of Mathematics and Computing, studies multi-agent systems and emergent computing and is an expert in AI.

“An agent is a program that knows a little bit more than usual and is able to react,” says Kidney. “A multi-agent system is the idea of multiples of these agents working together so you have co-operation, not competition. They co-ordinate and deal with unsure information to solve a problem better.”

Multi-agent systems have been designed in which agents go in after a disaster, such as an earthquake where there is massive structural damage, and calculate where there are most likely to be trapped victims needing to be rescued. In these cases, the programmer or user is not performing the task; the agents are. But while the program may seem to be acting independently, “Is it really intelligent, or does it just seem that way? And does it matter?” asks Fedoruk.
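To get a feel for the idea, here is a minimal sketch of co-operating agents — a toy with made-up numbers, not the rescue software described above. Each agent holds an uncertain reading of where a trapped person might be, and pooling the readings gives a better answer than any single agent could.

```python
import random

# Toy sketch of co-operating agents (illustrative only). Each "agent" has a
# noisy estimate of where a trapped person might be; combining the estimates
# beats relying on any one agent's reading.

TRUE_LOCATION = 42.0  # hypothetical position along a collapsed corridor (metres)

class Agent:
    def __init__(self):
        # Each agent's sensor reading is uncertain.
        self.estimate = TRUE_LOCATION + random.gauss(0, 5)

    def report(self):
        return self.estimate

agents = [Agent() for _ in range(25)]
pooled = sum(a.report() for a in agents) / len(agents)
print(f"Pooled estimate from 25 agents: {pooled:.1f} m (true: {TRUE_LOCATION} m)")
```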

Kidney says that emergent computing is akin to ants in a colony. “When you look at just one, you can’t really see what’s going on. It’s following its ‘own rules’ without global communication with all other ants. But when looking at a colony of ants as a whole, each is following their own rules but unexpected global results come out of this, such as ‘fast gathering’ of food sources and finding the shortest path to move.

Researchers recognized this emergent pattern and have created computer algorithms to duplicate the behaviour and apply it to solve different problems.”
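The ant-colony behaviour Kidney describes can be imitated in a few lines. The following is a toy simulation (an assumed set-up, not his research code): each simulated ant picks between a short and a long route based only on pheromone levels, yet the colony as a whole settles on the shorter one.

```python
import random

# Toy pheromone simulation (illustrative only): ants choose between a short and
# a long route purely by how much pheromone each has accumulated. Shorter trips
# finish sooner, so the short route is reinforced more strongly per ant, and the
# colony "discovers" it without any single ant seeing the overall picture.
LENGTHS = {"short": 1.0, "long": 2.0}
pheromone = {"short": 1.0, "long": 1.0}
EVAPORATION = 0.05

for step in range(200):
    for _ in range(10):  # ten ants set out at each step
        total = pheromone["short"] + pheromone["long"]
        route = "short" if random.random() < pheromone["short"] / total else "long"
        pheromone[route] += 1.0 / LENGTHS[route]  # stronger reinforcement for short trips
    for r in pheromone:
        pheromone[r] *= (1 - EVAPORATION)  # pheromone slowly evaporates

share = pheromone["short"] / sum(pheromone.values())
print(f"Share of pheromone on the short route: {share:.0%}")
```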


[Illustration: Jordan Kidney’s face as a machine being built by tiny illustrated people]


How do you measure intelligence?

An early computer scientist named Joseph Weizenbaum created a program in the ’60s called Eliza, the “original chatbot.” Designed to mimic a therapist, Eliza appeared to understand and converse with the user.

“You would say, ‘Hello, I’m feeling depressed,’ and it would say, ‘Why?’ or ‘Tell me more,’” Fedoruk says. Once the user replied, the chatbot would add a few words and respond with another question. As long as you “coloured within the lines,” it seemed as if there was really someone there who cared about your answers, says Fedoruk.

“It’s a program that any of our first-year computing students could write,” he says. “There’s nothing to it. There’s no intelligence whatsoever, and yet it seemed, in a sense, like there was.”
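In that spirit, here is roughly the kind of thing a first-year student could write: a few hand-made rules (invented for this sketch, not Weizenbaum’s original script) that match a keyword and bounce part of the sentence back as a question.

```python
import random

# A minimal Eliza-style sketch (a handful of made-up rules, nothing like
# Weizenbaum's full script): match a keyword, echo part of the input back.
RULES = [
    ("i feel", "Why do you feel {rest}?"),
    ("i am", "How long have you been {rest}?"),
    ("because", "Is that the real reason?"),
]
FALLBACKS = ["Tell me more.", "Why do you say that?", "How does that make you feel?"]

def reply(text: str) -> str:
    lowered = text.lower().strip(".!?")
    for keyword, template in RULES:
        if keyword in lowered:
            rest = lowered.split(keyword, 1)[1].strip()
            return template.format(rest=rest)
    return random.choice(FALLBACKS)

print(reply("Hello, I am feeling depressed"))  # -> How long have you been feeling depressed?
print(reply("I don't know why"))               # -> one of the generic fallbacks
```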

Because there is no testable hypothesis for what intelligence actually is, AI is even called a pseudo-science by some.

“To build something when we don’t really know what it is, is hard,” says Fedoruk. “For a long time, what would happen in AI, is people would come up with these programs and say, ‘Look, it’s doing something intelligent,’ like planning. And then someone would look under the hood and say, ‘No, that’s not intelligence, that’s just a bit of code.’ But what, really, is the difference?”

The sort of human intellect that involves perception and consciousness has not been programmed yet, says Fedoruk, and some even argue that it never can be. Fedoruk, however, thinks it is possible.

[Illustration: a car being driven by a brain]


The autonomous car

One of the most prevalent discussions involving AI these days centres around the introduction of the autonomous car to the market, with self-driving cars being tested on public roads since 2013. Already, technology called telematics, or “black boxes,” is being used by fleet owners and companies such as car2go to record how people are driving. The devices then transmit that information to insurance companies or vehicle owners. Advanced vehicle safety technology ranges from back-up cameras and anti-lock brakes to accident prediction and avoidance.

Autonomous vehicles rely on machine learning, and in particular on “deep learning,” which has seen huge advances recently, says Fedoruk.

An example is Siri for the iPhone. “When you first start with Siri, you have to talk to it for a little while before it starts to understand your voice and learn your accent. The machine uses the data it gathers to train itself. It needs positive and negative examples so that it can figure out what the answers are. It’s actually similar to student learning. A student performs a task, and you need to tell them whether they did it right or wrong,” says Hepler.
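The “positive and negative examples” Hepler describes are the heart of supervised learning. Here is a toy version (invented numbers, not how Siri is actually trained): a single simulated neuron nudges its weights a little every time it is told its answer was right or wrong.

```python
# Toy supervised learning in the spirit Hepler describes (made-up data, not
# Siri's training): adjust weights slightly whenever the guess was wrong.

# Each example: (features, label). The two features might stand in for acoustic
# measurements; the label says whether the sound was the wake word (1) or not (0).
examples = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.2, 0.1), 0), ((0.1, 0.3), 0)]

weights, bias, rate = [0.0, 0.0], 0.0, 0.1

for epoch in range(20):
    for (x1, x2), label in examples:
        guess = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = label - guess            # the "right or wrong" feedback
        weights[0] += rate * error * x1  # nudge the weights toward the answer
        weights[1] += rate * error * x2
        bias += rate * error

print(weights, bias)  # after training, positive examples score above zero
```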

When driving, people know to stop when they see a red light and go when the light is green. But there are hundreds of other things going on at the same time. Drivers also learn to discern between humans and objects, to anticipate unexpected pedestrian behaviour and to operate their vehicles in inclement weather. By using deep learning, autonomous cars have been able to reach this level of cognition.

“Instead of a bunch of programmers sitting down and trying to come up with endless lines of code to say, ‘if that, then this,’ they take the car out and let it learn. Which is the way we (humans) do it. So what you have is a car that essentially has millions of miles and millions of hours of experience before you turn it loose. Unlike your typical 18-year-old,” says Fedoruk.

“They (autonomous cars) can start kind of teaching themselves how to operate, essentially building their own code.”

Most of it is done within a neural net. The human brain has billions of neurons that are interconnected in certain ways, and out of those connections come memories and abilities. A neural net works the same way, but with simulated neurons.

“It’s actually an old technique,” says Fedoruk. “Neural nets are from the ‘60s. It’s just taken until now to learn how to use them properly and to have the necessary hardware.”
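To see what “simulated neurons” look like, here is a minimal hand-built network. The weights are chosen by hand for illustration rather than learned from data: each neuron adds up its weighted inputs, squashes the result, and passes it on to the next layer.

```python
import math

# A minimal picture of "simulated neurons" (weights picked by hand, not learned):
# each neuron sums its weighted inputs and squashes the total into a 0..1 "firing"
# strength, and neurons feed into other neurons.

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # sigmoid squashing function

# Two inputs feed two hidden neurons, which feed one output neuron.
def tiny_network(x1, x2):
    h1 = neuron([x1, x2], [ 6.0,  6.0], -3.0)
    h2 = neuron([x1, x2], [-6.0, -6.0],  9.0)
    return neuron([h1, h2], [8.0, 8.0], -12.0)

# With these hand-picked weights the network behaves like XOR:
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(tiny_network(a, b), 2))
```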

Where autonomous vehicles get tricky, though, is when they are presented with a philosophical problem: for example, choosing between driving straight and hitting a group of children, or avoiding the kids and going off a cliff, potentially killing all of the passengers.

So what philosophical framework should an automobile manufacturer program into an autonomous car — utilitarianism (Jeremy Bentham or John Stuart Mill) or the social contract (Thomas Hobbes, John Locke, Jean-Jacques Rousseau)? With utilitarianism, actions should benefit the majority, while under social contract theory, value is determined by moral duty to the greater society.
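What “programming a framework” might mean in practice can be sketched only crudely. The numbers and rules below are entirely hypothetical, and the duty-based rule is a rough stand-in for the social contract idea, not a serious reading of it: a utilitarian controller minimizes total expected harm, while a duty-based one follows a fixed rule regardless of the totals.

```python
# A deliberately crude sketch of what "programming a framework" could mean
# (hypothetical numbers and rules, not any manufacturer's actual logic).

options = {
    "continue_straight": {"pedestrians_harmed": 5, "passengers_harmed": 0},
    "swerve_off_road":   {"pedestrians_harmed": 0, "passengers_harmed": 4},
}

def utilitarian_choice(options):
    # Pick whichever action harms the fewest people in total.
    return min(options, key=lambda o: sum(options[o].values()))

def duty_based_choice(options):
    # Follow a fixed duty (here: never deliberately leave the roadway),
    # regardless of the outcome totals.
    return "continue_straight"

print(utilitarian_choice(options))  # swerve_off_road (4 harmed < 5 harmed)
print(duty_based_choice(options))   # continue_straight (duty over totals)
```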

“What happens afterwards when we sort it all out? Who is to blame?” asks Fedoruk.

And that raises another question: what happens when technology develops faster than laws can keep up?

Fedoruk says, “It happens all the time, so we need to start thinking about the ethics behind what is being built. We can’t stop these kinds of artifacts being made, so we need to think about if there’s a potential there for misuse. And frankly, there always is.”


Ethical coding

All humans have their own perspective, so anything we create comes with a certain amount of bias … algorithms included. The closely guarded Facebook feed algorithm (which many have tried — and failed — to deconstruct) is based on what you like, what your friends like, how popular your posts are and numerous other factors. Users assume it’s a “pure” algorithm that simply reflects actions and behaviour, but it’s entirely possible that it is subjective and presents skewed results.
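Nobody outside the company knows what the real feed algorithm does, but a hypothetical ranking function shows how easily bias can hide inside one. All features and weights here are invented: the score looks like a neutral formula, yet a single extra term quietly changes what rises to the top.

```python
# Hypothetical feed-ranking sketch (invented features and weights; the real
# Facebook algorithm is not public). The point: the ranking looks like a
# neutral formula, but one hidden term can quietly bias what surfaces.

posts = [
    {"id": "A", "your_likes": 3, "friend_likes": 40, "popularity": 200, "paid_boost": 0},
    {"id": "B", "your_likes": 9, "friend_likes": 12, "popularity": 80,  "paid_boost": 0},
    {"id": "C", "your_likes": 1, "friend_likes": 5,  "popularity": 30,  "paid_boost": 500},
]

def score(post, hidden_bias=0.0):
    return (2.0 * post["your_likes"]
            + 1.0 * post["friend_likes"]
            + 0.1 * post["popularity"]
            + hidden_bias * post["paid_boost"])   # the part users never see

for bias in (0.0, 0.2):
    ranked = sorted(posts, key=lambda p: score(p, bias), reverse=True)
    print(f"hidden_bias={bias}: " + " > ".join(p["id"] for p in ranked))
```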

“Who knows if Facebook is doing that? We don’t know, but maybe they are,” says Fedoruk.

One way to guard against this possibility would be to legislate ethical code, audited by an outside body, with severe penalties for companies caught subverting the system.

“There’s probably going to be a lot more rigour needed in the future,” says Fedoruk.


Big Data

Most of the ethical problems discussed in relation to computers centre on privacy, since information technology (IT) employees have control over huge amounts of data.

“So you need to make sure only the right people access it,” says Fedoruk.

While algorithms run programs, they are also collecting reams of facts, figures and statistics: everything from social data to scientific data to biological data.

“We’ve always produced information about ourselves as we move through our lives,” says Fedoruk, “but now it’s like somebody coming behind you and vacuuming that all up, putting it in a database, analyzing it and connecting it to other people.”

Just posting to Facebook every day leaves a long trail detailing who you are and what you’ve been up to, now neatly encapsulated in your Facebook Year in Review and People You May Know features. These are products of big data analysis, and experts in the field are increasingly in demand, with computer science now involving incredibly dense mathematical problems.

Hepler says, “With big data, you need to make sure you’re not coming up with spurious statistical connections. If you look for correlations between 20,000 different things, you’re bound to find some. Is there some mechanism, some common cause, or were the correlations just luck?

“We need computer science people, who, in a way, are really data scientists,” Hepler says, adding that strong oversight is necessary to make sure information gleaned isn’t farmed out or sold inappropriately.
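Hepler’s warning is easy to demonstrate: generate variables that are pure random noise, and with enough of them some pairs will look strongly correlated by chance alone. A small sketch (200 made-up variables instead of 20,000):

```python
import random

# Hepler's point in miniature: among enough purely random variables, some pairs
# will appear correlated just by luck (here 200 variables instead of 20,000).

random.seed(1)
N_VARS, N_OBS = 200, 30
data = [[random.gauss(0, 1) for _ in range(N_OBS)] for _ in range(N_VARS)]

def correlation(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

strong = sum(1 for i in range(N_VARS) for j in range(i + 1, N_VARS)
             if abs(correlation(data[i], data[j])) > 0.5)
print(f"Pairs with |r| > 0.5 among pure noise: {strong}")
```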


The future of the programmer

Those heading into the IT fields not only need to have expertise in coding, mathematics, algorithms and ethics, but they also need to be talented communicators, ensuring programs are generating the results clients are looking for. Fedoruk, Hepler, Kidney and DePaul all see the benefit of further integrating computer science with other areas of academia, such as business, science and the arts in an interdisciplinary manner.

“Computer scientists need to be able to talk, develop, listen, plan, implement and analyze results,” says Hepler.

According to Fedoruk, it’s a mistake to think that computers will “out-think” us, because humans are the ones who program their thinking. If computers do seem to be acting in a way that wasn’t anticipated, it’s because they are perceiving things in ways we did not predict or understand. The opportunity, then, lies in studying the unexpected response and how it might be explained. Humans must be true custodians of the technology they create.

And while it’s true that jobs such as truck driving may soon be lost (or dramatically altered) and that people may also be able to ask a machine for the answer to any question, that doesn’t mean that there aren’t entirely new prospects on the horizon.

“We are shifting from being just a knowledge-based society,” says DePaul. “What makes people valuable now is not the knowledge they possess, because knowledge is now pretty much free and instantly accessible from our smartphone in our pocket.

“What will make people valuable in the future is how you and I can synthesize knowledge and put disparate pieces together to come up with something new.”
