Machine Learning – Postscript: I’m Scared of Machines

The last three years of my life have been very much dominated by my attending university. I’ve been studying for a BSc (Hons) in Computer Science at Kingston University London. Of course, this culminated in a dissertation and final year project. I want to dive into that a little bit, and maybe expand on my thoughts now we’re through the other side of that seemingly infinite tunnel.

My dissertation was on automated sentiment tracking and analysis of Tweets. A lot of things came up during this that are actually pretty interesting, although not necessarily within the scope of my final report. That being said, that didn’t stop me going off on one…
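For a rough sense of what “sentiment analysis” even means at its simplest: score words against positive and negative lists and sum them up. This is a minimal illustrative sketch, not the actual approach or lexicon from the dissertation:

```python
# Toy lexicon-based sentiment scoring. The word lists are illustrative
# placeholders, not a real sentiment lexicon.
POSITIVE = {"great", "love", "good", "happy", "excellent"}
NEGATIVE = {"bad", "hate", "awful", "sad", "terrible"}

def sentiment(tweet: str) -> int:
    """Score a tweet: +1 for each positive word, -1 for each negative word."""
    score = 0
    for word in tweet.lower().split():
        word = word.strip(".,!?")  # drop trailing punctuation
        if word in POSITIVE:
            score += 1
        elif word in NEGATIVE:
            score -= 1
    return score

print(sentiment("love this great product"))  # 2
print(sentiment("awful service, very sad"))  # -2
```

Real systems are far more sophisticated than this, of course, but even this toy version hints at where bias creeps in: whoever writes the word lists decides what counts as “positive”.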

I think the most important part of AI these days, and yes that is a ludicrously overused buzzword, is ethics.

Ethics has always been quite a soft, gooey word to me, and to a lot of people. Notably in STEM fields, graduates and even professionals tend to seem a little lacking in soft skills. If you need proof of this, a cursory look at my inbox will back this point up pretty solidly.

I made a point throughout the dissertation to bring the discussion back around from the clinical and financial view of business use cases to a slightly more anthropological perspective. Not least because my end product is, in essence, a marketing tool; I don’t feel great about that.

Nonetheless, it’s important, now more than ever, to understand the computer’s place in society. Without a shadow of a doubt, computers are here to stay; offloading all your work to an AI sounds great, but the disruption it causes will always follow. At risk of being quite pessimistic, humans ruin everything. Social media has seen the brunt of this in recent years: from scams and fake celebrity slimming teas, to Nazis and witch hunts, we are seeing an uptick in what’s bad in the world.

But there are other parts of the world where computers aren’t quite to blame. I reference a case in the USA where judges see AI-derived risk scores and overrule pre-agreed terms of sentence. The computer made its judgement; the algorithm never did anything wrong, it did what it was programmed to do. The flaw in the system here is the human habit of misplacing trust in authority; it’s debated whether the judge should even have been allowed to see the score.

We see a separate case where too much weight was put on machines. Again in the US, a new disability benefits calculation system was brought in. Very quickly, those who depended on it found their funds changing: some payments rose and some fell. Those who lost money were put in very difficult situations, and no human was able to easily find a reason for this drastic change in payout.

Okay, so it’s all pretty depressing. But it falls on us, the humans, to understand the machine’s place in the world. Systems must be fully understood before they are implemented, and their outputs should be scrutinised. Sure, maybe the flaw is in the algorithm, or maybe it’s a flaw in the correlations we try to draw. Irrespective, there are complicated and difficult decisions that must be made. Perhaps there is no ‘right’ answer; perhaps a compromise is needed. Machines struggle with this concept of ‘maybe’: they are quite literally binary thinkers. We cannot take for granted the power that we have as humans to be rational and considered in our judgements; machines do not have this luxury.

Machines can do so much, but there’s still a lot they cannot do, at least not yet… We must be wary of where machines sit in our society, what they do, and what they make. When we begin to unravel the fundamentals of human consciousness to embed within machines, we should remember that we have something special within us all. Human nature is a complex and messy set of ideals, and no one algorithm can hold them all. Empathy, apathy, discontent. Emotions are the basis of our civilisations; they hold together worlds. We must be careful with the power we share.

I’m scared of machines. I don’t think an uprising will come any time soon; sentient AI is a long way from now and, frankly, impractical. But humans implicitly trust the magic black boxes algorithms occupy, regardless of whether it’s just a fancy spreadsheet or not. I think the most worrying part of machine learning is that to those who understand it, it’s just some maths, and to those who don’t, it’s science fiction.

As inventors, developers, engineers, and scientists, we need to understand the impact our creations can, will, and do have on the Anthropocene. Throughout history, civilisation has adopted technology without fully understanding the ramifications of its use. CFCs, motor fuel, large-scale weaponry: all were adopted rapidly, and by the time we understood them it was too late. We can prevent the mistakes of the past from coming up time and time again.

I began and ended my dissertation with a couple of quotes that, I believe, carried the sentiment I held and now hold to machines.

I propose to consider the question, “Can machines think?” – Alan Turing

In the age of the algorithm, humans have never been more important – Dr Hannah Fry

I’ll let you decide the order.
