
The Conscience of Algorithms

How does AI impact people?

There has been much written over the years about how Artificial Intelligence (AI) and Machine Learning (ML) will impact our lives, and the commentary is often polarised. On one hand, computers and intelligent robotics can perform dangerous jobs and tasks (saving lives), act as virtual assistants, and provide automated transportation, to name just a few applications. On the other, automation will result in heavy losses of “old” economy jobs, with much of the skilled workforce becoming obsolete and needing to retrain, or AI may even prove to be mankind’s ultimate downfall. The reality is that automation and intelligent machines are already here. While the world approaches the decision-making processes of driverless vehicles with caution, other algorithms are already out there, shaping our experiences as individuals and as a society. It is time to ask what their impact is. Should there be regulation of the algorithm?

The concepts of AI and ML have evolved from science fiction and fantasy to real-time decision-making systems that are driving the car in the next lane. These technologies have extended beyond the targeted selling of basic goods and services to consumers, and now have a far greater and more pervasive influence on our lives than many realise. The specific algorithms used in social media platforms such as Facebook are closely guarded trade secrets. As more is revealed of the extent to which companies such as Cambridge Analytica have mined Facebook profiles and potentially influenced politics in the US, we are being served a warning about what can be mined and learned from this type of data.

A key feature of social media platforms is their algorithms, which “learn” from data and from patterns of reading, posting, watching, buying and other activities. The “feed” or chain of information, stories and advertising presented through these platforms is selectively tuned to each user, and can easily influence a user through the selective placement or masking of current news items, products and so on. This selective exposure to content drives the formation of homogeneous clusters of users, or what are known as “echo chambers”, where worldviews are reinforced irrespective of their correctness or validity [e.g. http://www.pnas.org/content/114/12/3035]. We are in the era of AI-driven social media influence, with almost no oversight or regulation of the rules or the level of influence that occurs.
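To make that feedback loop concrete, the following is a minimal sketch of an engagement-based feed ranker. It is purely illustrative: the topic labels, scoring rule and data structures are invented for this example and do not represent any platform’s actual algorithm.

```python
# Illustrative sketch only: a toy engagement-based feed ranker,
# not any platform's actual algorithm. All data here is invented.
from collections import Counter

def rank_feed(candidate_items, engagement_history):
    """Order candidate items by the user's affinity for their topics,
    based on past engagement (reads, likes, shares, purchases)."""
    # Count how often the user has engaged with each topic.
    topic_affinity = Counter(item["topic"] for item in engagement_history)

    # Score each candidate by affinity for its topic; topics the user
    # has never engaged with score zero and sink to the bottom.
    def score(item):
        return topic_affinity.get(item["topic"], 0)

    return sorted(candidate_items, key=score, reverse=True)

history = [{"topic": "politics_a"}, {"topic": "politics_a"}, {"topic": "sport"}]
candidates = [{"id": 1, "topic": "politics_a"},
              {"id": 2, "topic": "politics_b"},
              {"id": 3, "topic": "sport"}]

print(rank_feed(candidates, history))
# Items matching past engagement float to the top, and each session's
# choices feed the next session's ranking: the echo chamber loop.
```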

Because these AI/ML approaches remember history and past experiences, a very interesting and relevant question arises: “How do you rid yourself of this past data, especially if you can’t control or input directly to the AI/ML algorithms?” Past poor choices or opinions, or past circumstances such as bankruptcy or episodes in the justice system, may significantly influence the feed of information a user receives. This has the potential to further stigmatise, stereotype or profile groups of individuals on the basis of past (or current) disadvantage, misfortune or misadventure. Like a bad credit rating, your AI data could follow you around for years to come.

One key example of an “unfair” algorithm backfiring is Centrelink’s Robodebt debacle. While it may have seemed logical to automate the process, and perhaps fair to allow welfare recipients the chance to disprove the debt claims, the human impact was ill-considered – vulnerable alleged debtors may not have the resilience or resources to challenge Centrelink’s authority. The problem was compounded by the technical inaccuracies of the algorithm and data-matching tools used, which seemingly erred on the side of identifying a debt where there wasn’t one – the classic Type I error in statistics, illustrated in the sketch below. One must also ask how much effort was put into identifying and rectifying historical underpayments.
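As a simplified illustration of the averaging flaw (the figures and thresholds here are invented, and this is not Centrelink’s actual system), smearing a year’s income evenly across fortnights makes a seasonal worker appear over-paid in exactly the fortnights they genuinely earned nothing:

```python
# Simplified, hypothetical illustration of income averaging: annual
# income is smeared evenly across 26 fortnights, so seasonal workers
# look over-paid in their quiet periods.

ANNUAL_INCOME = 26_000            # total income reported for the year
FORTNIGHTS = 26

# Reality: everything was earned in 13 fortnights of seasonal work,
# and zero income was correctly declared for the other 13.
actual_fortnightly = [2_000] * 13 + [0] * 13

# The averaging assumption: income spread evenly over the whole year.
averaged_fortnightly = ANNUAL_INCOME / FORTNIGHTS   # 1,000 per fortnight

# In the genuinely zero-income fortnights, the averaged figure wrongly
# implies undeclared earnings - a Type I error: a "debt" is flagged
# where none exists.
for fortnight, earned in enumerate(actual_fortnightly, start=1):
    if averaged_fortnightly > earned:
        print(f"Fortnight {fortnight}: flagged, averaged "
              f"{averaged_fortnightly:.0f} vs declared {earned}")
```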

Data Analysis Australia’s concern with AI/ML algorithms isn’t about malevolence and Skynet-type scenarios, but about competence and equality. With no clear oversight or regulation of the algorithms, there are real concerns about how these algorithms (and the rules that fine-tune them) are influencing our experiences.

A lofty goal of AI/ML safety has always been never to place people in a position of threat or danger. With no transparency or regulation of algorithms, one wonders how safe a position we are all in.

At Data Analysis Australia we pride ourselves on our deep understanding of algorithms and methods, and on ensuring that we account for the biases and limitations of data and approaches, delivering high quality, rigorous and relevant findings to our clients.

Contact Us to discuss your machine learning and big data challenges.

Further Reading:

https://www.theguardian.com/technology/2017/jul/16/how-can-we-stop-algorithms-telling-lies

https://www.theguardian.com/technology/2017/jan/27/ai-artificial-intelligence-watchdog-needed-to-prevent-discriminatory-automated-decisions

https://www.theguardian.com/business/2016/dec/18/labour-calls-for-regulation-of-algorithms-used-by-tech-firms

https://www.theguardian.com/commentisfree/2018/apr/04/algorithms-powerful-europe-response-social-media

October 2018