Gordon Campbell on why we should be anxious about AI
First published on Werewolf
Poor old Phil Twyford. Caned by everyone, including his own team – the Prime Minister, the PSA – for saying something so very obvious. Treasury has, and always has had, some people in it with precious little of the life experience relevant to understanding how their pet policy prescriptions might impact on people less fortunate than themselves. All up, Twyford’s “I just think some of these kids in Treasury are fresh out of university and they're completely disconnected from reality” seems an unexceptional observation. Gerry Brownlee and Michael Cullen, for instance, both said far worse things about Treasury.
Basically, when Treasury sometimes gives every sign of being isolated from reality, insulating it from criticism doesn’t seem to be such a great idea. History surely calls for the same healthy skepticism about Treasury pronouncements as Treasury itself displays towards the departments it monitors. Moreover… on most days, our news bulletins contain examples of the social fallout – domestic violence, mental health problems – still ravaging some communities after the loss of so many secure, full time, relatively well paid jobs in the wake of the economic reforms of the 1980s and 1990s.
Those jobs – which used to give structure and meaning to life, as well as a good income – have been replaced by (a) a plethora of “flexible”, part-time, and poorly paid service jobs and (b) a relative handful of highly paid technical and specialist occupations, access to which is heavily determined by one’s existing level of socio-economic privilege. Sunshiny tabulations of the jobs lost/jobs gained regularly fail to examine the poor quality of most of the new jobs, much less consider who gets the (relatively few) plum jobs being created.
This dismal (and ongoing) legacy came to mind when reading this disturbingly chirpy recent report into the likely impact of Artificial Intelligence on New Zealand.
A condescending burble about the public’s fear of change, and the negative role being played by pop culture and the media in creating social “anxiety” about AI, was covered (in unconsciously self-satirizing fashion) within this RNZ report which asked exactly the wrong question: is the media to blame for public ignorance and anxiety? (Oh, the benighted public and the media. Someone interviewed even mentioned the Terminator, ho ho.)
Well, spare the mea culpas. Frankly, the prospect of possibly losing half the existing forms of paid employment to AI does make me feel extremely anxious, given the indifference shown by central government to the downstream social damage caused by the reform process last time around. This report and this report are totally scary. So is this recent NYT article about AI’s potential to make significant inroads into job categories (radiologists, surgeons, lawyers, journalists, airline pilots, songwriters etc) that we would normally think of as being bastions of human competence. Furthermore, those countries that are best managing the AI transition (eg Sweden) seem to be doing so via the very social mechanisms (eg strong unions, generous welfare support systems) that our last wave of reform disparaged and weakened, while being cheered on by the bright young folk at Treasury.
Throughout, the New Zealand AI report linked to above reflects an employers’ world view of potential efficiency gains and a rosy expectation that the main labour force impacts will be phased in over 40 years (!), and peddles the usual cant that eventually the jobs lost will be replaced by jobs gained – with little consideration of the damage done mid flight, the quality of the jobs concerned, or the fact that (if Silicon Valley is anything to go by) the higher echelons of the new AI meritocracy will be dominated by the next generation of wealthy white males.
Currently, AI automation processes are being applied to three main categories of job-related behaviours. These are (a) the repetitive tasks of inputting and outputting associated with back-office administration. The rule being: if you can outsource a task, you can probably automate it. Then there is (b) the algorithm-driven analysis of big data packs. Machine learning is playing an increasing role in analyzing, predicting, identifying and targeting consumers and probable patterns of behaviour at speeds and with accuracy comparable or superior to the human brain. Finally, there are (c) the cognitive engagement tasks, whereby chatbots and other intelligent agents provide such services as banking information or retail product recommendations or health treatment options, on a 24/7 basis. That just sketches the territory.
As much of the commentary has already mentioned, the likely scale of job displacement over the next few decades will almost certainly require governments to provide some form of universal basic income. Leaving aside the psychosocial impact of losing one’s job as a source of identity, life structure and income, the history of welfare reform in this country – from Ruth Richardson’s slashing of benefits to Paula Bennett’s fostering of the punitive culture at WINZ – shows just how vulnerable a UBI could be to social stigma, and to ideologically driven attacks by politicians on its entitlements.
On that point, the most systematic use of AI in this country to date – the “social investment” approach to welfare targeting – gives further cause for alarm. In championing this process, Bill English and Paula Bennett seemed blissfully unconcerned about the potential for stigmatising the beneficiaries (many of them poor and Maori) identified by the algorithms, thereby trapping them in a downward spiral.
Meaning: since contact with WINZ is regarded as a signifier of future welfare dependency, this will trigger further WINZ contact that is not only punitive, but self perpetuating. As in… we will manage the lives of the people we manage because the machines tell us these are the people we should manage because we have before, and what tells us this is the right thing to do is that we’re managing them a lot.
So far, the utterances of Social Development Minister Carmel Sepuloni suggest that the coalition government plans on continuing with the ‘social investment’ form of welfare targeting. In that respect, the reassurances made to date – that the data will not be personalized – are something of a red herring. If Sepuloni intends to persist with the social investment approach, she needs to be doing a whole lot more to make its human targets safe from the stigmatizing and circular reasoning to which the approach seems prone. We need to know, for instance, that the judgment of human social workers will not be eroded by the negative predictions issuing from the machines about the likely future behaviours of families on benefits.
So… should we be anxious about AI? Of course we should. Being anxious is the necessary first step to preventing AI from being used as a destructive tool by politicians, and by employers. This is simply too powerful a technology to be entrusted to an ideology of small government, and the profit motive.
And here’s Connie Francis with a peachy little number celebrating a few alleged upsides of robotics…