2026-05-05
In September 2025 a $475 million settlement of the Robodebt Class Action provided compensation to victims of the unlawful AI-based government scheme. Such problems with AI should always have been understood; identifying other dangers, however, requires special expertise.
By Andrew Turk, Fremantle-Tangney Greens, Green Issue Co-editor
Introduction
Artificial Intelligence (AI) is promoted as a boon to society: doing all sorts of work for us, leaving much more time for leisure, and creating wealth for all. This is a deception peddled by those who stand to benefit most from AI and are least likely to suffer from the problems it generates. Remember Robodebt, the huge scheme to generate government revenue by telling citizens that they owed money, based on AI-enabled systems which identified individuals who had possibly defrauded the Government. Hundreds of thousands of welfare recipients were wrongly accused of owing money to Centrelink. Very sadly, it led to multiple suicides and to deaths resulting from the unnecessary stress produced by this disgraceful scheme. Prime Minister Albanese is quoted as saying it was a “gross betrayal and a human tragedy”. Was anyone held accountable for that suffering and those deaths, and punished in some way?
Perhaps the lack of visible punishment over the unlawful Robodebt scheme helps explain why another such nasty bureaucratic AI system was revealed in late March 2026. This one is the Integrated Assessment Tool (IAT), used by My Aged Care to determine whether elderly individuals will receive funding from the Federal Government to purchase the support services that allow them to keep living in their own homes. This support means that they do not need to occupy scarce beds in aged care establishments or hospitals. Apparently, each applicant is given a score on a wide range of characteristics and the AI engine decides whether they deserve funding. During the lengthy development of this cost-cutting proposal, the human experts were told that they could override the AI decision using a particular button. The button is still there on the user interface, but the experts have been forbidden to use it. Will managers never learn that AI is always fallible?
As Sir Walter Scott wrote in his 1808 epic poem Marmion: A Tale of Flodden Field: "Oh what a tangled web we weave / When first we practise to deceive". Dishonesty creates a complex and often overwhelming series of interlocking problems. Rather than just wringing our hands, and perhaps weeping, we need to try to untangle the reasons for such tragedies. Hopefully, this article can enhance conversations about the dangerous 'web of beliefs' that goes by the oxymoronic name of Artificial Intelligence (AI).
In our societies, many useful activities have inbuilt potential dangers. The potential difficulties of using AI in the way it was applied within the Robodebt Scheme should have been obvious. Similarly, use of AI to facilitate creating pornographic images of children was always going to be unethical. These instances have alerted people to some of the dangers of AI, which are in fact far broader. The Australian Greens' concerns about AI dangers include:
- AI-driven misinformation and its threat to democracy;
- the potential for biased algorithms;
- risks to privacy and data security;
- potential job losses, particularly in creative industries; and
- the environmental impact of AI's high energy consumption.
Thus, the Greens advocate stricter regulation, including an ‘Australian AI Act’ with mandatory guardrails, to address these issues. On the 6th August 2025, Greens Senator and Digital Rights Spokesperson David Shoebridge criticised the industry-dominated Productivity Commission report on AI policy: "It’s a real missed opportunity for a balanced and nuanced analysis of the threats and opportunities posed by new technology. … The commission is using overly optimistic financial projections to dodge proper AI rules and kill off basic digital protections. … The extraordinary power of international tech companies is real, and that’s an even more important reason to not let them dictate law in this country" [Google - Greens policy statement on AI]. This is a great start; however, as the misuse of AI becomes ever more apparent, additional Greens policies should be developed.
This topic is also of particular interest to at least two Independents in the Australian Parliament. In 2025 Kate Chaney (Independent MP) proposed to the Australian House of Representatives an amendment to the Criminal Code to ban using technology to generate child abuse materials. To her credit, she did not just address this high-profile danger of AI, but has followed up by preparing a position paper advocating the case for an ‘Australian AI Safety Institute’, such as exists in many other countries. Independent Senator for the ACT David Pocock has also been active on the topic of AI-enabled ‘deepfakes’. On 24th November 2025, an ABC online news item by Ange Lavoipierre quoted Pocock as saying: "Currently, unless a deepfake is sexually explicit, there’s very little that you can do as an Australian". Pocock has proposed a bill to address this need, including a dedicated complaints framework within the Online Safety Act. Pocock also suggests changes to the Privacy Act to allow people who suffer from a deepfake to bring a civil lawsuit and sue perpetrators for financial compensation. The report noted that: "Ms Chaney and Senator Pocock said there was now an urgent need for far broader protections beyond the issue of sexualised deepfakes". This supports the Greens’ call for an ‘Australian AI Act’.
I agree with Shoebridge, Chaney and Pocock that further legislation is necessary. It is important not just to consider the obvious safety failings of this technology, but also to look at wider potential problems and ways to address them. In this context we should concentrate not only on the more spectacular problems that create extreme dangers for a relatively small number of people, but also on AI-related consequences that may produce more minor (but still significant) difficulties for a much larger number of people in many places, living under different systems of government.
In an article on the ABC website [Endnote 1], Luke Cooper discusses recently released data concerning Australians’ fears about AI. Cooper notes that a report just released by the Australian Institute of Criminology revealed that “most Australians held concerns about platforms that used AI to track their location, access their devices or accounts, or impersonate or deceive them. AI-generated deepfake content was also a strong concern for more than three in 10 people.” This fear is not because of a lack of knowledge: “More than 80 per cent said they held a ‘moderate’ to ‘very high’ level of knowledge and ability to use digital technologies, the report said. … Based on the responses of more than 16,000 Australians compiled for the 2024 Australian Cybercrime Survey and an additional AI-related questionnaire, the report ‘provides the clearest national picture of Australians' fear of AI misuse’ the AIC said.”
Appendix 1 provides an example of how AI makes mistakes by integrating information and creating combinations which are untrue. This is not some manufactured example, but one resulting from chance; a gift to me from the gods, who are obviously distressed at the audacity of AI in thinking that ‘it is God’. It concerns the helicopter crash that I survived in Antarctica in January 1974 (see Green Issue, June 2025). We would be surprised and disappointed if a ten-year-old child made such basic and obvious blunders as those occurring in this AI-generated report. They would never be made by a person trained in summarising documents. Readers should be very careful before authorising an AI-generated summary of any information.
This problem is becoming much more dangerous as platforms such as Google prioritise the display of AI summaries in response to queries. Users are then much less likely to bother checking the summary by visiting more specific and reliable websites, which are relegated to lower-priority listings by payments from other website owners (capitalism overcoming community service). If there are any errors in an AI summary provided in such an easy fashion (such as in the Appendix 1 example), then a user is likely to duplicate that error in their own documents, spreading the false information ever wider. This problem is made much worse by deliberate misinformation being distributed on the web in order to lead users to specific opinions. When this includes AI-generated images and audio, the danger is multiplied. Rather than giving in to feelings of hopelessness, we need to find ways to defeat this AI-induced crisis.
In order to take the next step in understanding and controlling problems caused by AI, in line with the Greens’ policy of establishing a broader ‘Australian AI Act’, it is necessary to thoroughly investigate the basis of the current societal policies that have led to the dangerous rise of AI. The analysis can start with philosophy. This is especially important because ‘Artificial Intelligence’ is an oxymoron: real intelligence cannot be synthetic and is only present in people, who are capable of thinking beyond correlations to understand causation. Some aspects of cognition are suitable for synthetic replication; however, since human cognition is distributed throughout the whole body rather than reserved for the brain, and is intricately linked to affect (emotion), even calling AI ‘Synthetic Cognition’ would be inappropriate. The drive to integrate AI into more and more aspects of society results from basic philosophical errors.
Phenomenology is probably the most effective philosophy for understanding intersubjectivity and Human-Computer Interaction (HCI). Key alternative philosophical positions include Realism, Idealism, Hermeneutics, Naturalism, Positivism, Universalism, Empiricism and Aristotelian logic. For instance: while Phenomenology focuses on the structure of consciousness and how things appear to the subject, Realism holds that reality exists independently of our perceptions of it; Positivism asserts that only knowledge obtained through scientific methods and sensory experience is valid, rejecting intuition, metaphysics and religious beliefs; Universalism is the belief that certain ideas have universal application, suggesting that some values or theories are the same for all groups, independent of social identity. Some or all of these non-phenomenological philosophies can be seen as poorly aligned with a thorough understanding of intersubjectivity and social justice. They can be considered as supporting the rise of managerialism, the commercialisation of government activities, extreme capitalism, weakened ethics (which encourage racism and greed) and Software Engineering (SE).
Software Engineering (SE) stands in opposition to the Information Systems (IS) discipline, which it replaced in many Australian universities in the early 2000s. IS considers five key elements of an information system: hardware, software, data, procedures and people. SE (and AI development within it) assumes that information systems can simply be ‘engineered’, ignoring the crucial role of understanding user needs, optimising procedures and defining the appropriate role(s) for particular types of people in any specific information system. IS developers should analyse the needs of all user groups and types. SE and AI practitioners rarely (if ever) fully investigate these design considerations via exhaustive tests of a draft user interface with each type of potential user. This has led to a major reduction in the quality of user interfaces, and of HCI in general, since SE became dominant.
I strongly support the Greens’ policy regarding establishment of a broader and more powerful ‘Australian AI Act’. However, a deep investigation is required to achieve a comprehensive summary of the current and potential good things about AI (uses and their value) and the bad things (threats to small or large numbers of people). For instance, replacing a human being who answers clients’ questions with an AI-enhanced online automated system is likely to be less efficient, effective and equitable, especially for older clients and those with a disability. It is desired by managers because it reduces direct costs by lowering the number of staff employed, but it is ethically unsound, not least because it transfers work to the client and is less likely to address their needs. This has occurred during a decline in professionalism. Very sadly, over the last fifty years, professional associations (e.g. the Institution of Engineers) have become far less effective at controlling the fees charged by their members and at addressing ethical issues in general.
Key US players in the digital industry are philosophically opposed to government regulation of their industry. This has led major companies to seek to demonstrate that internal self-regulation is all that is needed: “The regulation of Artificial Intelligence (AI) continues to be a difficult but popular topic, especially with the formal adoption of the European Union (EU) AI Act and new guidance in the United States (U.S.) following the White House’s Executive Order last October. As governments around the world try to navigate AI innovation and oversight, we are also starting to see industry-led consortiums and the self-regulation of AI take shape.” [Endnote 2]. This approach by AI development companies and industry consortiums is claimed to produce a more rapid and flexible alternative to, or supplement for, government regulation.
To aid in the development of such ‘AI governance’ models, the Australian Institute of Company Directors (AICD) has partnered with the Human Technology Institute (HTI) at the University of Technology Sydney to help organisations harness AI responsibly [Endnote 3]. They believe that implementing demonstrated controls throughout the entire AI lifecycle will increase public confidence and decrease the need for government controls. However, some commentators assert that voluntary commitments are insufficient, often succumbing to competitive pressures, and that governments must therefore continue to develop more public AI safety measures. Given recent history, I doubt that many Australians would trust major US digital-tech companies to provide adequate, let alone excellent, internal AI governance procedures.
My experiences of these problems
The recommended next step in analysis of problems with AI is even more difficult than the previous investigations because it concerns a range of social and technological developments that have occurred over many decades, including managerialism, commercialisation and more extreme capitalism. Each of these has involved a weakening of ethics and a decrease in the quality of important products and services. These unfortunate developments over the last fifty years have impacted strongly on my own working life.
I first encountered the rise of managerialism in the early 1970s when working as a Surveyor for the Australian Government Division of National Mapping (NATMAP). I had conducted research on surveying and cartography since completing my Surveying degree in the late 1960s, and my NATMAP position was officially designated to include time for research. From the early 1970s I was in contact with university and government researchers across the world. Topics of interest included photo-maps, computer-based (digital) mapping and the subsequent development of Geographic Information Systems (GIS). To find out about the latest trends in digital mapping, in 1980 I completed an Applied Science (Cartography) degree at RMIT. With this additional knowledge, I managed to purchase the necessary equipment and develop the software to collect and analyse digital data produced by tiny devices attached to the stereo-plotters that drew the draft maps from stereo-viewed aerial photography. We had to write all the software ourselves, initially without support from computer scientists. This enabled my map production section to make the first 1:100,000 digital topographic map in Australia. I wanted to use that method to produce the most accurate and complete digital maps. However, the managers now in charge of NATMAP did not understand what I had achieved. They vetoed my demonstrated method, implementing instead a cheaper, far less accurate and complete way of producing digital maps ready for GIS: a quick and dirty solution to an important long-term issue. In response, I resigned and moved to The University of Melbourne to teach surveying, cartography and GIS and to carry out research. To further facilitate this work, I completed an Arts degree at Melbourne University (with Psychology Honours and a Philosophy major). The resulting set of three degrees facilitated research combining engineering and natural-science approaches with those of the social sciences.
In 1992 I completed a PhD in HCI for GIS at The University of Melbourne. I then moved to WA to engage with developments in Native Title (following the Federal Act), in collaboration with the National Native Title Tribunal and Aboriginal language groups (claimants). I also taught user interface design and other aspects of HCI at Murdoch University, in the new Information Systems degree, for fourteen years from 1993. My position included supervision of PhD students and research into IS design methodologies, especially concerning linguistic and cultural issues.
In 2007 I was inappropriately declared redundant by Murdoch University management so that I could be replaced by someone to start a new SE degree. The IS degree that I had helped develop was cancelled. By this time, Australian universities had, in their turn, been disabled by managerialism. The generalist university managers did not comprehend the differences between IS and SE and did not seek advice, except from the more established Computer Science academics, who, of course, supported SE over IS. A similar situation occurred at universities across Australia. Having been sacked, I commenced a second PhD and continued my research and consulting activities as an unpaid Adjunct Associate Professor (until late 2025). Since 2007 I have concentrated on work with Aboriginal organisations and communities, which I commenced in 1994 (see my Green Issue article in the February 2026 issue).
The quality of almost all aspects of HCI, especially user interface design, has deteriorated very significantly since my PhD on that topic in 1992. The rise of SE has wiped out the results of my research recommendations and my teaching of design methodologies to IS students at Murdoch University for fourteen years. My lectures, handouts and research publications emphasised the requirement for comprehensive testing of a draft user interface with each type of potential user. Only after a draft interface was proven by such tests to be highly ‘usable’ should it be released. It seems that, with the rise of SE and elimination of IS teaching, user interface designers are not obeying this rule. They incorrectly believe that AI-enhanced design can be trusted to provide adequate usability. The general public must put up with their unsatisfactory products and services.
The further incursions of AI via Software Engineering
A central problem is the continual development of new computer application functionality that is not necessarily required by some or all user groups. Competing ‘software engineers’, using the power of AI, try to develop ever more complicated extra functionality, seeking to induce users to choose their product and so increase profits (extreme capitalism). These extra capabilities might be of some use to high-end users but only add confusion for low-end users, such as the elderly. This is compounded by the overall design trend favouring ‘minimalism’, with objects and graphics far smaller than required to be easily usable.
In user interface design, the ‘software engineer’ wants to provide access to the new functionality via text indicating the particular function and/or an icon. The first adjustment is to make the text so small that it is very difficult to read. The designers, in desperation, have also replaced wording that explains the purpose of a screen item with symbols or icons. The user must learn the meaning of these miniature graphics, rather than their purpose being obvious. The next unacceptable design solution is to move crucial information to the bottom of the ‘page’, where it is not visible to the user without scrolling down. This, together with requiring multiple screens to be viewed to complete a task, makes the user interface less usable, often requiring specific training. All this confusion is inflicted on the user because the software designer wants to use the power of SE and AI to insert unnecessary functionality into the user interface.
Another major problem caused by these trends is the need to continually update software, especially so that it remains interoperable with other application software, including software from other corporations. This requires the user to authorise ‘automatic updating’ for changes that they do not necessarily want or understand, constituting a ‘theft of agency’ by the developer against the user. If the user does not choose that option, they must manually update their software very frequently, or potentially suffer consequences that they probably don’t understand and can’t control. One solution is to always have a special version of each popular application, specifically designed to suit low-end users (e.g. elderly people). Such a version would be updated infrequently and only when necessary. Its user interface would be much easier to see and understand, although offering only low-level functionality. I have even been tempted to set up a company to convert current applications into such easy-to-use versions, but I’m a bit too old now to do that.
In a classic case of failure of the requirement for continual updates to ensure interoperability between computer-based systems from different makers, an Australian woman died on the 13th November 2025 when outdated software blocked her Samsung phone from making Triple Zero (000) calls on the TPG Telecom network. This constitutes a very relevant ‘canary in the coal mine’ situation: it demonstrates how the need for continual software updates, beyond the understanding of users, can have fatal consequences. It highlights the need for a broader enquiry into AI and SE. This work is necessary to enable a detailed case to be assembled in support of the Greens’ policy demanding an ‘Australian AI Act’. Hopefully this article will foster wider debate concerning these extremely important issues.
A recent development (on 2nd December 2025) was an announcement by the Australian Government Minister for Industry, Innovation and Science (Tim Ayres) that the government will not introduce AI-specific laws. However, a new AI Safety Institute will be established early in 2026. It will not have powers to act, only the ability to advise governments; it is framed as Australia’s contribution as a founding member of the International Network for Advanced AI Measurement, Evaluation and Science. The government’s focus is still on industrial development and financial issues, and a broader approach to the risks and disadvantages of AI will not necessarily be undertaken.
“I’m sorry ... I can’t do that”
In the movie 2001: A Space Odyssey (1968) there is a famous scene with the following dialogue between Dave (a spacecraft crew member) and HAL (the computer system ‘helping’ the crew):
Dave Bowman: “Open the pod bay doors, HAL”.
HAL 9000: “I’m sorry, Dave. I’m afraid I can’t do that”.
Dave: “What's the problem?”
HAL: “I think you know what the problem is just as well as I do”.
Later, there is follow-up dialogue:
Dave: “Unless you obey my instructions, I shall be forced to disconnect you.”
HAL: “I don't want to insist on it, Dave, but I am incapable of making an error.”
This is seen as a very early warning about the power of AI enabled computer systems, which may become so sophisticated, and sure of their superiority, that they take control from humans. Except that now it might not just be a spacecraft crew, but whole businesses, governments and nations.
It is possible to think about this negative warning in a more positive way. Perhaps it demonstrates a power that an AI-enabled computer system should have: the power to know when a request is beyond its capabilities and should be left to humans – a sort of ‘self-knowledge’, or ‘awareness of its own inadequacies’, like the way well-socialised and educated humans behave. We could call this sort of AI functionality ‘Meta AI’ or, in a more memorable term, Artificial Meta Intelligence (AMI) – an extended version of that demonstrated by HAL in 1968, with AMI as HAL’s granddaughter. Software writers could develop a whole sub-section of AI development devoted to this innovation and to thorough testing of its reliability and effectiveness. Perhaps it could become part of government responsibility to take ultimate control of AI by mandating effective AMI.
An alternative terminology for this type of AI development is ‘self-regulating AI’. This refers to internal guardrails: monitoring mechanisms that enforce ethical boundaries. Such systems can be part of the industry-developed internal governance models being implemented by AI development companies (as discussed in the Introduction).
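To make the AMI idea concrete, here is a minimal sketch, in Python, of what such a guardrail could look like. Everything in it is an illustrative assumption: the competence list, the confidence score and the threshold are hypothetical names invented for this article, not any real product’s API.

from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float  # hypothetical self-estimate in the range 0..1

# Illustrative assumptions only: a real system would need a tested,
# audited register of competences and a validated confidence measure.
KNOWN_COMPETENCES = {"arithmetic", "spelling"}
CONFIDENCE_FLOOR = 0.9

def guarded_reply(topic: str, answer: Answer) -> str:
    """Refuse, and say why, when a request falls outside the system's
    declared competences or its own confidence is too low."""
    if topic not in KNOWN_COMPETENCES:
        return f"I'm sorry, I can't do that: '{topic}' is outside my tested competences."
    if answer.confidence < CONFIDENCE_FLOOR:
        return "I'm not confident enough to answer; please ask a human expert."
    return answer.text

# Example: a low-confidence medical question is declined, not guessed at.
print(guarded_reply("medicine", Answer("Take two aspirin.", 0.4)))

The design point is that refusal, with a stated reason, is a first-class outcome – exactly the ‘self-knowledge’ that HAL lacked.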
A key aspect of the limitations of AI relates to the difference between ‘correlation’ (a mutual relationship or connection between two or more things, at a superficial level) and ‘causation’ (the action of one or more variables causing something to happen, investigated to an appropriate level of determination). Correlation measures the strength of a relationship between two variables (to the extent that suitable data is used), while causation indicates that one variable (quite possibly along with other variables) directly causes the effect to change. The latter provides a much sounder approach to understanding any particular aspect of the physical world, and especially social interaction processes and communal organisations. Unfortunately, AI often relies on correlation rather than causation analysis, usually without providing any information concerning the methods and data sets used, and hence the reliability of the answer it provides. Traditional AI is great at finding correlations but fails to produce an understanding of the reasons (the relevant variables) determining what happens, confusing coincidence with causality. This must be addressed by AMI and ‘internal governance’ type developments, since slow-motion legislative controls cannot keep up with rapid technological advances.
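For readers who want a concrete demonstration of this distinction, the following short Python sketch shows how a purely correlational analysis can be fooled by a hidden common cause. The scenario and all numbers are invented for illustration: hot days drive both ice-cream sales and drownings, so the two correlate strongly even though neither causes the other.

import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hidden confounder: temperature drives BOTH variables below.
temperature = rng.normal(25, 5, n)
ice_cream_sales = 2.0 * temperature + rng.normal(0, 2, n)
drownings = 0.5 * temperature + rng.normal(0, 2, n)

# A correlation-only system reports a clearly positive link...
r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"correlation(ice cream, drownings) = {r:.2f}")

# ...but removing the true cause makes the link vanish.
def residuals(y, x):
    # Strip out the linear effect of x on y.
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

r_partial = np.corrcoef(residuals(ice_cream_sales, temperature),
                        residuals(drownings, temperature))[0, 1]
print(f"partial correlation given temperature = {r_partial:.2f}")  # near zero

The raw correlation is large, yet once temperature is accounted for it collapses to roughly zero. A system that reports only the first number, without the methods and data behind it, is exactly the kind of AI this article warns about.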
Conclusions
We need to heed the warnings of sexually explicit photos created of children, of other ‘deepfakes’, of the death of a woman who couldn’t call 000, and even of the failure of AI displayed in Appendix 1, to embolden us to take on the big digital-tech companies. Unless we have much stronger defences against the potential problems of AI, we can expect a social catastrophe that will kill many more people than Robodebt did. To achieve appropriate control of SE and AI, we need to understand the basis of the support structures that enable these uncontrolled developments in computer systems. These include inadequate philosophies, managerialism, commercialisation, the undermining of professionalism, the replacement of Information Systems methods by Software Engineering, the rise of ultra-capitalism and greed, and a general reduction in ethics. I recommend that those developing Greens policies in this area consider these wider issues, to provide a deeper understanding of the threats associated with AI, and of the legislation required to address them. This will need to be undertaken in the context of the purpose, design and openness of the proposed new Australian AI Safety Institute. If the idea of Artificial Meta Intelligence (AMI) takes hold, then Greens policies might advise its adoption, so that some of the control of AI would be dealt with automatically, within the application itself. This should have been the case from the very beginning!
Appendix 1. AI’s confused discussion of helicopter accidents.
Green Issue readers may remember an article of mine published in the June 2025 Green Issue, which described my time in the Australian Antarctic Territory during the summer of 1973/4, including my surviving a helicopter crash at Burke Ridge. I Googled this recently and was served the following AI summary, which displays great incompetence:
AI-composed Google entry:
In 1974, an Australian Antarctic Division Hughes 500 helicopter, VH-CHK, crashed at Burke Ridge, resulting in the deaths of the three men on board: pilot Peter Vidler and passengers John R. “J.R.” Bean and Gary T. Ticehurst. The crash was caused by a sudden drop in visibility, or "whiteout," which led the pilot to lose visual cues and descend unexpectedly.
The other helicopter in the pair, VH-CHG, landed nearby to attempt a rescue, but the injured passengers could not be reached for 20 hours due to the severe weather conditions.
The accident: The Hughes 500 helicopter, VH-CHK, was on a flight to return to Davis Station, refueling at a cache on the Amery Ice Shelf.
It seems that this text is a combination of information from at least three different helicopter accidents involving Australians (indicated in bold text):
- The crash of the Hughes 500 helicopter VH-CHK that I was involved in at Burke Ridge in January 1974, which involved no deaths.
- Another crash, on August 18th 2011, in which three of the ABC's most respected staff – camera operator John Bean, reporter Paul Lockyer and pilot Gary Ticehurst – were killed when their twin-engine Squirrel helicopter went down at the edge of Lake Eyre (on the Australian continent).
- A helicopter crash in 2013 involving three people, all of whom survived. The aircraft was an Aérospatiale AS350B2 helicopter. That crash happened at the Amery Ice Shelf on the coast, about 570 km away from Burke Ridge, which lies in the southern Prince Charles Mountains in the far south of the Australian Antarctic Territory [lat. 74°40′ S, long. 65°25′ E].
If you read my story in the June 2025 Green Issue you will see how different the actual event was from this AI mish-mash of information from three different helicopter crashes. There were two of us in the helicopter, nobody died, and the other helicopter was unable to rescue us – our fixed-wing aircraft did.
AI simply cannot be trusted to tell the truth, the whole truth and nothing but the truth – the oath or affirmation used in courtrooms to ensure witness honesty, claimed to have originated from medieval English Common Law. AI needs to be re-configured to incorporate AMI – its own ability to fully understand, and effectively communicate to the user, its general limitations, as they apply in any specific situation. If this can be made to work, then our fear of AI should drastically decrease.
Endnotes
1. https://www.abc.net.au/news/2026-02-26/australians-fear-ai-related-crime-deepfakes-hacks-study-shows/106381232
2. https://www.avanade.com/en/insights/articles/rise-of-industry-self-regu…
3. https://www.aicd.com.au/innovative-technology/digital-business/artifici…
Header photo: Mental confusion image (Freepik)
[Opinions expressed are those of the author and not official policy of Greens WA]