source link: https://diginomica.com/good-news-weve-reached-peak-human-bad-news-now-ai-will-destroy-our-ability-work-and-learn

The good news - we’ve reached 'peak human'. The bad news - now AI will destroy our ability to work and learn

By Chris Middleton

August 16, 2023


We’ve reached ‘peak humanity’, an era in which the greatest number of people worldwide have attained high standards of education, reasoning, rationality, and creativity. 

However, the rapid adoption of Artificial Intelligence (AI) may reverse that progress, stripping away our incentives to grow and advance ourselves, both individually and collectively. As a result, there is a “grave risk” that humans will become “de-educated and decoupled from the driving seat to the future”. 

That’s according to a new academic paper entitled ‘The Future of AI in Education: 13 Things We Can Do to Minimize the Damage’.  It warns:

As AI capabilities grow, our incentives to learn might diminish. It is not inconceivable that many of us might even lose the ability to read and write, as these skills would, for many, serve no useful purpose in day-to-day living.

The education factor

A grim scenario. However, unlike other warnings about AI’s potential for harm, this one comes from a sector that vendors claim will benefit most from the technology: education. 

Sour grapes? The 45-page document, published on educational research forum EdArXiv, notes: 

In all the hype about AI, we need to properly assess these risks to collectively decide whether the AI upsides are worth it and whether we should ‘stick or twist’.

The working paper’s authors are Arran Hamilton, Director of Education at Cognition Learning Group; Dylan Wiliam, Emeritus Professor of Educational Assessment at the UCL Institute of Education (at University College London); and John Hattie, Professor of Education at the University of Melbourne.

By publishing in the eye of the AI hurricane, as it were, the authors aim to be a catalyst for debate, reducing the “probability that we sleepwalk to a destination that we don’t want and can’t reverse back out of.”

Mixed metaphors aside, they advise slowing our collective progress so that society has time to make informed choices rather than moving at a pace dictated by technologists. It is advice shared by a growing number of experts and leaders, of course.

So, what are the eminent authors so worried about? 

Four conceivable futures

They envisage four possible scenarios that could result from the hype cycle, and the rush (in diginomica’s view) towards tactical, me-too adoption (a phenomenon explored in many of our reports this year). 

The first is curtailment of the technology’s use, as AIs “greatly surpass human reasoning capabilities – possibly very rapidly”. But an outright ban by governments is unlikely, say the authors, who note:

Development could be slowed to ensure better regulation and safety, and to give time for careful consideration of which of the other three scenarios we want. 

However, if future developments in AI were banned, humans would still be in the driving seat and still require education. There might also be significant benefits for human learning from leveraging the AI systems that have been developed so far. And that might, subject to satisfactory guardrails, be excluded from any ban.

An interesting point. What do we need AIs to do that they are currently unable to? And why do we need them to do it? Surely questions that are worth asking.

The second, more likely outcome is the rise of “fake work”, warns the paper: jobs invented to keep people busy in a near future when AI would be able to perform every task to an advanced level:

[A scenario in which] AI gets so good that it can do most, if not all, human jobs. But governments legislate to force companies to keep humans in the labour market, to give us a reason to get up in the morning.

We think this scenario is possible in the medium-term – beyond 2035 – as Large Language Models [LLMs] and other forms of AI become ever more sophisticated. But it may be highly dispiriting for humans to be ‘in the room’ while AI makes the decisions, and no longer at the forefront of ideas or decision-making. 

Even with increased education we would be unlikely to overcome this, or to think at the machines' speed and levels of sophistication.

I believe that timescale may be shorter than the authors predict. So-called ‘prompt engineers’, people employed to press buttons while AIs do the creative work, are already among us. Meanwhile, even industries that have long produced human-readable content for flesh-and-blood readers seem to be refocusing their businesses on machine-generated content for clicks – and for other machines – mostly to impress their investors.

A third, more extreme scenario is transhumanism, in which we simply accept our fate and begin the gradual process of merging with machines to better ourselves – the logical endpoint of industrialization and automation, perhaps: the point at which evolution becomes mechanised. The paper says:

[In this scenario] we choose to upgrade ourselves through brain-computer interfaces to compete with the machines and remain in the driving seat.

We think this is possible in the longer term – beyond 2045 – as brain-computer interfaces become less invasive and more sophisticated. But as we become more ‘machine-like’ in our thinking, we may be threatened with potentially losing our humanity in the process.

This is a useful and long overdue perspective from academia. For years, science fiction writers have warned of the dangers of machines becoming more human, thus ignoring the greater risk of humans becoming more like machines. The authors add:

There would also no longer be any need for schooling or university, because we could ‘download’ new skills from the cloud.

A troubling scenario, because it implies that cherished concepts like lifelong learning and self-betterment may be replaced with instant, passive, supine consumption, devaluing human creativity, engagement, and diversity in the process. 

A left-brain-focused world, perhaps, in which right-brain skills are diminished and left to machines – to AIs that merely produce derivative simulations of human brilliance. (Want a new Beatles song? Just hit ‘generate’.)

The fourth possible outcome is the introduction of a Universal Basic Income (UBI), in which everyone is thrown a financial lifeline so they can survive in a world where real jobs are scarce and human society has become increasingly diffuse. The paper says:

[In this scenario] we decouple from the economy, leaving the machines to develop all the products and services; and to make all the big decisions. And we each receive a monthly ‘freedom dividend’ to spend as we wish. 

Community-level experiments in UBI have already been undertaken, and widespread adoption of this scenario could be possible by 2040. Some AI developers, including Sam Altman [OpenAI CEO], are already advocating this. 

It would enable us to save our ‘humanity’ by rejecting digital implants, but potentially we would have no reason to keep learning, with most of the knowledge work and innovation being undertaken by the machines. We might pass the time playing parlour games, hosting grand balls, or learning and executing elaborate rituals. We would perhaps also become ever more interested in art, music, dance, drama, and sport.

A Black Mirror-style outcome, in fact – with the irony being that tech innovators have used art, music, writing, and other creative pursuits as a Trojan horse to lure the public into embracing AI, by offering them free toys with the promise of never having to pay for talent again. The arts are simply there for the scraping, if you’re unscrupulous enough to do it.

However, the political dimension is absent from this scenario. For example, it is hard to imagine any right-leaning, conservative government – and there are many of them at present – embracing such a socially progressive idea as paying humans to amuse themselves. 

Yet despite all this, the authors are neither naysayers nor doom merchants in any facile sense. Instead, they are simply conducting thought experiments via their own highly developed capacity for critical reasoning: the very skill that is alarmingly thin on the ground in 2023.

I believe critical reasoning, rather than passive acceptance of hype, is the only sensible way forward in the AI debate. There is nothing to be gained by listening to the many evangelists out there who see themselves as John the Baptist to Big Tech Jesus (“We’ve got to surf the wave of creative destruction, man.” The hell we do.)

AI’s upsides for educators

To their credit, however, the authors list the many advantages of AI for educators.  Among these are: greater personalization in education; adaptive content; democratization of access; cost and teacher workload reductions; more culturally relevant content; continuous assessment and feedback; new coaching and decision-support systems; assistive digital tutors; deeper inference in evaluating students’ needs; and greater, more targeted support for neuro-diverse learners and children with disabilities. 

An impressive list, and all areas where the current education system could do much better.

But the downsides are legion too, they warn. These include: AI’s potential for inaccuracy, producing confident-sounding but incorrect answers; its capacity for bias, not to mention for plagiarism and risks to data privacy, security, and protection; intrusive surveillance of students and teachers; algorithmic discrimination; opaque decision-making; loss of vital context in the education process; a focus on shallow factual box-ticking; and systems using poor pedagogic reasoning to speed up teaching itself. 

In short, a form of automated, target-based education that relies on the accuracy of inexpert, vendor-trained systems rather than the care and expertise of experienced humans. The paper says:

Thankfully the future is not (yet) set in stone. There are many different possibilities. Some have us still firmly in the driving seat, leveraging our education and collective learnings. However, in other less positive possible futures, humanity might lose its critical reasoning skills, because the advice of the machines is always so good, so oracle-like, that it becomes pointless to think for ourselves, or to learn.

Then it adds:

Many of our distinctly human capabilities, such as emotion, empathy, and creativity can be explained and modelled by algorithms, so that machines can increasingly pretend to be like us.

Pretending to be human

That pretence is a critical point. I would argue that the epochal problem facing humanity at present is not AI itself – a tool is a tool is a tool. Rather, it is human beings projecting intelligence and artificiality onto systems that have neither – and rushing to adopt them because everyone else has. The Great Stupid era.

As explored previously on diginomica, countless people hail the apparent genius of ChatGPT and others, thus crediting derivative work engines with the brilliance of millions of human minds, which has simply been scraped from the Web. Credit the humans first.

That said, the authors warn:

We should work on the assumption that we may be only two years away from Artificial General Intelligence (AGI) that is capable of undertaking all complex human tasks to a higher standard than us, and at a fraction of the cost. 

But even if AGI takes several decades to arrive, the incremental annual improvements are still likely to be both transformative and discombobulating. Given these potentially short timelines, we need to quickly establish a global regulatory framework – including an international coordinating body and country-level regulators.

At present, that process is both voluntary and piecemeal. 

What can be done? 

So, what are the authors’ other recommendations, beyond the above two? 

The paper lists eleven more:

• AI companies should go through an organizational licensing process before being permitted to develop and release systems into the wild.

• End-user applications should go through additional risk-based approvals before being accessible to members of the public. These processes should be proportionate with the level of risk/harm – with applications involving children and vulnerable or marginalized people subject to more intensive scrutiny. 

• Students (particularly children) should not have unfettered access to these systems before risk-based assessments/trials have been completed. 

• Systems used by students should always have guardrails in place that enable parents and education establishments to audit how and where children are using AI in their learning. 

• Legislation should be enacted to make it illegal for AI systems to impersonate humans or interact with them without disclosing that they are an AI. 

• Measures to mitigate bias and discrimination in AI should be implemented. This could include guidelines for diverse and representative data collection, and fairness audits during the LLM development and training process. 

• Stringent regulations should be introduced for data privacy and consent, especially considering the vast amounts of data used by AI systems. Those regulations should define who can access data, under what circumstances, and how it can be used. 

• AI systems should be required to provide explanations for their decisions where possible, particularly for high-stakes applications like student placement, healthcare, credit scoring, or law enforcement. 

• Distributors should be made responsible for removing untruths, malicious accusations, and libel – within a short time of being notified. 

• Evaluation systems should continuously monitor and assess the safety, performance, and impact of AI applications. 

• And proportionate penalties should follow any breach of AI regulations. The focus could be on creating a culture of responsibility and accountability within the AI industry and among end-users. 

The authors add:

Our own position is that there is great uncertainty, but we ALL need to maintain a stance of vigilance and assume – from now on – that at any moment in time we could be only two years out from machines that are at least as capable as us.  So, we can’t bury our heads in the sand or get parochial. We need to grapple with these ideas and their implications today.

My take

A timely piece of work – despite a tone that some will see as alarmist.

The paper coincides with a (non-AI-related) campaign by the International Labour Organization, which states that “decent work” and quality jobs contribute to poverty reduction, higher standards of education, equality in the workplace, and social justice, and are thus the foundation of “fair and responsible” economic growth.

Placed in the context of this new AI paper, therefore, the implication may be that the rise of fake work (to keep humans employed in a near future of AI dominance) could lead to reversals in all of those areas. The result might be soaring poverty, falling equality, and reduced social justice – plus a collapse in real education standards, as predicted in the AI paper itself.

As the authors note, we really must be vigilant.

