Generative AI warnings cont’d

It was a heavy-hitting week for humanity-scale concerns and generative AI warnings. Heavy-hitting not because of the usual media churn, but because of the actors involved and the emerging clarity. The warnings focused not on imminent AGI, but on what we already know is going on: social manipulation for power and profit.

  1. Kamala Harris met with executives from several AI companies and told them they had a moral obligation – I’m sure the US government did the same for Zuckerberg several times. Right?
  2. Over a decade ago Yuval Noah Harari captured dinner party conversations by declaring that the humble grain “had conquered humanity”. This week he declares the new conqueror to be LLMs, which have parsed the world’s textual content and mastered both language and manipulative intimacy.
    • It’s a credible claim: this week my mother choked up when I read her the short story from a previous post, even after I had disclosed it was written by a robot. The prose aligned so strongly with her lived experience that the writing resonated enough to evoke an emotional response.
    • Harari makes it clear that democracy’s core strength, and therefore its weakness, is communication. Anything that can hack that is no less than an attack on the foundation of civilisation.
    • He calls for regulation that demands (even if not enforceable) that any AI communication be disclosed as such. The uncanny valley problem is being solved so dramatically that it’s plain to see it’s a (err) plain and not a valley.
  3. Microsoft’s chief scientific officer, Eric Horvitz, shows worrying signs that Microsoft thinks its own ethics and safety teams can protect humanity – WTF. Some of us remember that Windows is the most successful malware distribution platform in the history of humanity; his comforting tone rings hollow.
    • Horvitz’s hubristic antidote is to test in the wild, which is great for traditional open source software but also reminiscent of one theory of a certain biological pandemic we still live with in 2023. What could possibly go wrong?
    • Credit Horvitz that he does also call out that the immediate threat is GPT in the hands of nefarious people, organisations or countries. (More on that below.)
    • Horvitz is dismissive of the six-month Pause letter and calls it a distraction. That is not going to age well.
  4. The blockbuster news event was “the godfather of AI”, Geoffrey Hinton, going public following his departure from Google.
    • His understanding of the issues Harari raised is so deep that he can express the concerns in a few short quips. (Contextual humour may indeed be a human characteristic that is hard to replicate. Memes are easy, but summarising the zeitgeist with brief wit is not ChatGPT’s forte.)
    • The big, big claim he makes is that the human capacity to learn appears substandard in comparison to AI’s: back-propagation may simply be a better learning algorithm than whatever the human brain possesses (see the sketch after this list).
      • AI packs more information into fewer connections.  “That is scary”.
      • He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others.
      • Speed of transmission and communicability (see below – an extra dimension to Harari’s point about communication).
    • Multi-polar traps/arms-race
      • Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology.
      • The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
      • Bad Actors: “It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
      • Hinton did not sign the recent “AI pause” letters.
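
To make Hinton’s back-propagation point concrete, here is a minimal sketch of the algorithm: a tiny NumPy network learning XOR by repeatedly nudging every weight in the direction that reduces its error. The network shape, learning rate and step count are illustrative assumptions of mine, not anything Hinton described.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four XOR examples: the classic task a single layer cannot learn.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# A 2-4-1 network: the weights and biases are the "connections" being trained.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
lr = 1.0  # learning rate (illustrative)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (back-propagation): push the output error back
    # through each layer to get the error gradient for every weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent: every connection is nudged downhill at once.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0] once trained
```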

Gems from Hinton

  • Humans may just be a passing phase in the evolution of intelligence. Biological intelligence was needed to create digital intelligence, which first absorbs everything humans have written and then moves on to direct experience of the world. Humans may be allowed to live on to keep the power stations running.
  • Ray Kurzweil wanted immortality – it may turn out that “immortality is not for humans”.
  • OpenAI has read everything that Machiavelli ever wrote.
  • Current AI IQ: around 80 or 90
  • GPT-4 has around a trillion connections; the human brain has 100 trillion
  • Humans evolved to be power efficient – 30 watts with lots of connections
  • AI currently uses around a megawatt when training, but learns much more than humans
  • Humans are poor at coordinating distributed processing because our brains are all different (the trained models). AI can orchestrate faster digital learning: identical models and weights can be shared across many nodes immediately (see the sketch after this list).
  • Sentience doesn’t matter; it’s about AI pursuing unintended subgoals – an obvious subgoal is “get more control”
  • Humans setting the guardrail rules (later) for AI is like a 2-year-old child negotiating with its parent.
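
The weight-sharing gem is worth spelling out. Below is a toy sketch (entirely hypothetical data and models, not Hinton’s example) of why digital learners coordinate so well: two identical models each train on data the other never saw, then pool what they learned by literally averaging their weights – something two differently wired brains cannot do.

```python
import numpy as np

rng = np.random.default_rng(2)
true_w = np.array([2.0, -3.0, 0.5])  # the pattern both nodes try to learn

def make_shard(n):
    """Generate one node's private training data (hypothetical)."""
    X = rng.normal(size=(n, 3))
    return X, X @ true_w + rng.normal(scale=0.1, size=n)

def fit(X, y, steps=200, lr=0.1):
    """Plain gradient descent on squared error for a linear model."""
    w = np.zeros(3)
    for _ in range(steps):
        w -= lr * (X @ w - y) @ X / len(y)
    return w

# Each "node" learns from data the other never saw...
w_a = fit(*make_shard(100))
w_b = fit(*make_shard(100))

# ...then the knowledge transfers instantly: copy or average the weights.
w_shared = (w_a + w_b) / 2
print(w_a.round(2), w_b.round(2), w_shared.round(2))  # all near true_w
```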

When will pro-human/planet AI overtake manipulation?

In 2002 my team released an AI neural net plugin for our SurfControl email filter, trained to detect possibly inappropriate (nude) images in corporate emails. As an “applied” use of AI it was groundbreaking and exciting. As a product it was very good at detecting sand dunes and baby faces 👶. A neural net – the underlying technology ChatGPT is built on – attempts to loosely copy how the brain works. Why was our neural net plugin inaccurate? It had hundreds or thousands of connections. GPT is rumoured to have a trillion, and the human brain has 100 trillion connections. So the same plugin today could actually describe the picture in detail.
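
To put those connection counts side by side, here is a minimal sketch in plain NumPy of a classifier at roughly that early scale. It is hypothetical (not the actual SurfControl plugin): the 32 hand-crafted features and the layer sizes are my assumptions, and the point is only the arithmetic of the connection count.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assume 32 hand-crafted colour/texture features per image thumbnail,
# a small hidden layer, and a single "inappropriate?" output score.
n_in, n_hidden, n_out = 32, 16, 1
W1 = rng.normal(size=(n_in, n_hidden))   # input -> hidden connections
W2 = rng.normal(size=(n_hidden, n_out))  # hidden -> output connections

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def score(features):
    """Score one feature vector: closer to 1.0 means flag the image."""
    hidden = np.tanh(features @ W1)
    return sigmoid(hidden @ W2)

connections = W1.size + W2.size
print(f"this net: {connections} connections")              # 528
print(f"GPT (rumoured 1e12): {1e12 / connections:.0e}x more")
print(f"human brain (1e14):  {1e14 / connections:.0e}x more")
```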

The AI of the decade preceding ChatGPT was not applicable to the masses; it was applied on the masses, in the form of Netflix recommendations, song playlists and of course fake news. It is in the manipulation of humans that AI found chasm-crossing monetary value. The attention economy benefited the few.

Amongst this week’s news, which I cover in a separate post, is this horrifying chart in The Economist.

It shows the urgency of addressing some huge problem in our western cultures – since 2015 there have not been enough pandemics or financial meltdowns to explain the alarming increase in teenage female suicide. In contrast, it’s easy to surmise that smartphones and visual social media are creating the problem.

This is heartbreaking. Will it take decades (like it did with tobacco and asbestos) to protect our community from the causes? It’s not new news; here is Jean Twenge in 2017.

If it walks like a duck etc…

In a separate post I cover the acceleration of warnings from credible AI industry insiders such as Geoffrey Hinton.