Generative AI warnings cont’d

It was a heavy-hitting week for humanity-scale concerns and generative AI warnings. Heavy-hitting not because of the usual media churn but because of the actors involved and the emerging clarity. The warnings focused not on imminent AGI but on what we already know is going on – social manipulation for power and profit.

  1. Kamala Harris met with executives from several AI companies and told them they had a moral obligation – I’m sure the US government did the same for Zuckerberg several times. Right?
  2. Over a decade ago Yuval Noah Harari captured dinner party conversations by declaring that the humble grain “had conquered humanity”. This week he declares the new conqueror to be LLMs, which have parsed the world’s textual content and mastered the art of language – and with it the manipulation of intimacy.
    • It’s a credible claim: this week my mother choked up when I read her the short story from a previous post, even after I had disclosed it was written by a robot. The prose aligned so strongly with her lived experience that it resonated enough to evoke an emotional response.
    • Harari makes it clear that democracy’s core strength, and therefore its weakness, is communication. Anything that can hack that is nothing less than an attack on the benchmark of civilisation.
    • He calls for regulation that demands (even if not easily enforceable) that any AI communication be disclosed as such. The uncanny valley problem is being solved so dramatically that it’s plain to see it’s a (err) plain and not a valley.
  3. Microsoft’s chief scientific officer, Eric Horvitz, shows worrying signs that Microsoft thinks its own ethics and safety teams can protect humanity – WTF. Some of us remember that Windows is the most successful malware distribution platform in the history of humanity, so his comforting tone rings hollow.
    • Horvitz’s hubristic antidote is to test in the wild, which is great for traditional open source software but also reminiscent of one theory about a certain biological pandemic we still live with in 2023. What could possibly go wrong?
    • Credit to Horvitz that he does also call out that the immediate threat is GPT in the hands of nefarious people, organisations or countries. (More on that below.)
    • Horvitz is dismissive of the six-month Pause letter and calls it a distraction. That is not going to age well.
  4. The blockbuster news event was “the godfather of AI”, Geoffrey Hinton, going public following his departure from Google.
    • His understanding of the issues Harari raised is so deep that he expresses the concerns in a few short quips. (Contextual humour may indeed be a human characteristic that is hard to replicate. Memes are easy, but summarising the zeitgeist with brief wit is not ChatGPT’s forte.)
    • The big, big claim he makes is that the human capacity to learn looks substandard by comparison: back-propagation may simply be a better learning algorithm than whatever the brain uses (a minimal sketch of back-propagation follows this list).
      • AI packs more information into fewer connections.  “That is scary”.
      • He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others.
      • Speed of transmission and communicability (see below – and an extra dimension to Harari’s point about communication).
    • Multi-polar traps/arms-race
      • Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology.
      • The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
      • Bad Actors: “It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
      • Hinton did not sign the recent “AI pause” letters.
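
To ground what Hinton means by back-propagation: the network makes a prediction, the error at the output is turned into a gradient, and that gradient is pushed backwards through the layers so every connection gets nudged in the direction that reduces the error. Here is a minimal sketch (plain NumPy, a hypothetical toy XOR dataset – not anything from Hinton’s work):

```python
# Minimal back-propagation sketch: a tiny two-layer network learning XOR.
# Toy illustration only -- the network size and data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output
lr = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: compute the network's prediction.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through each layer
    # and nudge every weight against its gradient.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # typically converges towards [0, 1, 1, 0]
```

Hinton’s claim is that this mechanical credit-assignment trick, scaled up, may extract more from the same data than whatever learning algorithm our brains run.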

Gems from Hinton

  • Humans may just be a passing phase in the evolution of intelligence. Biological intelligence was needed to create digital intelligence, which first absorbs everything ever written and will then move on to direct experience of the world. Humans may be kept around for a while to keep the power stations running.
  • Ray Kurzweil wanted immortality – it may turn out that “immortality is not for humans”.
  • OpenAI has read everything that Machiavelli ever wrote.
  • Current AI IQ is around 80 or 90.
  • GPT-4 has around a trillion connections; humans have about 100 trillion.
  • Humans evolved to be power efficient – about 30 watts with lots of connections.
  • AI currently uses around a megawatt when training, yet learns much more than humans.
  • Humans are poor at coordinating distributed processing because our brains (the trained models) are all different. AI can orchestrate faster digital learning because models and weights can be immediately shared across many nodes (see the sketch after this list).
  • Sentience doesn’t matter; it’s about AI pursuing unintended subgoals – and an obvious subgoal is “get more control”.
  • Humans setting the guardrail rules (later) for AI is like a two-year-old child negotiating with its parent.
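
The weight-sharing point above deserves a concrete picture: identical digital replicas can pool what each has learned simply by exchanging and averaging weights, whereas two humans can only swap knowledge through slow, lossy language. A minimal, entirely hypothetical sketch of that merge step:

```python
# Sketch of Hinton's "communicability" point: identical digital replicas can
# pool what each has learned by exchanging and averaging their weights.
# Hypothetical toy numbers only -- nothing here is a real model.
import numpy as np

rng = np.random.default_rng(1)

def local_learning_step(weights, gradient, lr=0.1):
    """One gradient step on whatever data this replica happened to see."""
    return weights - lr * gradient

# Two copies of the same model start from identical weights...
shared_start = rng.normal(size=8)
replica_a = local_learning_step(shared_start, rng.normal(size=8))  # trained on data A
replica_b = local_learning_step(shared_start, rng.normal(size=8))  # trained on data B

# ...and then instantly merge their separate experience by averaging weights.
merged = (replica_a + replica_b) / 2
print(np.round(merged, 3))
```

A brain cannot do the equivalent: my synapse strengths are meaningless inside your skull.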

When will pro-human/planet AI overtake manipulation?

Back in 2002 my team released an AI neural net plugin for our SurfControl email filter that was trained to detect possibly inappropriate (nude) images in corporate emails. As an “applied” use of AI it was groundbreaking and exciting. As a product it was very good at detecting sand dunes and baby faces 👶. A neural net – the underlying technology ChatGPT is built on – attempts to copy how the brain works. Why was our neural net plugin inaccurate? It had 100s or 1000s of connections. GPT is rumoured to have a trillion, and the human brain has 100 trillion connections. The same plugin today could actually describe the picture in detail.
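
Putting the connection counts from that paragraph side by side makes the point about scale (the figures are the approximate and rumoured numbers quoted above, not measurements):

```python
# Rough scale comparison of the connection counts quoted in this post.
# All figures are approximate or rumoured, as stated above -- not measurements.
connections = {
    "SurfControl image plugin": 1_000,                  # "100s or 1000s" of connections
    "GPT-4 (rumoured)":         1_000_000_000_000,      # ~1 trillion
    "Human brain":              100_000_000_000_000,    # ~100 trillion
}

baseline = connections["SurfControl image plugin"]
for name, count in connections.items():
    print(f"{name:>26}: {count:.0e} connections ({count // baseline:,}x the plugin)")
```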

The AI of the decade preceding ChatGPT was not applied for the masses; it was applied on the masses, in the form of Netflix recommendations, song playlists and of course fake news. It was in the manipulation of humans that AI found its chasm-crossing monetary value. The attention economy benefited few.

Amongst this week’s news, which I cover in a separate post, is this horrifying chart in The Economist.

It shows the urgency of addressing some huge problem in our western cultures – since 2015 there have not been enough pandemics or financial meltdowns to explain the alarming increase in teenage female suicide. In contrast, it’s easy to surmise that smartphones and visual social media are creating the problem.

This is heartbreaking. Will it take decades (like it did with tobacco and asbestos) to protect our community from the causes? It’s not new news; here is Jean Twenge in 2017.

If it walks like a duck etc…

In a separate post I cover the acceleration of warnings from credible AI industry insiders such as Geoffrey Hinton.

Prometheus, Icarus, Sweetums and Lucifer walk into a bar…

regarding: https://paulkingsnorth.substack.com/p/the-universal

I enjoyed this intriguing and well-written post, forwarded to me by a friend. Here are some thoughts on it:

By proposing (building on previous posts) that each age of humanity fixates on a sovereign for its age, Kingsnorth sets out to uncover the someone or something that is heir to the throne.

Spoiler alert: it’s an internet-connected AI.

In some ways it’s a companion piece (or, err, homage) to Scott Alexander’s Moloch essay. Interestingly, that essay was recently discussed (at around 42:09) by Max Tegmark, one of the orchestrators of the GPT Pause letter.

I fear this article is overly biased towards a core Christian narrative: that earth is a battleground of good vs evil. Kingsnorth borrows Steiner’s Lucifer, Christ, Ahriman* trinity before dismissing it. I guess he was “steel-manning” Steiner’s view in order to dismiss it while keeping Christ in the middle – much the same as Buddhism’s middle way or Lao Tzu’s water method.

The Problem with Monsters 1

Unfortunately for Kingsnorth, Steiner’s model is the most theologically compelling element of the piece. He does agree that Ahriman is “the spiritual personification of the age of the Machine” (updated to the etheric-realm emergence of computer and AI tech).

He provides some background on how tech perniciously reprograms us in turn. No arguments there, and the AI 1.0 of social media is proof that exponential capitalism has no ethics when it comes to hacking human psychological bugs for profit. John Vervaeke would neutralize this; I can’t recall his exact example, but it goes something like:

  • mankind invents a thing for drinking – let’s call it a “cup”
  • the cup is made in the shape best adapted to how we hold things
  • soon, when we reach for a cup, our hand forms into the shape of the cup
  • we have an “adaptive fit” to the tool

Similarly, Yuval Noah Harari asserted that humans (Sapiens) were conquered by agriculture: the humble grain dominated us, transformed us, terminated hunter-gatherer societies and gave us bad backs from bending over all the time.

So Kingsnorth, too, is invoking an old human-hacking technology – “story-time with monsters” – to deliver a morality tale.

The Actual Themes

And I think that is a disservice to the actual themes:

  1. What does each age of humanity worship? (In the era of agriculture it was the plough and commerce; in our era Kingsnorth proposes it’s internet-connected AI and our infatuation with all things computer**.)
  2. Humanity has an appalling track record of hubris in defending against (1).
  3. In regards to (2), the exponential technology that the internet and AI represent is (as Tristan Harris rightly points out) much worse than nuclear weapons, because those weapons can’t rebuild and “improve” themselves the way AI can.
  4. AI people are modern hipster versions of Saruman’s orc workers, actively constructing the beast that will dominate us. They use words like “ushering”, signalling a subservient act, pre-emptively assigning grandeur in a worshipful way to the new intelligence they have a hand in creating.

The Problem with Monsters 2

I’d agree with all of this, but the disservice is to use amygdala-stimulating monster stories to mobilize a response. Perhaps I misquote Einstein: “No problem can be solved from the same level of consciousness that created it.”

Therein lies the questionable efficacy of Christian “battle of light vs evil” grand narratives in a post-post-modern world. I am under no delusion that evil doesn’t exist (we all know it as a current within ourselves), but anthropomorphising human psychopathy into entities can’t stop at two baddies vs one goodie. Why not reel in the whole goetic demon universe?

It strikes me that Tegmark and co’s letter is a more pre-frontal cortex response and therefore more constructive. The debates on that are outside the scope of this post.

Beyond those Amygdala-hacking Monsters

I’d propose we drop the ancient myth method of trying to shock us into acknowledging an absence of wisdom and find a new way.

The recent movie “Don’t Look Up”, which Max Tegmark reminds us of, is a modern analogy: our collective hypnosis and daily momentum stop us from acting to protect a better outcome.

For me the movie was a long-form version of Douglas Adams’s hilarious “SEP cloaking device” (from The Hitchhiker’s Guide to the Galaxy). In his book, the device is a technology that renders objects invisible by making them seem like they are Somebody Else’s Problem.

Another known metaphor is the frog being boiled in slowly heating water.

My understanding is that, from an evolutionary perspective, we respond to rapid movements as threats in a linear landscape such as a jungle or savannah. The problem for us now is that changes are exponential, and we don’t have evolutionarily adapted equipment to see that readily. Couple that with the societal stress of mortgage slavery and capitalist, materialistic fixations, and we can’t do more than glance at an “internet-connected AI” and see nothing but the SEP cloak.

We need exponential thinkers like Tegmark and the (at the time of writing, over 26,600) signatories to re-program our response – not dismissively (as so many tech-bros would), not naively, not out of fear of monsters, but no less stridently.

The Tragedy of the Global Commons indeed. I hope Kingsnorth’s post helped people find that new way rather than just shake their fists.

Sweetums

So what has this got to do with Prometheus, Icarus, Sweetums and Lucifer?

  • Prometheus was a rebel for technological advancement and was punished for his hubris.
  • Icarus was test-flying a new technological advancement, and his hubris was his demise.
  • Lucifer was re-cast by Christians as Satan-like, but was originally a bringer of light.***
  • Sweetums? He’s just a monster. Adults fear monsters; kids have a balance of fear and the “delight of fear” 🤔🤗

* Ahriman: a Zoroastrian figure representing darkness and evil in material form.

** Is it our love of tech, or is that really just a mirror of our love of intellect and facts?

*** Lucifer, as the fallen angel and one of Steiner’s eternal forces on earth (not Satan), is really the main clue. Controversially, we can take the original morality tale as: “humanity’s greatest evil is to worship mind above all else” – cast out for his pride and desire for power, he and his followers “fell”, the intellectual analogue of Adam and Eve’s “fall” through desire for knowledge, power and earthly delights (desires).

As an antidote to us no longer being the smartest (information-processing) beings on the planet, Tegmark proposes some other human traits (especially subjective experience) that perhaps we should celebrate and put at the center of our self-worth. It’s a great proposal and in agreement with my earlier post, where I explored this. I have updated that post accordingly.