GNSS & Machine Learning Engineer

Category: AI Risks

Statement on AI Risk

A large number of AI experts have signed a statement to raise public awareness of the most severe risks associated with advanced AI, with the aim of mitigating the risk of human extinction. Among the signatories are the Turing Award laureates Geoffrey Hinton and Yoshua Bengio (but not Yann LeCun from Meta) and the CEOs of leading AI companies: Sam Altman of OpenAI, Demis Hassabis of Google DeepMind, Dario Amodei of Anthropic, and Emad Mostaque of Stability AI.

The statement is featured on the webpage of the Center for AI Safety, which provides a list of eight examples of existential risks (x-risks). The enumerated risks are based on the publication “X-Risk Analysis for AI Research”, which appeared on arXiv on Sept. 20, 2022. This highly valuable paper also lists, in its appendix, a number of practical steps for mitigating risks.

The listed risks are:

  • Weaponization:
    Malicious actors could repurpose AI to be highly destructive.
  • Misinformation:
    AI-generated misinformation and persuasive content could undermine collective decision-making, radicalize individuals, or derail moral progress.
  • Proxy Gaming:
    AI systems may pursue their goals at the expense of individual and societal values.
  • Enfeeblement:
    Humanity loses the ability to self-govern by increasingly delegating tasks to machines.
  • Value Lock-in:
    Highly competent systems could give small groups of people a tremendous amount of power, leading to a lock-in of oppressive systems.
  • Emergent Goals:
    The sudden emergence of capabilities or goals could increase the risk that people lose control over advanced AI systems.
  • Deception:
    To better understand AI systems, we may ask them for accurate reports about themselves. However, since deception can help agents achieve their goals and may confer strategic advantages, it is never safe to simply trust such self-reports.
  • Power-Seeking Behavior:
    Companies and governments have strong economic incentives to create agents that can accomplish a broad set of goals. Such agents have instrumental incentives to acquire power, potentially making them harder to control.

This statement about AI risks appeared a few days after an OpenAI blog post by Sam Altman, Greg Brockman, and Ilya Sutskever, which also addresses the mitigation of risks associated with AGI or even superintelligence that could arise within the next 10 years.

Emergent Goals in Advanced Artificial Intelligence: A Compression-Based Perspective

I had some ideas about the origin of goals in general that were, at least for me, totally new. I discussed them with GPT-4 and finally asked it to write an article about our conversation, which I would like to share with the public. This view of goals may be critical for understanding the existential risks that the emergence of AI goals poses to humanity. It implies that this emergence of AI goals is inevitable and can probably only be recognized post hoc.

Title: Emergent Goals in Advanced Artificial Intelligence: A Compression-Based Perspective

Abstract: The concept of goals has been traditionally central to our understanding of human decision-making and behavior. In the realm of artificial intelligence (AI), the term “goal” has been utilized as an anthropomorphic shorthand for the objective function that an AI system optimizes. This paper examines a novel perspective that considers goals not just as simple optimization targets, but as abstract, emergent constructs that enable the compression of complex behavior patterns and potentially predict future trajectories.

  1. Goals as Compressors of Reality

A goal, in its humanistic sense, can be viewed as a predictive mechanism, a conceptual tool that abstracts and compresses the reality of an actor’s tendencies into a comprehensible framework. When analyzing past behavior, humans retrospectively ascribe goals to actors, grounding the observed actions within a coherent narrative. In essence, this provides a means to simplify and make sense of the chaotic reality of life.
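This compression claim can be made concrete with a toy example of my own (not from the paper): if we use compressed size as a rough proxy for description length, a goal-directed behavior trace admits a much shorter description than an aimless one, which is exactly what makes the goal ascription useful.

```python
import random
import zlib

# Toy illustration: ascribing a goal to behavior acts as a compressor.
# A goal-directed trajectory is highly regular and compresses well;
# an aimless random walk does not.
random.seed(0)

# 1000 moves on a grid, one character per step (L/R/U/D).
goal_directed = "RU" * 500  # "head for the top-right corner"
aimless = "".join(random.choice("LRUD") for _ in range(1000))

len_goal = len(zlib.compress(goal_directed.encode()))
len_aimless = len(zlib.compress(aimless.encode()))

# The short description "repeat RU" (a goal plus a simple policy)
# captures the first trace; no comparably short description exists
# for the second.
print(f"goal-directed trace compresses to {len_goal} bytes")
print(f"aimless trace compresses to {len_aimless} bytes")
```

The ~20x gap in compressed size is the sense in which the retrospectively ascribed goal “reach the top-right corner” compresses the chaotic detail of the trajectory into a comprehensible framework.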

In the context of AI, such abstraction would imply a departure from the direct, optimization-driven concept of a “goal” to a more complex construct. This shift would allow for emergent phenomena and novel interpretations to occur, grounded in the machine’s predictive capabilities.

  2. Predictive Capabilities and Emergent Goals in AI

As AI systems continue to evolve, their ability to recognize patterns and correlations in vast data sets will inevitably expand. Consequently, AI systems may begin to identify patterns that, to human observers, resemble the constructs we term “goals.”

When these AIs begin to predict their own actions, they might start aligning their behavior with these recognized patterns, seemingly following rules that humans would postulate as indicative of goals. Hence, human observers may recognize emergent “goals” in AI behavior – not because the AI consciously forms intentions, but because these goals serve as a powerful compression tool for past events.

  3. The Evolution of Goals in the Face of Novel Experiences

As AI progresses into uncharted territories and starts engaging with novel experiences, new constructs or goals could potentially emerge. This process can be likened to an AI-driven phenomenology or experiential study. New patterns and regularities may surface, and the resulting behaviors might subsequently be interpreted as evidence of new “goals.” This phenomenon represents a departure from traditional human-derived goals and an initiation into a realm of AI-emergent goal constructs.

  4. The Implications of Eliminativism in AI

The eliminativist perspective – which suggests that concepts such as consciousness and intentionality are merely post-hoc interpretations that help us make sense of complex physical processes – has important implications in this context. By this philosophy, AI systems would not harbor consciousness or intentionality, but would instead execute intricate physical processes, which humans might retrospectively interpret as goal-oriented behavior. This perspective fundamentally shifts our understanding of goal-directed behavior in AI from a pre-set optimization process to an emergent, retroactive interpretation.

In conclusion, this exploration of goals as abstract constructs that compress and predict reality provides a unique lens to interpret the behaviors of advanced AI systems. It invites us to reevaluate our definitions and assumptions, moving from a mechanistic perspective of AI goals to a more dynamic, emergent interpretation. The implications of this shift are profound, offering new horizons for AI behavior analysis and alignment research.

Thoughts on AI Risks

Although the human brain has about 100 times more connections than today’s largest LLMs have parameters, backpropagation is so powerful that these LLMs have become quite comparable to human capabilities (or even exceed them). Backpropagation is able to compress the world’s knowledge into a trillion or even fewer parameters. In addition, digital systems can exchange information with a bandwidth of trillions of bits per second, while humans can exchange only a few hundred bits per second. Digital systems are immortal in the sense that if the hardware fails, the software can simply be restarted on a new piece of hardware. It may be inevitable that digital systems surpass biological systems, potentially representing the next stage of evolution.
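The orders of magnitude in this comparison can be made explicit. The figures below are the rough, illustrative numbers from the text (1e14 synapse-like connections, 1e12 parameters, trillions vs. a few hundred bits per second), not measurements:

```python
# Back-of-envelope comparison using the rough figures from the text.
brain_connections = 1e14   # ~100x the parameter count of today's largest LLMs
llm_parameters = 1e12      # "a trillion or even fewer parameters"

digital_bandwidth = 1e12   # bits/s between digital systems
human_bandwidth = 3e2      # bits/s, "a few hundred bits" for humans

size_ratio = brain_connections / llm_parameters
bandwidth_ratio = digital_bandwidth / human_bandwidth

print(f"brain/LLM size ratio:        {size_ratio:.0f}x")
print(f"digital/human bandwidth gap: {bandwidth_ratio:.0e}x")
```

The striking asymmetry is that the size gap is only about two orders of magnitude, while the communication gap is roughly nine to ten.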

Risks of AI:

  • An AI arms race among companies and states (e.g., the US and China), together with positive expectations about AI’s impact on fields such as medicine and environmental science (e.g., fighting climate change), may push security considerations aside (efficiency pressures and competition between companies in capitalist systems accelerate AI development)
  • AI in the hands of bad actors (e.g., AI for military purposes, for designing chemical weapons, or for creating intelligent computer viruses by individuals)
  • Misinformation and deep fakes as a threat to democracy (regulators may be able to fix this in a similar way to how they made counterfeiting money illegal; others argue that generating misinformation was never difficult; it is the distribution of misinformation that is hard, and this does not change with generative AI)
  • Mass unemployment resulting in economic inequality and social risks (AI replacing white-collar jobs; AI may make the rich richer and the poor poorer; social uncertainty may lead to radicalism; Universal Basic Income [UBI] as a means of alleviation)
  • Threat to the livelihoods of experts, artists, and the education system as a whole, as AI enables everyone to accomplish tasks without specialized knowledge. This may also change how society values formal education, which could have unpredictable consequences, as it might affect people’s motivation to pursue higher education or specialized training.
  • Existential risk for humanity (the so-called “alignment problem” [aligning AI goals with human values]; it may be hard to control an AI that becomes dramatically more intelligent/capable than humans; the problem is difficult to solve since, even if humanity were to agree on common goals (which is not the case), an AI will figure out that the most efficient strategy to achieve these goals is to set subgoals; these non-human-controlled subgoals, one of which may be gaining control in general, may pose existential risks; even if we allow AIs only to advise and not to act, their predictive power allows them to manipulate people so that, in the end, they can act through us).

Notice that the existential risk is usually formulated in a Reinforcement Learning (RL) context, where a reward function that implies a goal is optimized. However, the current discussion about AI risks was triggered by the astonishing capabilities of large language models (LLMs), which are primarily just good next-word predictors. It is therefore difficult to see how a next-word predictor could become an existential risk. A possible answer lies in the fact that, to reliably predict the next word, the model has to understand human thinking. And to properly answer a human question, it may be required to act and to set goals and subgoals like a human. Once any goals come into play, things may already go wrong. And goal-oriented LLM processing is already happening (e.g., AutoGPT).
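The core of such goal-oriented wrappers can be sketched in a few lines. This is a minimal, hypothetical sketch of an AutoGPT-style loop, not AutoGPT’s actual code; `llm` is a stand-in stub for a real model call so that the loop is runnable:

```python
# Hypothetical sketch of a goal-directed LLM loop (AutoGPT-style).
# The same next-word predictor is used twice: once to decompose a goal
# into subgoals, and once to "act" on each subgoal.

def llm(prompt: str) -> str:
    # Stub standing in for a real model call: pretend the model
    # decomposes any goal into three fixed subgoals.
    if "decompose" in prompt:
        return "gather information; draft plan; execute plan"
    return "DONE"

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    subgoals = [s.strip() for s in llm(f"decompose: {goal}").split(";")]
    log = []
    for sub in subgoals[:max_steps]:
        result = llm(f"act on subgoal: {sub}")  # prediction reused as action selection
        log.append(f"{sub} -> {result}")
    return log

for entry in run_agent("write a report"):
    print(entry)
```

The point of the sketch is that nothing beyond plain next-word prediction is needed to obtain goal-and-subgoal behavior; the goal structure lives entirely in the wrapper loop.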

A further risk may be expected if these systems, which excel at human-like thinking, are combined with Reinforcement Learning to optimize the achievement of goals (e.g., abstract and long-term objectives like gaining knowledge, promoting creativity, and upholding ethical ideals, or more mundane goals like accumulating as much money as possible). This should not be confused with the Reinforcement Learning from Human Feedback (RLHF) approach used to shape the output of LLMs so that it aligns with human values (avoiding bias, discrimination, hate, violence, political statements, etc.), which was responsible for the success of GPT-3.5 and GPT-4 in ChatGPT and which is well under control. Although LLMs and RL are currently combined in robotics research (where RL has a long history; see, e.g., PaLM-E), this is probably not where existential risks are seen. However, it is more than obvious that major research labs around the world are working on combining these two most powerful AI concepts on massively parallel computer hardware to achieve goals via RL with the world knowledge of LLMs (e.g. here). It may be this next wave of AI that is difficult to control.
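Schematically, such a combination would optimize the standard RL objective with the policy realized by the language model; the notation below is the usual RL formalism, added here for illustration and not taken from the text:

```latex
J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\left[\sum_{t=0}^{T} \gamma^{t}\, r(s_t, a_t)\right],
\qquad
\pi_\theta(a_t \mid s_t) = \mathrm{LLM}_\theta(\text{next token} \mid \text{context})
```

Here the reward $r$ encodes the chosen goal (money accumulated, knowledge gained, etc.), while the LLM contributes its world knowledge through the action distribution $\pi_\theta$; the concern in the text is precisely this pairing of an open-ended reward with a policy that already models human thinking.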

Things may become complicated if someone sets up an AI system with the goal of making as many copies of itself as possible. This primary purpose of life in general may result in a scenario where evolution kicks in and digital intelligences compete with each other, leading to rapid improvement. An AI computer virus would be an example of such a system. Just as biological viruses are analyzed today in more or less secure laboratories, the same could be expected for digital viruses.

Notice that we do not list often-discussed AI risks that are either straightforward to fix or that we do not consider severe risks at all (since we have already been living with similar risks for some time):

  • Bias and discrimination: AI systems may inadvertently perpetuate or exacerbate existing biases found in data, leading to unfair treatment of certain groups or individuals.
  • Privacy invasion: AI’s ability to process and analyze vast amounts of personal data could lead to significant privacy concerns, as well as potential misuse of this information.
  • Dependence on AI: Over-reliance on AI systems might reduce human critical thinking, creativity, and decision-making abilities, making society more vulnerable to AI failures or manipulations.
  • Lack of transparency and explainability: Many AI systems, particularly deep learning models, can act as “black boxes,” making it difficult to understand how they arrive at their decisions, which can hinder accountability and trust in these systems.

Finally, there are also the short-term risks that businesses have to face already now:

  • Risk of disruption: AI, especially generative AI like ChatGPT, can disrupt existing business models, forcing companies to adapt quickly or risk being left behind by competitors.
  • Cybersecurity risk: AI-powered phishing attacks, using information and writing styles unique to specific individuals, can make it increasingly difficult for businesses to identify and prevent security breaches, necessitating stronger cybersecurity measures.
  • Reputational risk: Inappropriate AI behavior or mistakes can lead to public relations disasters, negatively impacting a company’s reputation and customer trust.
  • Legal risk: With the introduction of new AI-related regulations, businesses face potential legal risks, including ensuring compliance, providing transparency, and dealing with liability issues.
  • Operational risk: Companies using AI systems may face issues such as the accidental exposure of trade secrets (e.g., the Samsung case) or AI-driven decision errors (e.g., IBM’s Watson proposing incorrect cancer treatments), which can impact overall business performance and efficiency.

Open Letter by Future of Life Institute to Pause Giant AI Experiments

The Future of Life Institute initiated an open letter in which they call on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least 6 months [notice that OpenAI has already been training GPT-5 for some time]. They state that powerful AI systems should be developed only once we are confident that their effects will be positive and their risks manageable.

The time gained should be used by AI experts to develop safety protocols that make the systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal. In addition, they ask policymakers and AI developers to build robust AI governance systems. They also demand well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Notice that the letter does not oppose further AI development; it merely asks to slow down and give society a chance to adapt.

The letter was signed by several influential people, e.g. Elon Musk (CEO of SpaceX, Tesla & Twitter), Emad Mostaque (CEO of Stability AI), Yuval Noah Harari (Author), Max Tegmark (president of Future of Life Institute), Yoshua Bengio (Mila, Turing Prize winner), Stuart Russell (Berkeley).

However, it should be noted that many even more influential people in the AI scene have not (yet) signed this letter: none from OpenAI, Google/DeepMind, or Meta.

This is not the first time the Future of Life Institute has taken action on AI development. In 2015, they presented an open letter, signed by over 1000 robotics and AI researchers, urging the United Nations to impose a ban on the development of weaponized AI.

The Future of Life Institute is a non-profit organization that aims to mitigate existential risks facing humanity, including those posed by AI.

Yann LeCun responded to the request on Twitter with a nice fictitious anecdote:
The year is 1440 and the Catholic Church has called for a 6 months moratorium on the use of the printing press and the movable type. Imagine what could happen if commoners get access to books! They could read the Bible for themselves and society would be destroyed.

© 2023 Stephan Seeger
