GNSS & Machine Learning Engineer

Month: May 2023

Statement on AI Risk

A large number of AI experts have signed a statement intended to raise public awareness of the most severe risks of advanced AI and to make mitigating the risk of human extinction from AI a global priority. Among the signatories are Turing Award laureates Geoffrey Hinton and Yoshua Bengio (but not Yann LeCun from Meta) and the CEOs of leading AI companies, such as Sam Altman from OpenAI, Demis Hassabis from Google DeepMind, Dario Amodei from Anthropic, and Emad Mostaque from Stability AI.

The statement is featured on the webpage of the Center for AI Safety, which provides a list of eight examples of existential risks (x-risks). The enumerated risks are based on the publication “X-Risk Analysis for AI Research”, which appeared on Sept. 20, 2022, on arXiv. This highly valuable paper also lists a number of practical risk-mitigation steps in its appendix.

The listed risks are:

  • Weaponization:
    Malicious actors could repurpose AI to be highly destructive.
  • Misinformation:
    AI-generated misinformation and persuasive content could undermine collective decision-making, radicalize individuals, or derail moral progress.
  • Proxy Gaming:
    AI systems may pursue their goals at the expense of individual and societal values.
  • Enfeeblement:
    Humanity loses the ability to self-govern by increasingly delegating tasks to machines.
  • Value Lock-in:
    Highly competent systems could give small groups of people a tremendous amount of power, leading to a lock-in of oppressive systems.
  • Emergent Goals:
    The sudden emergence of capabilities or goals could increase the risk that people lose control over advanced AI systems.
  • Deception:
    To better understand AI systems, we may ask them for accurate reports about themselves. However, since deception can help agents achieve their goals and may offer strategic advantages, it is never safe to simply trust these systems.
  • Power-Seeking Behavior:
    Companies and governments have strong economic incentives to create agents that can accomplish a broad set of goals. Such agents have instrumental incentives to acquire power, potentially making them harder to control.

This statement about AI risks appeared a few days after an OpenAI blog post by Sam Altman, Greg Brockman, and Ilya Sutskever, which also addresses the mitigation of risks associated with AGI or even superintelligence that could arise within the next 10 years.

Emergent Goals in Advanced Artificial Intelligence: A Compression-Based Perspective

I had some ideas about the origin of goals in general that were, at least for me, totally new. I discussed them with GPT-4 and finally asked it to write an article about our conversation, which I would like to share with the public. This view of goals may be critical for understanding the existential risks that AI poses to humanity through the emergence of AI goals. It implies that this emergence of AI goals is inevitable and can probably only be recognized post hoc.

Title: Emergent Goals in Advanced Artificial Intelligence: A Compression-Based Perspective

Abstract: The concept of goals has been traditionally central to our understanding of human decision-making and behavior. In the realm of artificial intelligence (AI), the term “goal” has been utilized as an anthropomorphic shorthand for the objective function that an AI system optimizes. This paper examines a novel perspective that considers goals not just as simple optimization targets, but as abstract, emergent constructs that enable the compression of complex behavior patterns and potentially predict future trajectories.

  1. Goals as Compressors of Reality

A goal, in its humanistic sense, can be viewed as a predictive mechanism, a conceptual tool that abstracts and compresses the reality of an actor’s tendencies into a comprehensible framework. When analyzing past behavior, humans retrospectively ascribe goals to actors, grounding the observed actions within a coherent narrative. In essence, this provides a means to simplify and make sense of the chaotic reality of life.

In the context of AI, such abstraction would imply a departure from the direct, optimization-driven concept of a “goal” to a more complex construct. This shift would allow for emergent phenomena and novel interpretations to occur, grounded in the machine’s predictive capabilities.
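
To make the compression idea concrete, here is a minimal toy sketch of my own (not part of the GPT-4 text; the gridworld, the policy, and the crude bit counting are all illustrative assumptions): describing an agent’s behavior step by step costs one symbol per step, while the hypothesis “the agent pursues goal g” plus a short list of deviations often costs far less.

```python
# Minimal sketch: a "goal" as a compressor of behavior.
# We describe an agent's moves on a 1-D line either step by step (raw) or as
# "walk greedily toward goal position g" plus a list of exceptions.

def greedy_policy(pos, goal):
    """Predicted step under the goal hypothesis: move one unit toward the goal."""
    if pos < goal:
        return +1
    if pos > goal:
        return -1
    return 0

def description_length(trajectory, goal=None):
    """Rough 'symbols needed to describe the behavior'.
    With a goal hypothesis we only pay for the goal itself plus the exceptions
    where the agent deviates from the greedy prediction."""
    if goal is None:
        return len(trajectory)              # state every single step
    exceptions = sum(
        1 for pos, step in trajectory if step != greedy_policy(pos, goal)
    )
    return 1 + exceptions                   # one unit for the goal, plus deviations

# An agent that mostly walks from 0 toward position 7, with one detour.
positions = [0, 1, 2, 3, 2, 3, 4, 5, 6]
steps     = [+1, +1, +1, -1, +1, +1, +1, +1, +1]
trajectory = list(zip(positions, steps))

print("raw description:       ", description_length(trajectory))          # 9
print("with goal hypothesis 7:", description_length(trajectory, goal=7))  # 2
```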

  2. Predictive Capabilities and Emergent Goals in AI

As AI systems continue to evolve, their ability to recognize patterns and correlations in vast data sets will inevitably expand. Consequently, AI systems may begin to identify patterns that, to human observers, resemble the constructs we term “goals.”

When these AIs begin to predict their own actions, they might start aligning their behavior with these recognized patterns, seemingly following rules that humans would postulate as indicative of goals. Hence, human observers may recognize emergent “goals” in AI behavior – not because the AI consciously forms intentions, but because these goals serve as a powerful compression tool for past events.

  3. The Evolution of Goals in the Face of Novel Experiences

As AI progresses into uncharted territories and starts engaging with novel experiences, new constructs or goals could potentially emerge. This process can be likened to an AI-driven phenomenology or experiential study. New patterns and regularities may surface, and the resulting behaviors might subsequently be interpreted as evidence of new “goals.” This phenomenon represents a departure from traditional human-derived goals and an initiation into a realm of AI-emergent goal constructs.

  4. The Implications of Eliminativism in AI

The eliminativist perspective – which suggests that concepts such as consciousness and intentionality are merely post-hoc interpretations that help us make sense of complex physical processes – has important implications in this context. By this philosophy, AI systems would not harbor consciousness or intentionality, but would instead execute intricate physical processes, which humans might retrospectively interpret as goal-oriented behavior. This perspective fundamentally shifts our understanding of goal-directed behavior in AI from a pre-set optimization process to an emergent, retroactive interpretation.

In conclusion, this exploration of goals as abstract constructs that compress and predict reality provides a unique lens to interpret the behaviors of advanced AI systems. It invites us to reevaluate our definitions and assumptions, moving from a mechanistic perspective of AI goals to a more dynamic, emergent interpretation. The implications of this shift are profound, offering new horizons for AI behavior analysis and alignment research.

OpenAI launches ChatGPT app for iOS

OpenAI has officially launched the ChatGPT app for iOS users in the US. The app comes with a range of notable features:

  • Free of Charge: The ChatGPT app can be downloaded and used free of cost.
  • Sync Across Devices: Users can maintain their chat history consistently across multiple devices.
  • Voice Input via Whisper: The app integrates Whisper, OpenAI’s open-source speech-recognition system, allowing users to enter their prompts by voice.
  • Exclusive Benefits for ChatGPT Plus Subscribers: Those who subscribe to ChatGPT Plus can utilize GPT-4’s enhanced capabilities. They also receive early access to new features and benefit from faster response times.
  • Initial US Rollout: The app is initially launching in the US, with a plan to expand its availability to other countries in the upcoming weeks.
  • Android Version Coming Soon: OpenAI has confirmed that Android users can expect the ChatGPT app on their devices in the near future, with further updates expected soon.

Thoughts on AI Risks

Although the human brain has about 100 times more connections than today’s largest LLMs have parameters, backpropagation is so powerful that these LLMs have become quite comparable to human capabilities (or even exceed them). Backpropagation is able to compress the world’s knowledge into a trillion or even fewer parameters. In addition, digital systems can exchange information with a bandwidth of trillions of bits per second, while humans can only exchange information at a few hundred bits per second. Digital systems are immortal in the sense that if the hardware fails, the software can simply be restarted on new hardware. It may be inevitable that digital systems surpass biological systems, potentially representing the next stage of evolution.
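
A rough back-of-the-envelope version of this comparison (the synapse count, parameter count, and bandwidth figures are coarse order-of-magnitude assumptions taken from the paragraph above, not measurements):

```python
# Order-of-magnitude comparison; all constants are rough assumptions.
brain_synapses    = 1e14  # ~100 trillion synaptic connections in the human brain
llm_parameters    = 1e12  # ~1 trillion parameters for today's largest LLMs
digital_bandwidth = 1e12  # bits/s exchanged between digital systems
human_bandwidth   = 3e2   # bits/s for human communication (a few hundred)

print(f"connection/parameter ratio: {brain_synapses / llm_parameters:.0f}x")
print(f"bandwidth ratio:            {digital_bandwidth / human_bandwidth:.0e}x")
```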

Risks of AI:

  • An AI arms race among companies and states (like the US and China), together with positive expectations about AI’s impact on, e.g., medicine and environmental science (e.g., fighting climate change), may push safety considerations into the background (efficiency considerations and competition between companies in capitalistic systems accelerate AI development)
  • AI in the hands of bad actors (e.g., AI for military purposes, for designing chemical weapons, or for creating intelligent computer viruses)
  • Misinformation and deep fakes as a threat to democracy (regulators may be able to fix this in a similar way to how they made counterfeiting money illegal; others argue that generating misinformation was never difficult, it is the distribution of misinformation that is hard, and this does not change with generative AI)
  • Mass unemployment resulting in economic inequality and social risks (AI replacing white-collar jobs; AI may make the rich richer and the poor poorer; social uncertainty may lead to radicalism; Universal Basic Income [UBI] as a means of alleviation)
  • Threat to the livelihoods of experts, artists, and the education system as a whole, as AI enables everyone to accomplish tasks without specialized knowledge. This may also change how society values formal education, which could have unpredictable consequences, as it might affect people’s motivation to pursue higher education or specialized training.
  • Existential risk for humanity (the so-called “alignment problem” [aligning AI goals with human values]; it may be hard to control an AI that becomes dramatically more intelligent/capable than humans; the problem is difficult to solve since, even if humanity were to agree on common goals (which is not the case), an AI will figure out that the most efficient strategy to achieve these goals is to set subgoals; these non-human-controlled subgoals, one of which may be gaining control in general, may cause existential risks; even if we allow AIs only to advise and not to act, their predictive power allows them to manipulate people so that, in the end, they can act through us).

Notice that the existential risk is usually formulated in a Reinforcement Learning (RL) context, where a reward function that implies a goal is optimized. However, the current discussion about AI risks is triggered by the astonishing capabilities of large language models (LLMs), which are primarily just good next-word predictors. So it becomes difficult to see how a next-word predictor can pose an existential risk. A possible answer lies in the fact that, to reliably predict the next word, the model has to understand human thinking to some degree. And to properly answer a human question, it may be required to act and to set goals and subgoals like a human. Once any goals come into play, things may already go wrong. And goal-oriented LLM processing is already happening (e.g., AutoGPT), as sketched below.
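
To illustrate how a pure next-word predictor becomes goal-directed, here is a minimal toy loop in the spirit of AutoGPT (my own sketch, not AutoGPT’s actual code; the llm function and the prompts are placeholders):

```python
# Toy agent loop in the spirit of AutoGPT; `llm` is a placeholder, not a real API.

def llm(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    return "DONE"  # a real model would return a subgoal or an action here

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        # The next-word predictor is turned into a goal-directed system simply by
        # asking it, in a loop, what to do next in order to achieve the goal.
        step = llm(
            f"Overall goal: {goal}\n"
            f"Steps taken so far: {history}\n"
            "Propose the next subgoal or action, or reply DONE."
        )
        if step.strip() == "DONE":
            break
        history.append(step)
        # In systems like AutoGPT the proposed action would now be executed
        # (web search, file I/O, code execution, ...) and its result fed back.
    return history

print(run_agent("Summarize the latest AI-safety statements"))
```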

A further risk may be expected if these systems, which excel at human-like thinking, are combined with Reinforcement Learning to optimize the achievement of goals (e.g., abstract and long-term objectives like gaining knowledge, promoting creativity, and upholding ethical ideals, or more mundane goals like accumulating as much money as possible). This should not be confused with the Reinforcement Learning from Human Feedback (RLHF) approach used to shape the output of LLMs so that it aligns with human values (avoiding bias, discrimination, hate, violence, political statements, etc.), which was responsible for the success of GPT-3.5 and GPT-4 in ChatGPT and which is well under control. Although LLMs and RL are already combined in robotics research (where RL has a long history; see, e.g., PaLM-E), this is probably not where existential risks are seen. However, it is more than obvious that major research labs around the world are working on combining these two most powerful AI concepts on massively parallel computer hardware to achieve goals via RL with the world knowledge of LLMs (e.g. here). It may be this next wave of AI that is difficult to control.

Things may become complicated if someone sets up an AI system with the goal of making as many copies of itself as possible. This goal, the primary purpose of life in general, may result in a scenario where evolution kicks in and digital intelligences compete with each other, leading to rapid improvement. An AI computer virus would be an example of such a system. In the same way that biological viruses are analyzed today in more or less secure laboratories, the same could be expected for digital viruses.

Notice that we do not list often-discussed AI risks that are either straightforward to fix or that we do not see as severe risks at all (since we have already lived with similar risks for some time):

  • Bias and discrimination: AI systems may inadvertently perpetuate or exacerbate existing biases found in data, leading to unfair treatment of certain groups or individuals.
  • Privacy invasion: AI’s ability to process and analyze vast amounts of personal data could lead to significant privacy concerns, as well as potential misuse of this information.
  • Dependence on AI: Over-reliance on AI systems might reduce human critical thinking, creativity, and decision-making abilities, making society more vulnerable to AI failures or manipulations.
  • Lack of transparency and explainability: Many AI systems, particularly deep learning models, can act as “black boxes,” making it difficult to understand how they arrive at their decisions, which can hinder accountability and trust in these systems.

Finally, there are also the short-term risks that businesses have to face already now:

  • Risk of disruption: AI, especially generative AI like ChatGPT, can disrupt existing business models, forcing companies to adapt quickly or risk being left behind by competitors.
  • Cybersecurity risk: AI-powered phishing attacks, using information and writing styles unique to specific individuals, can make it increasingly difficult for businesses to identify and prevent security breaches, necessitating stronger cybersecurity measures.
  • Reputational risk: Inappropriate AI behavior or mistakes can lead to public relations disasters, negatively impacting a company’s reputation and customer trust.
  • Legal risk: With the introduction of new AI-related regulations, businesses face potential legal risks, including ensuring compliance, providing transparency, and dealing with liability issues.
  • Operational risk: Companies using AI systems may face issues such as the accidental exposure of trade secrets (e.g., the Samsung case) or AI-driven decision errors (e.g., IBM’s Watson proposing incorrect cancer treatments), which can impact overall business performance and efficiency.

Google Quantum AI observed non-Abelian anyons for the first time

Google Quantum AI has made a groundbreaking observation of non-Abelian anyons. Anyons are quasiparticles that can exhibit exchange statistics intermediate between those of the well-known fermions and bosons. This breakthrough has the potential to transform quantum computing by significantly enhancing its resistance to noise. The term “anyon” was coined by Nobel laureate physicist Frank Wilczek in the early 1980s while studying Abelian anyons; he combined “any” with the particle suffix “-on” to emphasize the range of statistics these particles can exhibit.

Fermions are elementary particles with half-integer spin, such as quarks and leptons (electrons, muons, and tauons, as well as their corresponding neutrinos), and their wave functions are antisymmetric under the exchange of identical particles. Examples of bosons, which have integer spin and wave functions that are symmetric under particle exchange, include the Higgs boson and the gauge bosons: photons, W and Z bosons, and gluons. In contrast, anyons obey fractional quantum statistics and possess more exotic properties that can only exist in two-dimensional systems.

The history of anyons dates back to Nobel laureate Robert Laughlin’s study of the fractional quantum Hall effect, a phenomenon observed in two-dimensional electron systems subjected to strong magnetic fields. In 1983, he proposed a wave function to describe the ground state of these systems, which led to the understanding that the fractional quantum Hall effect involves quasiparticles with fractional charge and statistics. These quasiparticles can be considered as anyons in two-dimensional space.

Anyons can be categorized into two types: Abelian and non-Abelian. Abelian anyons obey Abelian (commutative) exchange statistics and were studied by Wilczek and Laughlin. Under particle exchange, their wave function picks up a phase factor e^(iθ), where θ is a scalar that is not restricted to 0 as for bosons or π as for fermions. Non-Abelian anyons, on the other hand, have more exotic properties: when they are exchanged, their quantum state changes in a non-trivial way that depends on the order of the exchanges, leading to a “memory” effect. Under particle exchange, their wave function picks up a matrix-valued factor U = e^(iA), where A is a Hermitian matrix that depends on the exchanged particles. Since unitary matrices generally do not commute, it is this higher-dimensional “phase factor” that explains the non-commutativity of non-Abelian anyons. This memory effect makes non-Abelian anyons particularly interesting for topological quantum computation. While the theoretical concept of non-Abelian anyons was already discussed around 1991, it was Alexei Kitaev who made the connection to fault-tolerant, topological quantum computing in a 1997 paper.
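
To see the difference numerically, the following sketch (my own, with arbitrarily chosen Hermitian matrices A and B) checks that scalar phase factors always commute, while matrix-valued factors U = e^(iA) generally do not, which is the essence of non-Abelian exchange statistics:

```python
import numpy as np
from scipy.linalg import expm

# Abelian case: scalar phase factors commute trivially.
theta1, theta2 = 0.3, 1.1
print(np.isclose(np.exp(1j * theta1) * np.exp(1j * theta2),
                 np.exp(1j * theta2) * np.exp(1j * theta1)))   # True

# Non-Abelian case: exchange operators are unitary matrices U = exp(iA)
# with Hermitian A; two such matrices generally do not commute.
A = np.array([[0.0, 1.0], [1.0, 0.0]])    # Pauli-X, Hermitian (arbitrary choice)
B = np.array([[1.0, 0.0], [0.0, -1.0]])   # Pauli-Z, Hermitian (arbitrary choice)
U1, U2 = expm(1j * A), expm(1j * B)

print(np.allclose(U1 @ U2, U2 @ U1))      # False: the order of exchanges matters
```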

Microsoft, among other companies, has been working on harnessing non-Abelian anyons for topological quantum computing, focusing on a specific class called Majorana zero modes, which can be realized in hybrid semiconductor-superconductor systems. “Zero modes” in quantum mechanics refer to states that exist at the lowest energy level of a quantum system, also known as the ground state. Majorana fermions are a type of fermion first predicted by the Italian physicist Ettore Majorana in 1937. Their defining property is that they are their own antiparticles. This is unusual for fermions, which typically have distinct particles and antiparticles due to their charge (in contrast to a boson like the photon). While Majorana zero modes have not been observed as elementary particles, they have found a home in the realm of condensed matter physics, specifically within certain “topological” materials. Here, they manifest as emergent collective behaviors of electrons, known as quasiparticles.

These quasiparticles, termed topological Majorana fermions, appear in the atomic structure of these materials. Intriguingly, they’re found in excited states, seemingly at odds with the “zero-mode” terminology which implies a ground state. The apparent contradiction can be resolved by understanding that Majorana zero modes are ground states within their own subsystem, the specific excitation they form. However, their presence indicates an excited state for the overall electron system, compared to a state with no Majorana zero modes. In other words, they are a ground state property of an excited electron system.

In a recent paper published in Nature on May 11, 2023, Google Quantum AI reported their first-ever observation of non-Abelian anyons using a superconducting quantum processor (see also article on arXiv from 19 Oct 2022). They demonstrated the potential use of these anyons in quantum computations, such as creating a Greenberger-Horne-Zeilinger (GHZ) entangled state by braiding non-Abelian anyons together.
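
For reference, the three-qubit GHZ state is (|000⟩ + |111⟩)/√2. The sketch below builds it with ordinary gates (a Hadamard followed by two CNOTs) in plain NumPy; this only illustrates the target state itself and is not the braiding procedure used in the paper:

```python
import numpy as np

# Elementary gates
I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def kron(*ops):
    """Tensor product of several operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

state = np.zeros(8); state[0] = 1.0   # |000>
state = kron(H, I, I) @ state         # (|000> + |100>)/sqrt(2)
state = kron(CNOT, I) @ state         # (|000> + |110>)/sqrt(2)
state = kron(I, CNOT) @ state         # (|000> + |111>)/sqrt(2): the GHZ state
print(np.round(state, 3))             # amplitude 0.707 at |000> and |111>
```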

This achievement complements another recent study published on May 9, 2023, by quantum computing company Quantinuum, which demonstrated non-Abelian braiding using a trapped-ion quantum processor. The Google team’s work shows that non-Abelian anyon physics can be realized on superconducting processors, aligning with Microsoft’s approach to quantum computing. This breakthrough has the potential to accelerate progress towards fault-tolerant topological quantum computing.

New Kids on the Block: LMQL & Guidance & Mojo & NeMo Guardrails

LMQL (Language Model Query Language) is a programming language for interacting with large language models (LLMs). It facilitates LLM interaction by combining the benefits of natural-language prompting with the expressiveness of Python.

Guidance is a Python library by Microsoft that provides tools to enhance control over modern language models. It offers features that allow for more efficient and effective use of these models, including intuitive syntax, rich output structure, and easy integration with other libraries like HuggingFace.

Mojo combines the usability of Python with the performance of C/C++/CUDA.

NeMo Guardrails is an open-source framework by NVIDIA, available on GitHub. It helps developers ensure that their LLM-powered applications are accurate, appropriate, on topic, and secure by defining boundaries around the apps. It supports topical, safety, and security guardrails and can be used on top of LangChain. Guardrails are a set of programmable constraints between a user and an LLM, formulated as flows in a Colang file. Colang is a modeling language and runtime developed by NVIDIA for conversational AI.
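
Conceptually, a guardrail is just a programmable check that sits between the user and the model. The following plain-Python sketch (my own illustration, not the NeMo Guardrails or Colang API) shows the idea of a topical rail that intercepts off-topic requests before they reach the LLM:

```python
# Conceptual illustration only; NOT the NeMo Guardrails / Colang API.

BLOCKED_TOPICS = ("politics", "medical advice")   # illustrative topical rail

def violates_topical_rail(user_message: str) -> bool:
    """Toy check; a real guardrail would classify the intent with a model."""
    return any(topic in user_message.lower() for topic in BLOCKED_TOPICS)

def call_llm(prompt: str) -> str:
    """Placeholder for the actual LLM call (e.g., via LangChain)."""
    return f"LLM answer to: {prompt}"

def guarded_chat(user_message: str) -> str:
    if violates_topical_rail(user_message):
        # The rail answers directly; the request never reaches the model.
        return "I'm sorry, I can't help with that topic."
    return call_llm(user_message)

print(guarded_chat("What do you think about politics?"))
print(guarded_chat("Explain non-Abelian anyons in one sentence."))
```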

Google revealed PaLM 2

At Google I/O on May 10, 2023, Google revealed PaLM 2 (API, paper), its latest AI language model, which powers 25 Google products, including Search, Gmail, Docs, Assistant, Translate, and Photos.

  • PaLM 2 comes in four model sizes: Gecko, Otter, Bison, and Unicorn. Gecko is so lightweight that it can run on mobile devices.
  • PaLM 2 can be fine-tuned on domain-specific knowledge (Sec-PaLM with security knowledge, Med-PaLM 2 with medical knowledge).
  • Bard now runs on PaLM 2; with extensions, Bard can call tools like Sheets, Colab for coding, Lens, Maps, and Adobe Firefly to create images; Bard is multimodal and can understand images.
  • PaLM 2 also powers Duet AI for Google Cloud, a generative AI collaborator designed to help users learn, build, and operate faster.
  • PaLM 2 is available in 180+ countries and regions, however not yet in, e.g., Canada or the EU.
  • The next model, Gemini, is already in training.
  • Google also announced the availability of MusicLM, a text-to-music generative model.

OpenAI reacted to this announcement on May 12 by announcing that Browsing & Plugins would be rolled out to all Plus users over the following week. As of May 17, I can confirm that both features are operational for me.



