GNSS & Machine Learning Engineer

Tag: Meta

The AI race is heating up: Announcements by Google/DeepMind, Meta, Microsoft/OpenAI, and Amazon/Anthropic

After weeks of “less exciting” news in the AI space since the release of Llama 2 by Meta on July 18, 2023, the last few days brought a whole series of announcements by the major players.

Here are some links to the news of the last few weeks:

Meta released Llama 2 free for Commercial Use

Meta open-sourced Llama 2 together with Microsoft; in contrast to Llama 1, it is free not only for research but also for commercial use.

  • Free for commercial use for businesses with fewer than 700 million monthly active users
  • Models with 70B, 13B, and 7B parameters
  • The Llama-2-70B model is currently the strongest open-source LLM (Hugging Face leaderboard), comparable to GPT-3.5-0301 and noticeably stronger than Falcon, MPT, and Vicuna
  • Not yet at GPT-3.5 level, mainly because of its weak coding abilities
  • RLHF fine-tuned
  • Source code on GitHub; weights available on Azure, AWS, and Hugging Face (see the loading sketch below)
  • Llama 2 paper
  • 4K token context window
  • Trained on 2 trillion tokens with training costs of about $20M
  • Knowledge cut-off Dec 2022
  • Testing on https://www.llama2.ai
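
A minimal sketch of how the Hugging Face weights can be tried out; this is an assumption-laden example that presumes transformers and accelerate are installed, access to the gated meta-llama repository has been granted, and enough GPU memory is available:

# Minimal sketch: load the 7B chat model from Hugging Face and generate a reply.
# Assumes `pip install transformers accelerate` and approved access to the gated
# meta-llama repository; the model name is as listed on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain in one sentence why the sky is blue."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))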

Just 4 days after this announcement, on July 22, 2023, Stability AI released FreeWilly1 and FreeWilly2, which are fine-tuned models based on LLaMA-65B and Llama-2-70B. These models took over the top positions on the Hugging Face leaderboard. However, both models come without a commercial license and are intended for research only.

Statement on AI Risk

A vast number of AI experts have signed a statement to raise public awareness regarding the most severe risks associated with advanced AI, aiming to mitigate the risk of human extinction. Among the signatories are Turing Award laureates Geoffrey Hinton and Yoshua Bengio (but not Yann LeCun from Meta), and the CEOs of leading AI companies like Sam Altman from OpenAI, Demis Hassabis from Google DeepMind, Dario Amodei from Anthropic, and Emad Mostaque from Stability AI.

The statement is featured on the webpage of the Center for AI Safety, which provides a list of eight examples of existential risks (x-risks). The enumerated risks are based on the publication “X-Risk Analysis for AI Research”, which appeared on arXiv on Sept. 20, 2022. This highly valuable paper also lists in its appendix a number of practical steps for mitigating risks.

The listed risks are:

  • Weaponization:
    Malicious actors could repurpose AI to be highly destructive.
  • Misinformation:
    AI-generated misinformation and persuasive content could undermine collective decision-making, radicalize individuals, or derail moral progress.
  • Proxy Gaming:
    AI systems may pursue their goals at the expense of individual and societal values.
  • Enfeeblement:
    Humanity loses the ability to self-govern by increasingly delegating tasks to machines.
  • Value Lock-in:
    Highly competent systems could give small groups of people a tremendous amount of power, leading to a lock-in of oppressive systems.
  • Emergent Goals:
    The sudden emergence of capabilities or goals could increase the risk that people lose control over advanced AI systems.
  • Deception:
    To better understand AI systems, we might want them to give accurate reports about themselves. However, since deception can help agents achieve their goals and may confer strategic advantages, it is never safe to simply trust these systems.
  • Power-Seeking Behavior:
    Companies and governments have strong economic incentives to create agents that can accomplish a broad set of goals. Such agents have instrumental incentives to acquire power, potentially making them harder to control.

This statement about AI risks appeared a few days after an OpenAI blog post by Sam Altman, Greg Brockman, and Ilya Sutskever, which also addresses the mitigation of risks associated with AGI or even superintelligence that could arise within the next 10 years.

3rd-Level of Generative AI 

Defining

1st-level generative AI as applications that are directly based on X-to-Y models (foundation models that serve as a kind of operating system for downstream tasks), where X and Y can be text/code, image, segmented image, thermal image, speech/sound/music/song, avatar, depth, 3D, video, 4D (3D video, NeRF), IMU (inertial measurement unit) data, amino acid sequences (AAS), 3D protein structure, sentiment, emotions, gestures, etc.,

and 2nd-level generative AI as a kind of middleware that simplifies the implementation of agents by combining LLM-based 1st-level generative AI with other tools via actions (like web search, semantic search [based on embeddings and vector databases like Pinecone, Chroma, Milvus, Faiss], source-code generation [REPL], calls to math tools like Wolfram Alpha, etc.) and by using special prompting techniques (like templates, Chain-of-Thought [CoT], Self-Consistency, Self-Ask, Tree of Thoughts, ReAct [Reason + Act], Graph of Thoughts) within action chains,

we currently (April/May/June 2023) see a 3rd level of generative AI that implements agents which can solve complex tasks through the interaction of different LLMs in complex chains.
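
As a very reduced illustration of what such a 2nd-level action chain looks like, here is a hypothetical ReAct-style loop; call_llm and the single web_search tool are stand-ins and not the API of any specific framework:

# Hypothetical ReAct-style agent loop: the LLM alternates between reasoning
# ("Thought"), choosing a tool ("Action"), and receiving its result
# ("Observation") until it gives a final answer. call_llm() and web_search()
# are stand-ins (a canned reply and a canned search result), not a real API.

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; returns a canned ReAct-style reply.
    if "Observation:" not in prompt:
        return "Thought: I should look this up.\nAction: web_search[release date of Llama 2]"
    return "Thought: The observation answers the question.\nFinal Answer: July 18, 2023"

def web_search(query: str) -> str:
    # Stand-in for a real search tool.
    return "Llama 2 was released on July 18, 2023."

TOOLS = {"web_search": web_search}

def react_agent(question: str, max_steps: int = 5) -> str:
    prompt = (
        "Answer the question. You may use Action: web_search[<query>].\n"
        "Use the format Thought / Action / Observation and end with 'Final Answer:'.\n"
        f"Question: {question}\n"
    )
    for _ in range(max_steps):
        reply = call_llm(prompt)
        prompt += reply + "\n"
        if "Final Answer:" in reply:
            return reply.split("Final Answer:")[-1].strip()
        if "Action: web_search[" in reply:
            query = reply.split("Action: web_search[")[-1].split("]")[0]
            prompt += "Observation: " + TOOLS["web_search"](query) + "\n"
    return "No final answer within the step limit."

print(react_agent("When was Llama 2 released?"))

Frameworks like LangChain wrap this kind of loop (plus prompt templates, tool registries, and memory) so that it does not have to be hand-written for every application.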

However, older publications like Cicero may also fall into this category of complex applications. Typically, these agent implementations are (currently) not built on top of the 2nd-level generative AI frameworks. But this is going to change.

Other, simpler applications may also be of interest in this context: tools that just allow semantic search over private documents with a locally hosted LLM and local embedding generation, such as PrivateGPT, which is based on LangChain and Llama (with functionality similar to OpenAI’s ChatGPT Retrieval plugin). Also worth noting are applications that concentrate on the code-generation abilities of LLMs, like GPT-Code-UI and OpenInterpreter, both open-source implementations of OpenAI’s ChatGPT Code Interpreter/Advanced Data Analysis (similar to Bard’s implicit code execution; the Noteable plugin is another alternative to Code Interpreter), or smol-ai developer, which generates complete source code from a markup description.
There is a nice overview of LLM Powered Autonomous Agents on GitHub.
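
The retrieval part of such private-document applications can be illustrated with a minimal sketch: embed the documents once, embed the query, and rank by cosine similarity. The sentence-transformers model name is just one example, and a real system would keep the vectors in a vector database (Pinecone, Chroma, Milvus, Faiss, ...) and pass the retrieved passages to the LLM as context:

# Minimal semantic-search sketch over local documents (no vector database):
# embed the documents once, embed the query, rank by cosine similarity.
# Assumes `pip install sentence-transformers numpy`; the model name is one example.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Llama 2 is free for commercial use for most businesses.",
    "ESMFold predicts protein structures from amino acid sequences.",
    "CICERO plays the strategy game Diplomacy at human level.",
]
doc_vectors = model.encode(documents, normalize_embeddings=True)

def search(query: str, top_k: int = 2):
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q  # cosine similarity, since the vectors are normalized
    best = np.argsort(scores)[::-1][:top_k]
    return [(documents[i], float(scores[i])) for i in best]

print(search("Which model folds proteins?"))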

The next level may then be governed by embodied LLMs and agents (like PaLM-E with E for Embodied).

Open Letter by Future of Life Institute to Pause Giant AI Experiments

The Future of Life Institute initiated an open letter in which they call on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least 6 months [notice that OpenAI has already been training GPT-5 for some time]. They state that powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.

The time gained should be used by AI experts to develop safety protocols that make the systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal. In addition, they ask policymakers and AI developers to build robust AI governance systems. They also demand well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Note that the letter does not argue against further AI development; it only asks to slow down and give society a chance to adapt.

The letter was signed by several influential people, e.g. Elon Musk (CEO of SpaceX, Tesla & Twitter), Emad Mostaque (CEO of Stability AI), Yuval Noah Harari (author), Max Tegmark (president of the Future of Life Institute), Yoshua Bengio (Mila, Turing Prize winner), and Stuart Russell (Berkeley).

However, it should be noted that many similarly influential people in the AI scene have not (yet) signed this letter; none of the signatories are from OpenAI, Google/DeepMind, or Meta.

This is not the first time the Future of Life Institute has taken action on AI development. In 2015, it presented an open letter signed by over 1000 robotics and AI researchers urging the United Nations to impose a ban on the development of weaponized AI.

The Future of Life Institute is a non-profit organization that aims to mitigate existential risks facing humanity, including those posed by AI.

Yann LeCun responded to the request on Twitter with a nice fictitious anecdote:
The year is 1440 and the Catholic Church has called for a 6 months moratorium on the use of the printing press and the movable type. Imagine what could happen if commoners get access to books! They could read the Bible for themselves and society would be destroyed.

Meta AI presents CICERO, the first AI to achieve human-level performance in strategy game Diplomacy

Meta AI presents CICERO, an AI agent that can negotiate and cooperate with people. It is the first AI system to achieve human-level performance in the popular strategy game Diplomacy. CICERO ranked in the top 10% of participants on webDiplomacy.net.

Yannic Kilcher gives a great discussion of the accompanying Science paper. A second paper is freely available on arXiv. The source code is accessible on GitHub.

Meanwhile, DeepMind has also published an AI agent that plays Diplomacy.

Galactica: Paper Generator by Meta AI

With Galactica.ai, Meta AI published a large language model trained on scientific papers that can write a literature review, wiki article, or lecture notes with references, formulas, etc., given just some text input about a topic. Even the paper about Galactica was written with the help of Galactica.

Just a day after the release, the Galactica.ai demo webpage was taken down. But the source code is available on GitHub. Yannic Kilcher made a nice paper review about Galactica in which he also explains why the demo webpage was taken down.

Meta AI publishes ESMFold, a new breakthrough model for protein folding

ESMFold (ESM = Evolutionary Scale Modeling) [paper] uses a large language model to accelerate protein folding (i.e., predicting the 3D structure of a protein from its amino acid sequence, which is encoded by the DNA sequence) by up to 60 times compared to state-of-the-art techniques like AlphaFold. This improvement has the potential to accelerate work in medicine, green chemistry, environmental applications, and renewable energy.

In addition, Meta AI made a new database of 600 million metagenomic protein structures (proteins which are found in microbes in the soil, deep in the ocean, and even in our guts and on our skin) available to the scientific community via the ESM Metagenomic Atlas.

ESMFold and related models like ESM-2 are published together with the API on GitHub and Hugging Face.
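
A minimal usage sketch along the lines of the published examples; this assumes the fair-esm package is installed together with its ESMFold dependencies and that a GPU with sufficient memory is available, and the sequence is an arbitrary example:

# Minimal ESMFold inference sketch using the fair-esm package (assumption:
# installed via `pip install "fair-esm[esmfold]"` plus its openfold dependencies).
import torch
import esm

model = esm.pretrained.esmfold_v1()
model = model.eval().cuda()

# Arbitrary example amino acid sequence (one-letter codes).
sequence = "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"

with torch.no_grad():
    pdb_string = model.infer_pdb(sequence)

with open("result.pdb", "w") as f:
    f.write(pdb_string)  # predicted 3D structure in PDB format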
