
3rd-Level of Generative AI 

Let us define 1st-level generative AI as applications that are directly based on X-to-Y models (foundation models that form a kind of operating system for downstream tasks), where X and Y can be text/code, image, segmented image, thermal image, speech/sound/music/song, avatar, depth, 3D, video, 4D (3D video, NeRF), IMU (Inertial Measurement Unit), amino acid sequences (AAS), 3D-protein structure, sentiment, emotions, gestures, etc., e.g.
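As one concrete illustration of such an X-to-Y application (here X = text, Y = image), a minimal sketch with the Hugging Face diffusers library could look as follows; the library and the Stable Diffusion checkpoint are illustrative assumptions of mine, not tools named in the definition above:

```python
# Minimal 1st-level X-to-Y sketch: text in, image out.
# Assumes the Hugging Face "diffusers" library and a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("an astronaut riding a horse on the moon").images[0]
image.save("astronaut.png")
```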

And let us define 2nd-level generative AI as a kind of middleware that makes it possible to implement agents by simplifying the combination of LLM-based 1st-level generative AI with other tools via actions (like web search, semantic search [based on embeddings and vector databases like Pinecone, Chroma, Milvus, Faiss], source code generation [REPL], calls to math tools like Wolfram Alpha, etc.) and by using special prompting techniques (like templates, Chain-of-Thought [CoT], Self-Consistency, Self-Ask, Tree of Thoughts, ReAct [Reason + Act], Graph of Thoughts) within action chains, e.g.
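To illustrate what this middleware layer does under the hood, here is a hand-rolled sketch of a ReAct-style action loop (normally a framework like LangChain would provide this); the `llm` argument is a placeholder for any prompt-to-completion function, and the two tools are toy stand-ins:

```python
# Hand-rolled sketch of a ReAct-style agent loop (the 2nd-level middleware
# idea, without a framework). `llm` is any prompt -> completion function you
# supply; the two tools below are toy stand-ins for real actions.
import re

TOOLS = {
    "search": lambda q: f"(top web result for {q!r})",          # stand-in for web search
    "calc": lambda e: str(eval(e, {"__builtins__": {}}, {})),   # toy math tool
}

TEMPLATE = """Answer the question. Interleave lines of the form
Thought: <your reasoning>
Action: <tool>[<input>]        (available tools: search, calc)
Observation: <result, provided by the system>
and end with
Final Answer: <answer>

Question: {question}
"""

def run_agent(llm, question, max_steps=5):
    transcript = TEMPLATE.format(question=question)
    for _ in range(max_steps):
        reply = llm(transcript)                  # model reasons and picks an action
        transcript += reply + "\n"
        if "Final Answer:" in reply:
            return reply.rsplit("Final Answer:", 1)[1].strip()
        m = re.search(r"Action:\s*(\w+)\[(.*?)\]", reply)
        if m:                                    # execute the tool, feed result back
            tool, arg = m.groups()
            transcript += f"Observation: {TOOLS[tool](arg)}\n"
    return "(no final answer within the step budget)"
```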

With these definitions in place, we currently (April/May/June 2023) see a 3rd level of generative AI that implements agents that can solve complex tasks through the interaction of different LLMs in complex chains, e.g.
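One common way such multi-LLM chains are wired up is a draft-critique-revise loop. A minimal sketch, reusing the same kind of `llm` placeholder as in the agent sketch above (the pattern, not any specific system mentioned here):

```python
# Sketch of a 3rd-level pattern: two LLM roles chained together. A drafting
# model proposes a solution, a critic model reviews it, and the draft is
# revised until the critic approves or the round budget runs out.
def solve_with_critic(llm, task, max_rounds=3):
    draft = llm(f"Solve the following task step by step:\n{task}")
    for _ in range(max_rounds):
        critique = llm(
            f"Task: {task}\nDraft solution:\n{draft}\n"
            "List concrete flaws, or reply with the single word APPROVED."
        )
        if "APPROVED" in critique:
            break
        draft = llm(
            f"Task: {task}\nDraft:\n{draft}\nCritique:\n{critique}\n"
            "Rewrite the draft so that every point of criticism is addressed."
        )
    return draft
```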

However, older publications like Cicero may also fall into this category of complex applications. Typically, these agent implementations are (currently) not built on top of the 2nd-level generative AI frameworks. But this is going to change.

Other, simpler applications that just allow semantic search over private documents with a locally hosted LLM and embedding generation may also be of interest in this context, such as PrivateGPT, which is based on LangChain and Llama (with functionality similar to OpenAI’s ChatGPT Retrieval Plugin). Also worth noting are applications that concentrate on the code-generation ability of LLMs, like GPT-Code-UI and OpenInterpreter, both open-source implementations of OpenAI’s ChatGPT Code Interpreter/Advanced Data Analysis (similar to Bard’s implicit code execution; an alternative to Code Interpreter is the Noteable plugin), or smol-ai developer, which generates complete source code from a markup description.
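The retrieval pattern behind such applications fits in a few lines. This sketch assumes the sentence-transformers library and Faiss (mentioned above) with common default model names; the final LLM call is left as a stub for any locally hosted model:

```python
# Minimal semantic search over private documents (the idea behind PrivateGPT):
# embed documents, index them in Faiss, retrieve the best match for a query,
# and pass it as context to a locally hosted LLM.
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Contract A expires in May 2024.",
    "Server B runs Ubuntu 22.04 and hosts the vector database.",
]
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vecs.shape[1])  # inner product = cosine on unit vectors
index.add(doc_vecs)

query = "When does contract A end?"
q_vec = embedder.encode([query], normalize_embeddings=True)
_, ids = index.search(q_vec, 1)               # retrieve the single best document
context = docs[ids[0][0]]

prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# answer = local_llm(prompt)  # any locally hosted LLM, as in PrivateGPT
```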
There is a nice overview of LLM Powered Autonomous Agents on GitHub.

The next level may then be governed by embodied LLMs and agents (like PaLM-E with E for Embodied).

Baker Lab open-sourced RFdiffusion

On March 30, 2023, the Baker Lab announced that RFdiffusion (a powerful guided diffusion model for protein design) is now free and open source. The source code is available via ColabFold (as a Google Colab notebook) and on GitHub.

Proteins made via RFdiffusion have the potential to prevent infections, combat cancer, reverse autoimmune disorders, and serve as key components in advanced materials.

More information can be found in the papers [1] and [2].

RFdiffusion from Baker Lab solves the Protein Generation Problem

While ProteinMPNN takes a protein backbone (N-CA-C-O atoms, CA = C-alpha) and finds an amino acid sequence that would fold into that backbone structure, RFdiffusion [Twitter] works the other way around: it generates the protein backbone itself from a few geometric and functional constraints such as “create a molecule that binds X”.
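Conceptually, the two tools therefore compose into a single design pipeline: constraints go in, a backbone comes out, and sequences for that backbone are designed and validated. In the sketch below every function is a stub standing in for the real tool, whose actual interfaces differ:

```python
# Conceptual design pipeline; every function is a stub standing in for the
# real tools (RFdiffusion, ProteinMPNN, a structure predictor like AlphaFold).
def rfdiffusion_sample(constraints):
    """Stub: constraints (e.g. 'binds X') -> backbone coordinates."""
    return "backbone-coordinates"

def proteinmpnn_design(backbone, n=8):
    """Stub: backbone -> n candidate amino acid sequences."""
    return [f"CANDIDATE-SEQUENCE-{i}" for i in range(n)]

def refolds_to(sequence, backbone):
    """Stub: re-fold the sequence and check it matches the backbone."""
    return True

def design_protein(constraints):
    backbone = rfdiffusion_sample(constraints)
    return [s for s in proteinmpnn_design(backbone) if refolds_to(s, backbone)]

print(design_protein("create a molecule that binds X"))
```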

The authors used a guided diffusion model to generate new proteins in the same way that DALL-E produces high-quality images that have never existed before: via a diffusion technique.
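The sampling idea is the same in both domains: start from pure noise and iteratively denoise with a trained network. Here is a deliberately simplified toy version, with a stub in place of the trained model:

```python
# Toy illustration of diffusion sampling: start from pure noise and repeatedly
# remove the noise a model predicts, moving step by step toward a clean sample.
import numpy as np

def noise_predictor(x, t):
    # Stub for a trained network: it simply declares all of x to be noise,
    # so sampling shrinks toward the data mean (zero). Real systems (DALL-E-
    # style image models, RFdiffusion for backbones) predict structured noise.
    return x

def sample(shape, steps=50, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)                  # x_T: pure noise
    for t in reversed(range(1, steps + 1)):
        x = x - noise_predictor(x, t) / steps       # small denoising step
        if t > 1:                                   # re-inject a little noise,
            x += 0.05 * rng.standard_normal(shape)  # except at the final step
    return x

print(sample((4,)))   # four "denoised" values near the data mean
```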

See also this presentation by David Baker.

If I interpret this announcement correctly, it means that drug design is now basically solved (or is just starting to get interesting, depending on the viewpoint).

This technique can be expected to significantly increase the number of potential drugs for combating diseases. However, animal tests and human studies can also be expected to become the bottleneck of these new possibilities. Techniques like organ chips from companies like Emulate may be a way out of this dilemma (before, one day, entire cell, tissue, or whole-body computational simulations become possible).

ProteinMPNN from Baker Lab can reverse AlphaFold

The software tool ProteinMPNN (Message Passing Neural Network) from Baker Lab can predict, from a given 3D protein structure, possible amino acid sequences that would fold into that structure, effectively reversing what AlphaFold from DeepMind or ESMFold from Meta do. The approach thus makes it possible to design proteins. With a DNA/RNA printer such as the BioXp from TelesisBio or the Syntax system from DNAScript, it is then possible to directly output the desired protein, or a virus that generates the protein in a cell when injected into the body.
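To give a feel for the message-passing idea in the name, here is a deliberately tiny toy: residues become nodes of a nearest-neighbour graph over CA coordinates, features are mixed along the edges, and a readout scores the 20 amino acids per position. The weights are random and untrained; this is not ProteinMPNN’s real architecture or API:

```python
# Toy sketch of the inverse-folding idea: backbone geometry in, sequence out.
# Not the real ProteinMPNN architecture or API; weights are random, untrained.
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def design_sequence(ca_coords, k=8, rounds=3, seed=0):
    dists = np.linalg.norm(ca_coords[:, None] - ca_coords[None, :], axis=-1)
    neighbours = np.argsort(dists, axis=1)[:, 1:k + 1]   # k nearest residues per node
    feats = np.stack([dists.mean(axis=1),                # crude geometric features:
                      np.partition(dists, 1, axis=1)[:, 1]], axis=1)  # mean + nearest dist
    for _ in range(rounds):                              # message passing along edges
        feats = 0.5 * feats + 0.5 * feats[neighbours].mean(axis=1)
    W = np.random.default_rng(seed).standard_normal((feats.shape[1], 20))
    logits = feats @ W                                   # per-residue amino acid scores
    return "".join(AMINO_ACIDS[i] for i in logits.argmax(axis=1))

fake_backbone = np.random.default_rng(1).standard_normal((10, 3)) * 5.0
print(design_sequence(fake_backbone))   # e.g. a 10-residue candidate sequence
```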

The source code is available on GitHub and has also already been integrated into a Hugging Face space.
