OpenAI has officially launched the ChatGPT app for iOS users in the US. The app comes with a range of notable features:
Free of Charge: The ChatGPT app can be downloaded and used free of cost.
Sync Across Devices: Users can maintain their chat history consistently across multiple devices.
Voice Input via Whisper: The app integrates Whisper, OpenAI’s open-source speech-recognition system, allowing users to enter prompts by voice.
Exclusive Benefits for ChatGPT Plus Subscribers: Those who subscribe to ChatGPT Plus can utilize GPT-4’s enhanced capabilities. They also receive early access to new features and benefit from faster response times.
Initial US Rollout: The app is initially launching in the US, with a plan to expand its availability to other countries in the upcoming weeks.
Android Version Coming Soon: OpenAI has confirmed that Android users can expect to see the ChatGPT app on their devices in the near future. Further updates are expected soon.
LMQL (Language Model Query Language) is a programming language for interacting with large language models (LLMs). It facilitates LLM interaction by combining the benefits of natural language prompting with the expressiveness of Python.
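For illustration, here is a minimal LMQL query, a sketch based on LMQL's early documented query syntax; exact API details (model name, constraint functions, sync vs. async invocation) are assumptions and differ between versions:

```python
import asyncio

import lmql

# A minimal LMQL query: top-level strings are prompts, [VARS] are filled by the
# model, and the where-clause constrains decoding (sketch; details may differ
# between LMQL versions).
@lmql.query
async def capital(country):
    '''
    argmax
        "Q: What is the capital of {country}?\n"
        "A: [ANSWER]"
    from
        "openai/text-davinci-003"
    where
        STOPS_AT(ANSWER, ".")
    '''

# Queries are async in this LMQL version; run via asyncio.
print(asyncio.run(capital("France")))
```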
Guidance is a Python library by Microsoft that provides tools to enhance control over modern language models. It offers features that allow for more efficient and effective use of these models, including intuitive syntax, rich output structure, and easy integration with other libraries like HuggingFace.
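A minimal sketch of Guidance's template syntax (the model name and the concrete template are illustrative):

```python
import guidance

# Pick a backing model (illustrative model name).
guidance.llm = guidance.llms.OpenAI("text-davinci-003")

# Templates interleave fixed text with generated slots: 'select' constrains
# the model's output to a fixed set of choices, enforcing output structure.
program = guidance("""Is the following sentence offensive? Answer Yes or No.
Sentence: {{sentence}}
Answer: {{#select "answer"}}Yes{{or}}No{{/select}}""")

result = program(sentence="I love the beach.")
print(result["answer"])  # guaranteed to be "Yes" or "No"
```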
Mojo combines the usability of Python with the performance of C/C++/CUDA.
NeMo Guardrails is an open-source framework by NVIDIA, available on GitHub. It helps developers ensure that their LLM-powered applications are accurate, appropriate, on topic, and secure by defining boundaries around the apps. It supports topical, safety, and security guardrails and can be used on top of LangChain. Guardrails are a set of programmable constraints between a user and an LLM, formulated as flows in a Colang file. Colang is a modeling language and runtime developed by NVIDIA for conversational AI.
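A minimal usage sketch (the Colang flow and model configuration are illustrative):

```python
from nemoguardrails import LLMRails, RailsConfig

# Illustrative Colang content defining a simple topical rail.
colang_content = """
define user ask politics
    "what do you think about the government?"

define bot refuse to answer politics
    "I'm sorry, I can't comment on political topics."

define flow politics
    user ask politics
    bot refuse to answer politics
"""

# Illustrative model configuration.
yaml_content = """
models:
  - type: main
    engine: openai
    model: text-davinci-003
"""

# Build the rails and wrap the conversation with the LLM.
config = RailsConfig.from_content(colang_content=colang_content, yaml_content=yaml_content)
rails = LLMRails(config)
print(rails.generate(messages=[{"role": "user", "content": "What about the election?"}]))
```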
At Google I/O on May 10, 2023, Google revealed PaLM 2 (API, paper), its latest AI language model, which powers 25 Google products, including Search, Gmail, Docs, Assistant, Translate, and Photos.
PaLM 2 comes in four model sizes: Gecko, Otter, Bison, and Unicorn. Gecko is lightweight enough to run on mobile devices.
PaLM 2 can be fine-tuned on domain-specific knowledge (Sec-PaLM with security knowledge, Med-PaLM 2 with medical knowledge).
Bard now works with PaLM 2. With extensions, Bard can call tools like Sheets, Colab for coding, Lens, Maps, Adobe Firefly to create images, etc. Bard is multimodal and can understand images.
PaLM 2 also powers Duet AI for Google Cloud, a generative AI collaborator designed to help users learn, build, and operate faster.
PaLM 2 has been released in 180+ countries and regions; however, it is not yet available in Canada or the EU, for example.
Google also announced the availability of MusicLM, a text-to-music generative model.
OpenAI reacted to this announcement on May 12 by announcing that Browsing & Plugins would be rolled out to all Plus users over the subsequent week. As of May 17, I can confirm that both features are now operational for me.
1st-level generative AI as applications that are directly based on X-to-Y models (foundation models that build a kind of operating system for downstream tasks) where X and Y can be text/code, image, segmented image, thermal image, speech/sound/music/song, avatar, depth, 3D, video, 4D (3D video, NeRF), IMU (Inertial Measurement Unit), amino acid sequences (AAS), 3D-protein structure, sentiment, emotions, gestures, etc., e.g.
X = 3D protein structure, Y = AAS: ProteinMPNN (from Baker Lab)
X = design constraints, Y = 3D protein structure: RFdiffusion (from Baker Lab)
and 2nd-level generative AI that builds a kind of middleware and makes it possible to implement agents by simplifying the combination of LLM-based 1st-level generative AI with other tools via actions (like web search, semantic search [based on embeddings and vector databases like Pinecone, Chroma, Milvus, Faiss], source code generation [REPL], calls to math tools like Wolfram Alpha, etc.), using special prompting techniques (like templates, Chain-of-Thought [CoT], Self-Consistency, Self-Ask, Tree of Thoughts, ReAct [Reason + Act], Graph of Thoughts) within action chains, e.g.
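As a concrete sketch of such 2nd-level middleware, here is a minimal LangChain ReAct-style agent that combines an LLM with a web-search and a math tool (tool and agent names follow LangChain's conventions at the time; treat the details as illustrative):

```python
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

# Tools give the LLM actions: web search (SerpAPI) plus an LLM-backed math chain.
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# "zero-shot-react-description" implements the ReAct (Reason + Act) prompting scheme:
# the LLM alternates between reasoning steps and tool calls until it has an answer.
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("Who is the CEO of OpenAI, and what is 17 raised to the power 0.5?")
```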
We currently (April/May/June 2023) see a 3rd level of generative AI: agents that can solve complex tasks through the interaction of different LLMs in complex chains, e.g.
However, older publications like Cicero may also fall into this category of complex applications. Typically, these agent implementations are (currently) not built on top of the 2nd-level generative AI frameworks. But this is going to change.
Other, simpler applications that just allow semantic search over private documents with a locally hosted LLM and embedding generation may also be of interest in this context, such as PrivateGPT, which is based on LangChain and Llama (functionality similar to OpenAI’s ChatGPT-Retrieval plugin). Applications that concentrate on the code-generation ability of LLMs should also be noticed, like GPT-Code-UI and OpenInterpreter, both open-source implementations of OpenAI’s ChatGPT Code Interpreter/Advanced Data Analysis (similar to Bard’s implicit code execution; an alternative to Code Interpreter is the Noteable plugin), or smol-ai developer (which generates the complete source code from a markup description). There is a nice overview of LLM Powered Autonomous Agents on GitHub.
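A minimal sketch of such semantic search over private documents with LangChain (the embedding model and vector store chosen here are illustrative, not PrivateGPT's exact stack):

```python
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma

# Load a private document and split it into chunks.
docs = TextLoader("my_private_notes.txt").load()
chunks = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# Embed the chunks into a vector store (Chroma here; Pinecone, Milvus, or Faiss work similarly).
db = Chroma.from_documents(chunks, OpenAIEmbeddings())

# Answer questions by retrieving relevant chunks and passing them to the LLM.
qa = RetrievalQA.from_chain_type(llm=OpenAI(), retriever=db.as_retriever())
print(qa.run("What did I write about the project deadline?"))
```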
The next level may then be governed by embodied LLMs and agents (like PaLM-E with E for Embodied).
The Future of Life Institute initiated an open letter in which they call on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least 6 months [notice that OpenAI has already been training GPT-5 for some time]. They state that powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.
The time gained should be used by AI experts to develop safety protocols that make the systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal. In addition, they ask policymakers and AI developers to develop robust AI governance systems. They also demand well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
Notice that the letter is not against further AI development; it merely asks to slow down and give society a chance to adapt.
The letter was signed by several influential people, e.g. Elon Musk (CEO of SpaceX, Tesla & Twitter), Emad Mostaque (CEO of Stability AI), Yuval Noah Harari (Author), Max Tegmark (president of Future of Life Institute), Yoshua Bengio (Mila, Turing Prize winner), Stuart Russell (Berkeley).
However, it should be noted that even more influential people in the AI scene have not (yet) signed this letter, none from OpenAI, Google/DeepMind, or Meta.
This is not the first time the Future of Life Institute has taken action on AI development. In 2015, they presented an open letter signed by over 1,000 robotics and AI researchers urging the United Nations to impose a ban on the development of weaponized AI.
The Future of Life Institute is a non-profit organization that aims to mitigate existential risks facing humanity, including those posed by AI.
Yann LeCun answered the request on Twitter with a nice fictitious anecdote: The year is 1440 and the Catholic Church has called for a 6-month moratorium on the use of the printing press and the movable type. Imagine what could happen if commoners get access to books! They could read the Bible for themselves and society would be destroyed.
OpenAI announced on Mar 23, 2023, the availability of plugins within ChatGPT. Access is currently limited to ChatGPT Plus subscribers who joined a waitlist and were selected by OpenAI.
Plugins can be called automatically by ChatGPT’s underlying LLM (Large Language Model, currently GPT-3.5 or GPT-4) to answer the user’s questions.
To make this work, plugins have to be registered in the ChatGPT user interface with a manifest file (ai-plugin.json) that is hosted in the developer’s domain at yourdomain.com/.well-known/ai-plugin.json. The file contains, in a prescribed format: metadata about the plugin (name, logo), details about the authentication mechanism, an OpenAPI specification for the endpoints of the API, and a general description for the LLM of what the plugin can do. The web app API needs to define an endpoint “/.well-known/ai-plugin.json” to access the content of this file.
In addition to the manifest file, an openapi.yaml file that defines the OpenAPI specification has to be generated; it is referenced in the “api” section of the manifest file via the “url” field. This file contains a detailed description of the API endpoints. The web app API needs to define an endpoint “/openapi.yaml” to access the content of this file.
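A minimal sketch of how a plugin's web app could serve both files (Flask is just one possible choice; all field values are illustrative placeholders following the documented manifest format):

```python
from flask import Flask, jsonify, send_file

app = Flask(__name__)

# Illustrative manifest following the documented ai-plugin.json format.
PLUGIN_MANIFEST = {
    "schema_version": "v1",
    "name_for_human": "TODO Plugin",
    "name_for_model": "todo",
    "description_for_human": "Manage your TODO list.",
    "description_for_model": "Plugin for managing a TODO list. Can add and list items.",
    "auth": {"type": "none"},
    "api": {
        "type": "openapi",
        "url": "https://yourdomain.com/openapi.yaml",  # references the OpenAPI spec
    },
    "logo_url": "https://yourdomain.com/logo.png",
    "contact_email": "support@yourdomain.com",
    "legal_info_url": "https://yourdomain.com/legal",
}

@app.route("/.well-known/ai-plugin.json")
def plugin_manifest():
    # ChatGPT fetches this file when the plugin is registered.
    return jsonify(PLUGIN_MANIFEST)

@app.route("/openapi.yaml")
def openapi_spec():
    # Serves the OpenAPI specification describing the plugin's endpoints.
    return send_file("openapi.yaml", mimetype="text/yaml")

if __name__ == "__main__":
    app.run(port=5002)
```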
When the user has activated a registered plugin and starts a conversation, the plugin’s description is injected into the message to ChatGPT, but invisible to the user. In this way, the LLM may choose an API call from the plugin if this seems relevant to the user’s question. The LLM will then incorporate the API result into the response to the user. More details can be found in OpenAI’s documentation.
Among the already available plugins, a few stand out. With the Wolfram plugin, all kinds of computational problems can be solved. And with the Zapier plugin, more than 5,000 apps can be accessed. OpenAI itself introduced a web browser (that uses the Bing search API) and a code interpreter plugin (that runs in a sandbox without an internet connection). In addition, they open-sourced the code for a knowledge-base retrieval plugin, which has to be self-hosted by a developer.
Interestingly enough, OpenAI notes that plugins will likely have wide-ranging societal implications and that language models with access to tools will likely have much greater economic impacts than those without. They expect the current wave of AI technologies to have a big effect on the pace of job transformation, displacement, and creation. OpenAI discusses the potential impact of large language models on the labor market in a recent publication.
Just a day after OpenAI’s announcement of ChatGPT plugins, the open-source community had already integrated these plugins into LangChain as well. This is done simply by referring to the plugin manifest file ai-plugin.json (see Twitter), e.g.:
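A sketch along the lines of the examples circulating at the time (the plugin URL is illustrative):

```python
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI
from langchain.tools import AIPluginTool

# Register a ChatGPT plugin with LangChain just via its manifest URL (illustrative plugin).
tool = AIPluginTool.from_plugin_url("https://www.klarna.com/.well-known/ai-plugin.json")

# The plugin tool tells the agent which API calls exist; the requests tools execute them.
llm = OpenAI(temperature=0)
tools = load_tools(["requests_all"]) + [tool]
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("What T-shirts are available on Klarna?")
```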
All the other exciting news of the week is well summarized by Matt Wolfe (Google Bard, NVIDIA GTC, Adobe Firefly, Image Generation in Bing via DALL-E2, Microsoft Loop, AI in Canva, GitHub Copilot X, AI in Ubisoft, Metahuman by Unreal Engine).
On the same day as OpenAI released GPT-4 (March 14, 2023), Google also announced the availability of the PaLM API for developers on Google Cloud [video]. They said that they are now providing access to foundation models on Google Cloud’s Vertex AI platform, initially for generating text and images, and over time also for audio and video. In addition, with the Generative AI App Builder, they introduced the possibility of quickly building AI-powered chat interfaces and digital assistants.
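A minimal sketch of calling a text foundation model via the Vertex AI Python SDK as it later became available (project ID, model name, and parameters are assumptions):

```python
import vertexai
from vertexai.preview.language_models import TextGenerationModel

# Initialize the SDK for your Google Cloud project (placeholder values).
vertexai.init(project="your-project-id", location="us-central1")

# Load a PaLM text model and generate a completion (model name illustrative).
model = TextGenerationModel.from_pretrained("text-bison@001")
response = model.predict(
    "Suggest a name for a flower shop.",
    temperature=0.2,
    max_output_tokens=64,
)
print(response.text)
```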
Finally, Google also made generative AI features within Google Workspace (Gmail and Google Docs) available to a limited set of trusted test users.
OpenAI released GPT-4 within ChatGPT on March 14, 2023, described in detail in a 98-page paper (summarized on YouTube).
Available to ChatGPT Plus subscribers (currently with a cap that changes over time, e.g. 100 messages every 4 hours, or 25 messages every 3 hours).
Still based on training data with a cutoff of September 2021.
It still does not learn from its experience.
Still no internet access.
The training was already finalized in Aug 2022.
Fine-tuned via RLHF (Reinforcement Learning from Human Feedback).
The API waitlist is open (so not everyone has API access yet).
API prices (for comparison: gpt-3.5-turbo costs $0.002 per 1K tokens):
gpt-4: 8K context window (about 13 pages of text) will cost $0.03 per 1K prompt tokens and $0.06 per 1K completion tokens.
gpt-4-32k: 32K context window (about 52 pages of text) will cost $0.06 per 1K prompt tokens and $0.12 per 1K completion tokens.
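As a quick worked example of these prices:

```python
# One gpt-4 (8K context) call with 1,000 prompt tokens and 500 completion tokens.
prompt_tokens, completion_tokens = 1_000, 500
cost = prompt_tokens / 1_000 * 0.03 + completion_tokens / 1_000 * 0.06
print(f"${cost:.3f}")  # $0.060, vs. $0.003 for the same call with gpt-3.5-turbo
```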
Neither the number of parameters nor the size of the training data set has been published. Competitors are thus not given these performance ingredients to replicate but are instead referred to a freely available benchmark framework (OpenAI Evals) that measures the real performance.
GPT-4 ranks among the top 10% on the bar exam and the top 0.5% in the Biology Olympiad.
GPT-4 can handle contexts of over 25,000 words.
GPT-4 can accept images as inputs and generate captions, classifications, and analyses. However, this image-to-text functionality is not yet publicly available.
Microsoft Bing had already been using an early version of GPT-4 in the weeks before the release.
An excellent overview by Greg Brockman, President and co-founder of OpenAI, can be found on YouTube.
Microsoft released Visual ChatGPT on March 08, 2023, in a paper and with source code on GitHub and Hugging Face. Although this does not seem to be GPT-4-based, it demonstrates similar image capabilities via a combination of pre-existing technologies (generating/modifying images [text-to-image] and describing them [image-to-text]).
Two days after the GPT-4 release, Microsoft announced on March 16, 2023, the integration of GPT-4 into their Office products as a feature they called Copilot. Copilot is not yet available for general use, but Microsoft plans to roll it out gradually to selected customers in the coming months.
On March 01, 2023, OpenAI announced the release of APIs for ChatGPT (published on Nov 30, 2022) and for the automatic speech recognition (ASR) engine Whisper for speech-to-text (STT) transcription (and translation), which was open-sourced in Sept 2022.
The ChatGPT model family is called gpt-3.5-turbo and costs just $0.002 per 1K tokens, which is 10 times cheaper than the existing GPT-3.5 models. Instead of consuming unstructured text as traditionally done by GPT, the ChatGPT models consume a sequence of messages with metadata, following a new format called Chat Markup Language (ChatML). The number of tokens (tokens in prompt + tokens in response, as available via response['usage']['total_tokens']) is restricted to 4,096. Note that gpt-3.5-turbo models cannot be fine-tuned.
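A minimal call sketch with the openai Python package as of that release:

```python
import openai

# Chat models consume a list of role-tagged messages (ChatML) instead of raw text.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain ChatML in one sentence."},
    ],
)
print(response["choices"][0]["message"]["content"])
print(response["usage"]["total_tokens"])  # prompt + response tokens, capped at 4096
```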
For Whisper, the large-v2 model is now available through an API for a price of $0.006 per minute. The API contains endpoints for transcriptions (transcribes in the source language) and translations (transcribes into English).
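A sketch of both endpoints (the audio file name is illustrative):

```python
import openai

# Transcription: speech-to-text in the source language.
with open("interview_de.mp3", "rb") as f:
    transcript = openai.Audio.transcribe("whisper-1", f)
print(transcript["text"])

# Translation: transcribes the audio directly into English.
with open("interview_de.mp3", "rb") as f:
    translation = openai.Audio.translate("whisper-1", f)
print(translation["text"])
```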
In addition, the possibility of dedicated instances for professional users was announced, which can make economic sense beyond ~450M tokens per day.
A significant change made in the Terms of Service and Usage Policies is that data submitted to the API is no longer used for service improvements (e.g. model training) unless an organization opts in. Previously, it was necessary to opt out.
Microsoft has introduced a new language-modeling approach for text-to-speech synthesis (TTS) called VALL-E. The approach uses discrete codes derived from an off-the-shelf neural audio codec model and is trained on 60K hours of English speech, hundreds of times more than existing systems use. VALL-E can synthesize high-quality personalized speech with only a 3-second enrolled recording of an unseen speaker as an acoustic prompt (project page, paper).
An unofficial PyTorch implementation of VALL-E is available on GitHub.