
OpenAI launches ChatGPT app for iOS

OpenAI has officially launched the ChatGPT app for iOS users in the US. The app comes with a range of notable features:

  • Free of Charge: The ChatGPT app can be downloaded and used at no cost.
  • Sync Across Devices: Users can maintain their chat history consistently across multiple devices.
  • Voice Input via Whisper: The app integrates Whisper, OpenAI’s open-source speech-recognition system, allowing users to enter prompts by voice.
  • Exclusive Benefits for ChatGPT Plus Subscribers: Those who subscribe to ChatGPT Plus can utilize GPT-4’s enhanced capabilities. They also receive early access to new features and benefit from faster response times.
  • Initial US Rollout: The app is initially launching in the US, with a plan to expand its availability to other countries in the upcoming weeks.
  • Android Version Coming Soon: OpenAI has confirmed that an Android version of the ChatGPT app will follow in the near future, with further updates expected soon.

3rd-Level of Generative AI 

Defining 

1st-level generative AI as applications that are directly based on X-to-Y models (foundation models that act as a kind of operating system for downstream tasks), where X and Y can be text/code, image, segmented image, thermal image, speech/sound/music/song, avatar, depth, 3D, video, 4D (3D video, NeRF), IMU (Inertial Measurement Unit) data, amino acid sequences (AAS), 3D protein structure, sentiment, emotions, gestures, etc., e.g.

and 2nd-level generative AI that provides a kind of middleware and makes it possible to implement agents by simplifying the combination of LLM-based 1st-level generative AI with other tools via actions (like web search, semantic search [based on embeddings and vector databases like Pinecone, Chroma, Milvus, Faiss], source-code generation [REPL], calls to math tools like Wolfram Alpha, etc.) and by using special prompting techniques (like templates, Chain-of-Thought [CoT], Self-Consistency, Self-Ask, Tree of Thoughts, ReAct [Reason + Act], Graph of Thoughts) within action chains, e.g.

we currently (April/May/June 2023) see a 3rd-level of generative AI that implements agents able to solve complex tasks through the interaction of different LLMs in complex chains, e.g.

However, older publications like Cicero may also fall into this category of complex applications. Typically, these agent implementations are (currently) not built on top of the 2nd-level generative AI frameworks, but this is likely to change.
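To make the 2nd-level middleware idea more tangible, here is a minimal sketch of a ReAct-style action loop in which a chat LLM may call a single tool. The prompt wording, the calculator tool, and the react_agent helper are illustrative assumptions (this is not the API of LangChain or any other framework), and the pre-1.0 openai Python SDK discussed further below is assumed.

```python
# Minimal ReAct-style agent loop (illustrative sketch only, not the implementation
# of any particular framework). Assumes the pre-1.0 openai Python SDK; the prompt
# format and the toy calculator tool are assumptions for illustration.
import re
import openai

openai.api_key = "sk-..."  # placeholder, set your own key

TOOLS = {
    # toy calculator tool; eval is restricted to plain arithmetic expressions
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}}, {})),
}

SYSTEM = (
    "Answer the question. You may use a tool by writing a line "
    "'Action: <tool>: <input>'. After you receive an 'Observation:', continue "
    "reasoning. Finish with 'Final Answer: <answer>'."
)

def react_agent(question: str, max_steps: int = 5) -> str:
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = openai.ChatCompletion.create(
            model="gpt-3.5-turbo", messages=messages
        )["choices"][0]["message"]["content"]
        messages.append({"role": "assistant", "content": reply})
        if "Final Answer:" in reply:
            return reply.split("Final Answer:", 1)[1].strip()
        match = re.search(r"Action: (\w+): (.+)", reply)
        if match:  # run the requested tool and feed the result back as an observation
            tool, arg = match.group(1), match.group(2).strip()
            observation = TOOLS.get(tool, lambda _: "unknown tool")(arg)
            messages.append({"role": "user", "content": f"Observation: {observation}"})
    return "No final answer within the step limit."

print(react_agent("What is 17 * 23 + 5?"))
```

Frameworks of this 2nd level essentially add structured output parsing, multiple tools, memory, and retries on top of exactly this kind of loop.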

Other, simpler applications that just allow semantic search over private documents with a locally hosted LLM and local embedding generation may also be of interest in this context, such as PrivateGPT, which is based on LangChain and Llama (functionality similar to OpenAI’s ChatGPT-Retrieval plugin). Applications that concentrate on the code-generation ability of LLMs are also worth noting, e.g. GPT-Code-UI and OpenInterpreter, both open-source implementations of OpenAI’s ChatGPT Code Interpreter/Advanced Data Analysis (similar to Bard’s implicit code execution; the Noteable plugin is an alternative to Code Interpreter), or smol-ai developer, which generates complete source code from a markup description.
There is a nice overview of LLM Powered Autonomous Agents on GitHub.
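As a rough illustration of the semantic-search building block behind tools like PrivateGPT, the following sketch embeds a few documents locally and ranks them against a query by cosine similarity. The model name, example documents, and the search helper are assumptions for illustration, not PrivateGPT’s actual code.

```python
# Minimal local semantic-search sketch: embed documents, embed the query,
# rank by cosine similarity. Model name and corpus are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer  # local embedding model

docs = [
    "The Whisper API transcribes speech to text.",
    "GNSS receivers estimate position from satellite signals.",
    "ChatGPT Plus subscribers get early access to new features.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # runs fully locally
doc_vecs = model.encode(docs)
doc_vecs = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)  # unit length

def search(query: str, top_k: int = 2):
    q = model.encode([query])[0]
    q = q / np.linalg.norm(q)
    scores = doc_vecs @ q  # cosine similarity on unit vectors
    best = np.argsort(-scores)[:top_k]
    return [(docs[i], float(scores[i])) for i in best]

for text, score in search("How do I convert audio into text?"):
    print(f"{score:.3f}  {text}")
```

In a PrivateGPT-like setup, the top-ranked passages would then be passed as context to a locally hosted LLM, and a vector database such as Chroma or Faiss would replace the in-memory matrix for larger corpora.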

The next level may then be governed by embodied LLMs and agents (like PaLM-E with E for Embodied).

OpenAI releases ChatGPT and Whisper APIs

On March 1, 2023, OpenAI announced the release of APIs for ChatGPT (published on Nov 30, 2022) and for Whisper, the automatic speech recognition (ASR) engine for speech-to-text (STT) transcription (and translation) that was open-sourced in September 2022.

The ChatGPT model family is called gpt-3.5-turbo and costs just $0.002 per 1k tokens, which is 10 times cheaper than the existing GPT-3.5 models. Instead of consuming unstructured text as GPT traditionally does, the ChatGPT models consume a sequence of messages with metadata, following a new format called Chat Markup Language (ChatML). The number of tokens (tokens in prompt + tokens in response, as available via response['usage']['total_tokens']) is restricted to 4096. Note that gpt-3.5-turbo models cannot be fine-tuned.
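For illustration, a minimal call with this message format, using the pre-1.0 openai Python SDK available at the time, could look roughly as follows (the example messages and API key are placeholders):

```python
# Illustrative call to the ChatGPT API with the message format described above
# (pre-1.0 openai Python SDK). Messages and key are placeholders.
import openai

openai.api_key = "sk-..."  # placeholder, set your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the Whisper API in one sentence."},
    ],
)

print(response["choices"][0]["message"]["content"])
# prompt tokens + response tokens, limited to 4096 for gpt-3.5-turbo
print(response["usage"]["total_tokens"])
```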

For Whisper, the large-v2 model is now available through an API at a price of $0.006 per minute. The API provides endpoints for transcriptions (transcription in the source language) and translations (transcription into English).
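A corresponding sketch for the two Whisper endpoints, again with the pre-1.0 openai Python SDK (the audio file name is a placeholder; the hosted large-v2 model is exposed under the model name whisper-1):

```python
# Illustrative Whisper API calls (pre-1.0 openai Python SDK); file name is a placeholder.
import openai

openai.api_key = "sk-..."  # placeholder, set your own key

with open("meeting.mp3", "rb") as audio_file:
    # transcription in the source language
    transcript = openai.Audio.transcribe(model="whisper-1", file=audio_file)
    print(transcript["text"])

with open("meeting.mp3", "rb") as audio_file:
    # translation into English
    translation = openai.Audio.translate(model="whisper-1", file=audio_file)
    print(translation["text"])
```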

In addition, OpenAI announced the possibility of dedicated instances for professional users, which can make economic sense beyond roughly 450M tokens per day.
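For a rough sense of scale: at the gpt-3.5-turbo price of $0.002 per 1k tokens, ~450M tokens per day correspond to about 450,000,000 / 1,000 × $0.002 ≈ $900 per day (on the order of $27k per month) in pay-as-you-go cost, which indicates the volume at which a dedicated instance can start to pay off.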

A significant change in the Terms of Service and Usage Policies is that data submitted to the API is no longer used for service improvements (e.g. model training) unless an organization opts in. Previously, it was necessary to opt out.
