GNSS & Machine Learning Engineer


Google announced Gemini and AlphaCode 2

On Dec 6, 2023, Google launched its new multimodal model Gemini that will work across Google products like search, ads, and the chatbot Bard. Gemini was trained jointly across text, image, audio, and video and has a 32K context window.

Gemini 1.0 comes in three different sizes:

  • Gemini Ultra: largest and most capable model, to be released at the beginning of 2024
  • Gemini Pro: best model that is available immediately within Bard in 170 countries (but not yet in the EU and UK)
  • Gemini Nano: most efficient model for mobile devices with the same availability as Pro; Nano-1 (1.8B parameters), Nano-2 (3.25B parameters)

Achievements and capabilities: Gemini Ultra exceeds state-of-the-art results on 30 of 32 widely used academic benchmarks and is the first model to outperform human experts on MMLU (90.0%); see the Gemini report for details.

Some more sources: Google release note, Gemini report, Google Developer blog, YouTube: Matt Wolfe, AI Explained.

Interestingly, Gemini was trained on a large fleet of TPUv4 accelerators across multiple data centers. At such scales, machine failures due to cosmic rays are commonplace and have to be handled (Gemini report, page 4).

When paired with search and tool-use techniques, Gemini forms the basis for advanced reasoning systems like AlphaCode 2, which excels in competitive programming challenges against human competitors. AlphaCode 2, based solely on Gemini Pro and not yet on Gemini Ultra, shows a substantial improvement over its predecessor by solving 43% of problems on Codeforces, a 1.7x increase. In this way, AlphaCode 2 performs better than 87% of human competitors. However, due to the intensive compute required to generate, filter, and score up to a million candidate solutions, AlphaCode 2 is currently not feasible for customer use, although Google is working on this.
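
To make the scale of that pipeline concrete, here is a heavily simplified sketch of a sample-filter-score loop in the spirit of what the AlphaCode 2 report describes. All helper functions are hypothetical placeholders, not DeepMind’s implementation:

```python
# Heavily simplified sample-filter-score pipeline in the spirit of
# AlphaCode 2. All helper functions are hypothetical placeholders,
# not DeepMind's implementation.

def sample_solutions(problem: str, n: int) -> list[str]:
    """Placeholder: draw n candidate programs from a code model."""
    raise NotImplementedError

def passes_public_tests(code: str, problem: str) -> bool:
    """Placeholder: run a candidate against the problem's example tests."""
    raise NotImplementedError

def score(code: str, problem: str) -> float:
    """Placeholder: rank surviving candidates with a scoring model."""
    raise NotImplementedError

def solve(problem: str, n: int = 1_000_000, k: int = 10) -> list[str]:
    candidates = sample_solutions(problem, n)          # generate up to a million
    survivors = [c for c in candidates
                 if passes_public_tests(c, problem)]   # filter failing candidates
    survivors.sort(key=lambda c: score(c, problem),
                   reverse=True)                       # score the rest
    return survivors[:k]                               # submit the best k
```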

OpenAI fired and then rehired Sam Altman

Nov 17, 2023:
Sam Altman was fired from OpenAI. Greg Brockman was first removed as chairman of the board and later announced that he was quitting. Chief technology officer Mira Murati was appointed interim CEO.

From OpenAI’s announcement: “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.”

Some early discussions on YouTube: SVIC Podcast, Matt Wolfe.
Early details of what happened: Wes Roth.

No details are known as to why Sam Altman was fired, only speculation.

Nov 18, 2023:
More senior departures from OpenAI.
Some summaries of what is known a day later: [Matt Wolfe][AI Explained][MattVidPro AI][Yannic Kilcher].
Reports that the OpenAI board is in discussions with Sam Altman to return as CEO.

Still, no details are known as to why Sam Altman was fired, just more speculation.

Nov 19, 2023:
Emmett Shear (Twitch co-founder and former Twitch CEO) becomes OpenAI’s interim CEO (his tweet).

Still, no details are known as to why Sam Altman was fired, just more speculation.

Nov 20, 2023:
Announcement that Sam Altman and Greg Brockman will join Microsoft to lead a new Microsoft subsidiary.
Ilya Sutskever expressed regret for his participation in the board’s actions.
An employee letter to OpenAI’s board, signed by more than 700 of the 770 employees (including Ilya Sutskever and one-day CEO Mira Murati), demands the resignation of the whole board and the reinstatement of Sam Altman as CEO; otherwise the signatories will resign and join the newly announced Microsoft subsidiary.
OpenAI’s board approached Anthropic about a merger [1].
Summaries of the latest news: [Matt Wolfe][AI Explained][Bloomberg].

Still, no details are known as to why Sam Altman was fired, just more speculation.

Nov 21, 2023:
Summary of the latest status: [Wes Roth].

Instead of taking OpenAI’s merger offer, Anthropic announced a massive update with Claude 2.1 and a 200K context window.

Nov 22, 2023:
Sam Altman is back at OpenAI.
Summaries of the whole soap opera: [Matt Wolfe][AI Explained].

Still, no details are known as to why Sam Altman was fired, just more speculation.

Nov 23, 2023:
New rumors about Sam Altman’s ouster: Mira Murati told employees on Wednesday that a letter about an AI breakthrough called Q* precipitated the board’s actions.

Background of why Sam Altman may have been fired:
There is much speculation about safety-concerned people (like Ilya Sutskever) acting against those trying to accelerate AI commercialization (Sam Altman, Greg Brockman). As more and more money is poured in, there may be a concern about losing control over OpenAI’s mission to achieve AGI for the benefit of all of humanity. Interesting in this context is a statement [1][2] by Sam Altman at the APEC summit in San Francisco on November 16, 2023 (where US President Biden met Chinese President Xi Jinping) that OpenAI had recently made a major breakthrough. In addition, he made a statement [1][2] about the model’s capabilities within the next year. Does this mean that AGI was achieved within OpenAI? This is important in the context of OpenAI’s structure as a partnership between the original nonprofit and a capped-profit arm. The important parts of the document describing this structure are:

  • First, the for-profit subsidiary is fully controlled by the OpenAI Nonprofit…
  • Second, … The Nonprofit’s principal beneficiary is humanity, not OpenAI investors.
  • Fourth, profit allocated to investors and employees, including Microsoft, is capped. All residual value created above and beyond the cap will be returned to the Nonprofit for the benefit of humanity.
  • Fifth, the board determines when we’ve attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.

This means that once AGI is achieved (and the board decides when this is the case), investors can no longer benefit from further advancements. Their investment is basically lost.

Another speculation is that OpenAI board member Adam D’Angelo was behind Sam Altman’s ouster. Adam D’Angelo is co-founder and CEO of Quora, which built the chat platform Poe; Poe’s assistant feature, introduced on October 25, competes directly with OpenAI’s custom GPTs, made public on OpenAI’s DevDay on November 6, 2023. However, this reason has become less likely, as Adam is also part of the new board after Sam Altman’s reinstatement.

GitHub Universe 2023 Announcements

There were two major announcements about GitHub Copilot at the GitHub Universe 2023 conference on Nov 08, 2023.

GitHub Copilot Enterprise account:

  • Copilot personalized for your organization
  • Contains everything in Copilot Business
  • Chat personalized to your codebase
  • Documentation search and summaries
  • Pull request summaries
  • Code review skills
  • Fine-tuned models
  • Available in Feb 2024 for $39 per user/month

Copilot Workspace: [1]

  • Copilot Workspace automatically proposes a solution based on its deep understanding of the codebase.
  • Builds a step-by-step plan to implement the changes; if something isn’t quite right, the spec and plan are fully editable.
  • With the approval of the plan, Copilot automates the implementation of changes across the repository.
  • Copilot not only synthesizes code but also builds, tests, and validates the success of these changes.
  • This workspace is designed for collaboration. You can edit any of the suggested changes and if you accidentally introduce an error along the way, Copilot will automatically catch it, repair it, and rerun the code.
  • Makes it easy to create a pull request with a generated summary of the work, to merge and deploy fast.
  • Available in 2024.

OpenAI DevDay Announcements

On its DevDay, OpenAI rolled out an array of transformative updates and features [blog post, keynote recording]. Here’s a succinct rundown:

  • Recap: ChatGPT release Nov 30, 2022 with GPT-3.5. GPT-4 release in March 2023. Voice input/output, vision input with GPT-4V, text-to-image with DALL-E 3, ChatGPT Enterprise with enterprise security, higher speed access, and longer context windows. 2M developers, 92% of Fortune 500 companies building products on top of GPT, 100M weekly active users.
  • New GPT-4 Turbo: OpenAI’s most advanced AI model, 128K context window, knowledge up to April 2023. Reduced pricing: $0.01/1K input tokens (3x cheaper), $0.03/1K output tokens (2x cheaper). Improved function calling (multiple functions in a single message, always return valid functions with JSON mode, improved accuracy in returning the right function parameters). More deterministic model output via the reproducible-outputs beta. Access via gpt-4-1106-preview, stable release pending (see the API sketch after this list).
  • GPT-3.5 Turbo Update: Enhanced gpt-3.5-turbo-1106 model with 16K default context. Lower pricing: $0.001/1K input, $0.002/1K output. Fine-tuning available, reduced token prices for fine-tuned usage (input token prices 75% cheaper to $0.003/1K, output token prices 62% cheaper to $0.006/1K). Improved function calling, reproducible outputs feature.
  • Assistants API: Beta release for creating AI agents in applications. Supports natural language processing, coding, planning, and more. Enables persistent Threads, includes Code Interpreter, Retrieval, Function Calling tools. Playground integration for no-code testing.
  • Multimodal Capabilities: GPT-4 Turbo supports visual inputs in Chat Completions API via gpt-4-vision-preview. Integration with DALL·E 3 for image generation via Image generation API. Text-to-speech (TTS) model with six voices introduced.
  • Customizable GPTs in ChatGPT: New feature called GPTs allowing integration of instructions, data, and capabilities. Enables calling developer-defined actions, control over user experience, streamlined plugin to action conversion. Documentation provided for developers.
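
As a quick illustration of several of these features together (the new model name, JSON mode, and the reproducible-outputs beta), a minimal call with the v1 OpenAI Python SDK might look like the following sketch. It assumes the openai package is installed and an OPENAI_API_KEY environment variable is set; the prompt content is just an example.

```python
# Sketch: calling GPT-4 Turbo with JSON mode and the reproducible-outputs
# beta via the v1 OpenAI Python SDK. Assumes the openai package is
# installed and OPENAI_API_KEY is set; prompt content is just an example.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",                 # GPT-4 Turbo preview, 128K context
    messages=[
        {"role": "system", "content": "You reply in JSON."},
        {"role": "user", "content": "List three GNSS constellations."},
    ],
    response_format={"type": "json_object"},    # JSON mode: always valid JSON
    seed=42,                                    # reproducible outputs (beta)
)

print(response.choices[0].message.content)
print(response.system_fingerprint)  # identifies the backend configuration
```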

AI race is heating up: Announcements by Google/DeepMind, Meta, Microsoft/OpenAI, Amazon/Anthropic

After weeks of “less exciting” news in the AI space since the release of Llama 2 by Meta on July 18, 2023, there were a bunch of announcements in the last few days by major players: Google/DeepMind, Meta, Microsoft/OpenAI, and Amazon/Anthropic.

Room-temperature Superconductivity breakthrough?

A groundbreaking discovery has potentially been made in the field of superconductivity. Researchers from South Korea have developed a superconductor material, code-named LK-99 (short for the authors Lee and Kim, who made the first discovery of the material in 1999), that potentially operates at room temperature and atmospheric pressure. This would be a significant leap forward, overcoming the limitations of previous superconductors that required extremely low temperatures or high pressures to function.

Superconductivity, a quantum mechanical phenomenon where the electrical resistance of a material vanishes and magnetic flux fields are expelled from within the material, was first discovered by Dutch physicist Heike Kamerlingh Onnes in 1911. This discovery earned him the Nobel Prize in Physics in 1913. The implications of this phenomenon are vast, particularly for energy transmission and storage, as superconductors can conduct electricity with virtually no loss of energy.

One of the key features of superconductivity is the Meissner effect: a superconductor placed in a magnetic field expels that field from its interior. This is due to the superconductor’s perfect diamagnetism, and it leads to phenomena such as magnetic levitation.

Another significant contribution to the understanding of superconductivity came from Vitaly Ginzburg and Alexei Abrikosov, who, along with Anthony Leggett, were awarded the Nobel Prize in Physics in 2003. Ginzburg and Abrikosov developed the Ginzburg-Landau theory in the 1950s, a phenomenological theory that describes superconductivity in the vicinity of the critical temperature. This theory successfully explains many properties of superconductors, including the Meissner effect, and it has been instrumental in the development of the theory of type II superconductors, which remain superconducting in the presence of strong magnetic fields.

The understanding of superconductivity took a significant leap forward in 1957 when John Bardeen, Leon Cooper, and John Robert Schrieffer proposed the BCS theory. This theory, which explains how electrical resistance in certain materials disappears at very low temperatures, earned them the Nobel Prize in Physics in 1972. The theory introduced the concept of Cooper pairs, where electrons with opposite momenta and spins pair up and move through the lattice of positive ions in the material without scattering and losing energy.

In 1986, the discovery of high-temperature superconductors by Georg Bednorz and K. Alex Müller, who were awarded the Nobel Prize in Physics in 1987, marked another milestone in the field. These materials exhibited superconducting properties at temperatures higher than those predicted by the BCS theory, opening up new possibilities for practical applications.

Each superconductor has a critical temperature below which it exhibits superconductivity, and some require a minimum pressure. Traditional superconductors need extreme cooling and sometimes high pressure. High-temperature superconductors work at warmer temperatures, but still well below room temperature. The new material, LK-99, would be groundbreaking because it is claimed to remain superconducting at room temperature and atmospheric pressure.

The researchers published two papers discussing their findings on arXiv within two hours of each other on July 22, 2023. The first paper, “The First Room-Temperature Ambient-Pressure Superconductor”, was authored by Sukbae Lee, Ji-Hoon Kim, and Young-Wan Kwon. The second paper, “Superconductor Pb_10-x Cu_x (PO_4)_6 O showing levitation at room temperature and atmospheric pressure and mechanism”, was authored by the same first two researchers of the first paper along with Hyun-Tak Kim, Sungyeon Im, SooMin An, and Keun Ho Auh. The strategic authorship suggests a potential candidacy for the Nobel Prize, which can only be shared among three people.

In March 2023, the group filed for their international patent application, further solidifying their claim. However, the scientific community has expressed some skepticism due to a past incident. Ranga Dias, a physicist at the University of Rochester, had a paper published in Nature in October 2020 claiming room-temperature superconductivity in a carbonaceous sulfur hydride under extreme pressure. The paper was retracted in September 2022 after other researchers were unable to replicate the results. While we await conclusive evidence supporting the claim of room-temperature superconductivity, you can monitor the scientific community’s assessment of the claim here.

The LK-99 material has a critical current of 250 mA at 300 K (27 °C), which quickly drops towards almost 0 when approaching 400 K. The current generates a magnetic field that breaks down superconductivity. This is a crucial aspect, as high currents for generating high magnetic fields are central for applications in MRI machines and in fusion reactors, where the magnetic field is used to confine the plasma.

The proposed superconductor is not only revolutionary but also simple and inexpensive to produce. The process involves three steps explicitly explained in the second paper using common materials: lead oxide, lead sulfate, copper powder, and phosphorus. The resulting compound, Pb10-xCux(PO4)6O, is achieved through a series of heating and mixing processes.

The use of copper instead of lead in the superconductor results in a shrinkage of the lattice, an effect that was previously achieved through high pressure. This is related to the concept of a quantum well, a potential well with discrete energy values. According to the authors, this quantum-well effect is the underlying mechanism for superconductivity in LK-99.

The potential applications of room-temperature superconductors are transformative. They could lead to more efficient power transmission, reducing energy loss during transmission through power lines. They could also enable cheaper and simpler magnetic resonance imaging (MRI) machines, fusion reactors, high-speed magnetic trains, and quantum computers. In addition, they could lead to more efficient batteries, potentially revolutionizing the energy storage industry. A more detailed discussion of the implications of a room-temperature ambient-pressure superconductor that depends on whether strong or weak magnetic fields and currents are possible has been put together by Andrew Cote.

A comprehensive overview of this discovery has been provided in a YouTube video by ‘Two Bit da Vinci’.

The potential breakthrough discovery of the room-temperature superconductor LK-99 is not the only recent advancement in the field of superconductivity. In a related development, a team of scientists from MIT and their colleagues have created a simple superconducting device that could dramatically cut energy use in computing. This device, a type of diode or switch, could transfer current through electronic devices much more efficiently than is currently possible.

The team’s work, published in the July 13 online issue of Physical Review Letters, showcases a superconducting diode that is more than twice as efficient as similar ones reported by others. It could even be integral to emerging quantum computing technologies. The diode is nanoscopic, about 1,000 times thinner than the diameter of a human hair, and is easily scalable, meaning millions could be produced on a single silicon wafer.

The team discovered that the edge asymmetries within superconducting diodes, the ubiquitous Meissner screening effect found in all superconductors, and a third property of superconductors known as vortex pinning all came together to produce the diode effect. This discovery opens the door for devices whose edges could be “tuned” for even higher efficiencies.

These advancements in superconductivity, both in the creation of room-temperature superconductors and the development of highly efficient superconducting diodes, hold great promise for the future of technology and energy efficiency. They could lead to more efficient power transmission, revolutionize the energy storage industry, and dramatically cut the amount of energy used in high-power computing systems.

You can read more about the superconducting diode in the Phys.org article.

On July 29, 2023, there was an additional announcement by Taj Quantum in Florida of a Type II room-temperature superconductor (US patent 17249094).

Update 03.01.2023 [1][2]: Two Chinese labs have now also reported room-temperature superconductors.

Meta released Llama 2 free for Commercial Use

Meta open-sourced Llama 2 together with Microsoft; in contrast to Llama 1, it is free not just for research but also for commercial use.

  • Free for commercial use for businesses with fewer than 700 million monthly active users
  • Models with 70B, 13B, and 7B parameters
  • Llama-2-70B model is currently the strongest open-source LLM (Hugging Face leaderboard), comparable to GPT-3.5-0301, noticeably stronger than Falcon, MPT, and Vicuna
  • Not yet at GPT-3.5 level, mainly because of its weak coding abilities
  • RLHF fine-tuned
  • Source code on GitHub, weights available on Azure, AWS, and Hugging Face (see the loading sketch after this list)
  • Llama 2 paper
  • 4K token context window
  • Trained on 2 trillion tokens with training costs of about $20M
  • Knowledge cut-off Dec 2022
  • Testing on https://www.llama2.ai
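
For local experimentation, the weights on Hugging Face can be loaded as in the following sketch. It assumes the transformers and accelerate packages are installed and that access to the gated meta-llama repository has been granted:

```python
# Sketch: loading Llama-2-7B-chat from the Hugging Face Hub. Assumes the
# transformers and accelerate packages are installed and that access to
# the gated meta-llama repository has been granted.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("What is a GNSS receiver?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```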

Just 4 days after this announcement, on July 22, 2023, StabilityAI released FreeWilly1 and FreeWilly2, which are fine-tuned models based on LLaMA-65B and Llama-2-70B. These models took over the lead on the Hugging Face leaderboard. However, both models have no commercial license and are intended for research only.

GPT-4 in the top 1% of human thinkers in creativity test

In a recent study by the University of Montana, GPT-4 demonstrated remarkable performance in the Torrance Tests of Creative Thinking (TTCT, a standard test for measuring creativity), matching the top 1% of human thinkers. The model excelled in fluency and originality. These findings imply that the creative abilities of GPT-4 could potentially surpass those of humans.

For a recent benchmark on advanced reasoning capabilities of large language models take a look at the ARB (Advanced Reasoning Benchmark).

OpenAI gives all ChatGPT Plus users access to Code Interpreter

The ChatGPT code interpreter allows users to run code and upload individual data files (in .csv, .xlsx, or .json format) for analysis. Multiple files can be uploaded sequentially or within one ZIP file. To upload a file, click on the ‘+’ symbol just to the left of the ‘Send a message’ box, or, even simpler, use drag and drop.

The code interpreter functionality is accessible to ChatGPT Plus users and can be enabled in the settings under ‘Beta features’. Once enabled, this functionality will then appear in the configuration settings of any new chat under the ‘GPT-4’ section, where it also needs to be activated.

Given a prompt, the code interpreter will generate Python code that is then automatically executed in a sandboxed Python environment. If something goes wrong, for instance, if the generated source code requires the installation of a Python package or if the source code is simply incorrect, the code interpreter automatically attempts to fix these errors and tries again. This feature makes working with the code interpreter much more efficient. Before, it was necessary to paste ChatGPT’s proposal into a Jupyter notebook and run it from there. If errors occurred, these had to be fixed either independently or by manually pasting the error text back into ChatGPT so that it could provide a solution. This manual iterative procedure has now been automated with the code interpreter.
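
The loop that the code interpreter automates can be pictured roughly as follows. This is a minimal sketch with a stubbed-out LLM call, not OpenAI’s actual implementation:

```python
# Minimal sketch of the generate-execute-repair loop that the code
# interpreter automates. The LLM call is stubbed out; this is not
# OpenAI's actual implementation.
import subprocess
import sys
import tempfile

def ask_llm(prompt: str) -> str:
    """Placeholder: return Python source code generated by an LLM."""
    raise NotImplementedError

def run_with_repair(task: str, max_attempts: int = 3) -> str:
    prompt = f"Write Python code for this task:\n{task}"
    for _ in range(max_attempts):
        code = ask_llm(prompt)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
        result = subprocess.run([sys.executable, f.name],
                                capture_output=True, text=True,
                                timeout=120)            # mirrors the 120 s limit
        if result.returncode == 0:
            return result.stdout                        # success: program output
        # Feed the traceback back so the model can repair its own code.
        prompt = (f"The following code failed:\n{code}\n"
                  f"Error:\n{result.stderr}\nPlease fix it.")
    raise RuntimeError("Could not produce working code.")
```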

Note that the code interpreter executes the source code on OpenAI’s servers, not in the local environment. This leads to restrictions on the size of the uploaded data, as well as a very stringent time limit of 120 s for code execution. Given this, it becomes clear what developers truly desire: the integration of this feature into their local development environment, such as VSCode, or into a cloud service, such as AWS, GCP, or Azure, without any restrictions on data size or execution time. This leans more towards the direction of projects like AutoGPT or GPT Engineer. It is likely only a matter of days, weeks, or months before such functionality becomes widely available. It is also probable that complete access to your code repository will be enabled, first through a vector-database solution and, after some time, maybe by including the entire repository within prompts, whose sizes are currently increasing dramatically (as exemplified by LongNet; since this requires retraining of the LLM, such solutions cannot be expected before GPT-4.5 or GPT-5).

For testing, try e.g. the following prompts:

  • What is the current time?
  • Plot the graphs of sin(x) and cos(x) in a single graph
  • Make a QR-code of my contact information: Stephan Seeger; Homepage: domain-seeger.de

or after uploading a data set (e.g. from Kaggle)

  • Explain the dataset.
  • Show 4 different ways of displaying the data visually.

Before, such functionality was only available via the Notable plugin or via the open-source implementation GPT-Code-UI on GitHub.

Microsoft scales Transformer sequence length to 1 billion tokens

LongNet, a new Transformer variant introduced in recent research by Microsoft, has successfully scaled sequence lengths to over 1 billion tokens without compromising shorter sequence performance. Its key innovation, dilated attention, allows an exponential expansion of the attentive field with growing distance. The model exhibits linear computational complexity and logarithmic token dependency, while also demonstrating strong performance on long-sequence modeling and general language tasks.
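
To illustrate the core idea, below is a toy NumPy sketch of dilated attention for a single (segment length, dilation) pair. The full method mixes several such pairs with exponentially growing segment lengths and dilation rates so that every position is covered; this sketch is illustrative only, not the paper’s implementation:

```python
# Toy sketch of dilated attention for a single (segment length w,
# dilation r) pair, using plain NumPy. LongNet mixes several such pairs
# with exponentially growing w and r so that every position is covered.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dilated_attention(q, k, v, w=8, r=2):
    """q, k, v: arrays of shape (seq_len, d); seq_len must be a multiple of w."""
    n, d = q.shape
    out = np.zeros_like(v)
    for start in range(0, n, w):                 # split the sequence into segments
        idx = np.arange(start, start + w, r)     # keep every r-th position
        scores = q[idx] @ k[idx].T / np.sqrt(d)  # attention within the sparsified
        out[idx] = softmax(scores) @ v[idx]      # segment: cost (w/r)^2 per segment
    return out

x = np.random.randn(32, 16)
print(dilated_attention(x, x, x).shape)  # (32, 16); cost grows linearly in seq_len
```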
