💬 THE CHATBOT RACE • WATERMARKED LANGUAGE MODELS • AI ANTIBODY TESTS | V19
This year will be an incredibly exciting one for AI, as it continues to touch everything around us in ever more powerful ways. From generative visual models to large language models and beyond – I can’t wait to share the latest inspiring progress with you!
💬 THE CHATBOT RACE IS ON
↗ Independent
It took ChatGPT just five days to reach a million users – a number that makes other viral successes look like snails.
Now DeepMind has announced that it is considering releasing a competitor to ChatGPT this year. According to DeepMind founder Demis Hassabis, its chatbot Sparrow can do things ChatGPT struggles with, such as citing real sources.
Even though Sparrow is actually a couple of weeks older than ChatGPT, it has yet to be released to the public. ChatGPT’s release will be a tough act to follow, as OpenAI has really figured out how to turn large language models into viral products.


Oh, and China’s search giant Baidu is apparently joining the chatbot party too…
💧 WATERMARKING LARGE LANGUAGE MODELS
Is there a way to reliably tell an AI-generated text from one written by a human? The more widely available tools like Sparrow and ChatGPT become, the more crucial this question will be.
CNET, for example, started publishing AI-generated stories without telling its readers – which (rather predictably) quickly backfired.
Researchers are racing to address this question. In Input v18 we discussed GPTZero. This week, OpenAI itself announced a classifier meant to spot AI-generated text (although its reported performance is quite limited).
Another promising technique recently presented by researchers is embedding a watermark into the language model itself. Such a watermark introduces statistical anomalies, imperceptible to humans, into the text the model generates, which makes it possible to spot these texts in the wild by scanning for those anomalies.
Early studies show that this enables the detection of AI-generated text with near certainty.
🧪 AI ANTIBODY TESTS
↗ NEO.LIFE
Could AI help us understand our own immune system? The very complexity that prevents us from fully understanding how it works turns out to be another perfect problem for advanced machine learning.
Serimmune, a California-based biotech company, is using AI to improve the way we test for antibodies.
This will be a tricky one to explain in a few sentences, but here we go:
Everybody will be familiar with COVID-19 antigen tests by now. These tests are designed to chemically react to a SINGLE type of antigen – in the case of COVID-19, one connected to the virus’s infamous spike protein. With this approach, we have to design a bespoke test for each disease we want to detect, which makes it hard to adapt and scale.
Serimmune, in contrast, looks at the full ‘epitope repertoire’: millions of tiny peptide fragments that represent the antibodies present in a person’s body. Based on this repertoire, machine learning then predicts the likelihood of infection with a number of different diseases.
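The shape of that idea can be shown with a deliberately simplified sketch. Nothing below is Serimmune’s actual method: the motifs, the synthetic ‘patients’, and the tiny perceptron classifier are all invented stand-ins. The point is only the pipeline – turn a repertoire of peptide fragments into a feature vector, then let a model learn which patterns signal a past infection.

```python
import random

# Hypothetical peptide motifs we can count in a repertoire (invented for this demo).
MOTIFS = ["ACDE", "FGHI", "KLMN", "PQRS", "TVWY", "GGSS"]

def featurize(repertoire: list[str]) -> list[float]:
    """Count how many sampled peptides contain each known motif."""
    return [sum(m in pep for pep in repertoire) for m in MOTIFS]

def train_perceptron(data, epochs=50, lr=0.1):
    """Minimal perceptron; data is a list of (repertoire, label) pairs."""
    w, b = [0.0] * len(MOTIFS), 0.0
    for _ in range(epochs):
        for repertoire, label in data:
            x = featurize(repertoire)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred  # 0 when correct; +/-1 nudges the weights
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, repertoire) -> int:
    x = featurize(repertoire)
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Synthetic 'patients': infected repertoires are enriched for the FGHI motif.
rng = random.Random(0)
def fake_repertoire(infected: bool) -> list[str]:
    peps = [rng.choice(MOTIFS) + rng.choice(MOTIFS) for _ in range(30)]
    if infected:
        peps += ["XFGHIX"] * 20
    return peps

data = [(fake_repertoire(i % 2 == 1), i % 2) for i in range(40)]
w, b = train_perceptron(data)
```

In reality the feature space is vastly larger (hence the need for serious machine learning rather than a perceptron), but the principle is the same: one broad readout of the repertoire, many diseases scored from it.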
This is a perfect example of how AI can help us crack problems that have traditionally been out of our reach just because of their sheer complexity.
A fitting analogy for this would be Balaji’s ‘prime number maze’.
Let’s use AI to escape this maze.
The Input is a newsletter made with 🖤 by Nice Outside on planet earth. If you have feedback, are interested in geeking out about any of the things mentioned above or just want to jam on an idea, feel free to reach out to max@no.studio.