Fireside Chat with Peter Welinder
An informal interview with Peter Welinder, VP of Product & Partnerships at OpenAI, by Sergey Karayev. Published May 25, 2023.
Chapter Summaries
How did you get into machine learning?
- This video features a fireside chat with Peter Welinder, VP of Product & Partnerships at OpenAI
- The host, Sergey Karayev, kicks off the conversation by asking how Peter got into machine learning
- Peter started with a book on artificial intelligence in high school, went on to study physics and switched to neuroscience before focusing on computer vision and machine learning
- Peter and the host followed similar paths: both were drawn to the study of intelligence and tried neuroscience before realizing it wasn't for them
- Peter has always been fascinated by the idea of creating machines that can do everything humans can do
Early career in computer vision: Anchovi, Dropbox, Carousel
- Peter started a startup after finishing grad school
- The startup originally focused on using computer vision techniques to track animals, but pivoted to an application that organized photos by their content after seeing the improved camera capabilities of the iPhone 4
- The startup was eventually acquired by Dropbox, where Peter joined the company's machine learning and computer vision team to help make sense of the vast amount of unindexed photos on the platform
- While at Dropbox, the team created a mobile app called Carousel, which allowed for easy photo organization and was well-received by users
- Dropbox eventually de-prioritized the photo organization product, leading the team to focus on analyzing documents and improving semantic search within the platform
Transitioning from research to product at OpenAI
- Peter has always been interested in making technology useful to solve problems people have
- He was drawn to Dropbox for its potential to organize content with new deep learning techniques
- OpenAI was an interesting company with a focus on hard problems, including robotics with deep reinforcement learning
- OpenAI was focused on AGI, a super hard problem, and was a place where you could be pragmatic and focus on problem-solving rather than publishing
- When Peter joined OpenAI in 2017, they had no idea whether OpenAI would be around in a year, let alone when the work might lead to AGI
How did OpenAI converge on GPT for AI?
- OpenAI converged on "GPT-style AI" through a process of trying different techniques and seeing what worked best
- Peter discusses several past projects that involved reinforcement learning: competitive gaming and robotics
- OpenAI created a Dota 2 bot that beat world champions, trained using deep reinforcement learning
- They also got a robotic hand to solve a Rubik's Cube, trained using deep RL in simulation and with lots of data
- The language modeling work started with the discovery of a sentiment neuron in earlier models and later evolved into GPT-3, which validated scaling as a useful path forward
- Peter explains that they consolidated learnings from past projects into one big bet on language models as a way to push towards AGI
Productizing GPT: Playground, API, & ChatGPT
- Peter notes that he and his team had trouble deciding how to turn their technology into a product, considering applications such as translation systems, writing assistants, and chatbots
- They ultimately decided to release their technology as an API so that other people could build products on top of it
- They had to improve the API's performance before demoing it to hundreds of companies, and eventually found 10 launch partners
- When they released ChatGPT, they were initially unsure how successful it would be, but were surprised to see it gain over a million users within a week
Surprises from the response to ChatGPT
- Peter was initially worried the product wasn't ready, but users found it valuable for many use cases
- Users kept discovering new ways to apply it in their workflows
- Large incumbents adopted chat technology quickly, partly due to product marketing and the ease of trying it out
- ChatGPT became a good product marketing tool for what the general technology of language modeling could do
- Companies realized they would fall behind if they didn't adopt the technology, creating FOMO
ChatGPT's success: UX or capabilities?
- Peter discusses the importance of the chat interface in relation to the improved capabilities of the model
- The ability to do back-and-forth communication was available before the ChatGPT release
- The UI change was definitely part of the success
- But the availability and accessibility of the ChatGPT release was a significant change as well
AGI when?
- In response to a question about AGI timelines, Peter defines AGI as an autonomous AI system that can do economically useful work at the level of humans or beyond
- Following that definition, Peter indicates he considers it likely that we will have something close to AGI by the end of this decade
- It's possible the key breakthroughs have already happened, and that the right way of putting together existing components will yield a system that can do computer-based work at the level of humans or beyond
- We've seen during the coronavirus pandemic that much economically useful work can be done from a computer
- But still very uncertain!