
Ars Technica
On Monday, Ars Technica hosted our Ars Frontiers virtual conference. In our fifth panel, we covered "The Lightning Onset of AI—What Suddenly Changed?" The panel featured a conversation with Paige Bailey, lead product manager for Generative Models at Google DeepMind, and Haiyan Zhang, general manager of Gaming AI at Xbox, moderated by Ars Technica's AI reporter, Benj Edwards.
The panel originally streamed live, and you can watch a recording of the entire event on YouTube. The introductory part of “Lightning AI” begins at the 2:26:05 mark of the broadcast.
Ars Frontiers 2023 livestream recording.
With “AI” being a nebulous term, meaning different things in different contexts, we began the discussion by considering the definition of AI and what it meant to the panelists. Bailey said, “I like to think of AI as helping to extract patterns from data and use them to predict insights…
Zhang agreed, but coming from a video game angle, she also sees AI as a transformative creative force. For her, AI is not just about analyzing data, finding patterns, and classifying it; it is also developing capabilities in creative language, image generation, and coding. Zhang believes this transformative power of AI can elevate and inspire human inventiveness, especially in video games, which she considers "the ultimate expression of human creativity."
Next, we moved on to the panel's main question: What has changed to bring about this new era of AI? Is it all just hype, perhaps driven by the high visibility of ChatGPT, or have major technical breakthroughs brought us this new wave?
Zhang pointed to advances in AI techniques and the large amount of data now available for training: "We have seen breakthroughs in model architecture for transformer models, as well as in recurrent autoencoders, and also the availability of large datasets to train these models, and thirdly, the availability of hardware such as GPUs and TPUs to take the models and the data and train them with new compute capabilities."
Bailey echoed these sentiments, adding a notable mention of open source contributions: "We also have this vibrant community of open source tinkerers who are open-sourcing models, models like LLaMA, fine-tuning them with very high-quality instruction-tuning and RLHF datasets."
When asked to explain the importance of open source collaboration in accelerating AI development, Bailey cited the widespread use of open-source machine learning frameworks such as PyTorch, JAX, and TensorFlow. She also affirmed the importance of sharing best practices, saying, "I really do think that this machine learning community only exists because people share their ideas, their insights, and their code."
When asked about Google's plans for open source models, Bailey pointed to existing Google Research resources on GitHub and highlighted the company's partnership with Hugging Face, an online AI community. "I don't want to give away anything that might be coming down the pipe," she said.
Generative AI in game consoles, AI risk

As part of a conversation about AI hardware developments, we asked Zhang how long it would be before generative AI models could run natively on consoles. She said she was excited about the prospect and noted that a hybrid cloud-client configuration may come first: "I think it will be a combination of AI inference in the cloud working in collaboration with local, on-device inference to bring to life the best player experiences."
Bailey pointed to Meta's work on shrinking its LLaMA language model to run on mobile devices, hinting that the same path could also open up the possibility of running AI models on game consoles: "I would love to have a hyper-personalized large language model running on a mobile device, or running on my own game console, that could make a boss that is particularly hard for me to beat but might be easier for somebody else to defeat."
As a follow-up, we asked whether, if a generative AI model were running locally on a smartphone, Google would be cut out of the equation. "I think there's probably room for different options," Bailey said. "I think there should be options available for all of these things to come together meaningfully."
In discussing the societal risks posed by AI systems, such as misinformation and deepfakes, both panelists said their respective companies are committed to the responsible and ethical use of AI. "At Google, we care deeply about making sure that the models we produce are responsible and behave as ethically as possible, which includes curating the right pre-training mixture," Bailey explained.
Despite her earlier enthusiasm for open source and locally run AI models, Bailey noted that API-based AI models that run only in the cloud can be safer overall: "I think there is a lot of risk in models being put into the hands of people who don't necessarily understand or aren't aware of the risks. And that's also part of the reason why it sometimes helps to prefer APIs over open source models."
Like Bailey, Zhang discussed Microsoft's corporate approach to responsible AI, but she also spoke about ethical challenges specific to gaming, such as ensuring that AI features are inclusive and accessible.