Rise of the Machines: DeepMind AlphaCode AI’s Strong Showing in Programming Competitions


Scientists report that the AI system AlphaCode can achieve average human-level performance in programming contests.

AlphaCode – a new Artificial Intelligence (AI) system for generating computer code, developed by DeepMind – can achieve average human-level performance in programming contests, researchers report.

The development of an AI-assisted coding platform capable of creating coding programs in response to a high-level description of the problem the code needs to solve could significantly impact programmers’ productivity; it could even change the culture of programming by shifting human work to formulating problems for the AI to solve.

To date, humans have been required to code solutions to novel programming problems. Although some recent neural network models have shown impressive code-generation abilities, they still perform poorly on more complex programming tasks that require critical thinking and problem-solving skills, such as the competitive programming challenges human programmers often take part in.

Here, researchers from DeepMind present AlphaCode, an AI-assisted coding system that can achieve approximately human-level performance when solving problems from the Codeforces platform, which regularly hosts international coding competitions. Using self-supervised learning and an encoder-decoder transformer architecture, AlphaCode solved previously unseen, natural-language problems by iteratively predicting segments of code based on the preceding segments, generating millions of potential candidate solutions per problem. These candidates were then filtered, by checking that they passed the simple example test cases included with each problem, and clustered by behavior, yielding a maximum of 10 submissions per problem – all generated without any built-in knowledge about the structure of computer code.
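The filter-and-cluster step described above can be sketched in miniature. This is a hypothetical illustration, not DeepMind's implementation: the "candidate programs" are plain Python functions standing in for sampled source code, and the function name, test format, and probe inputs are all invented for the example.

```python
# Hypothetical sketch of AlphaCode-style filtering and clustering.
# Candidate "programs" here are plain Python functions standing in
# for sampled source code; names and inputs are illustrative only.

from collections import defaultdict

def filter_and_cluster(candidates, example_tests, probe_inputs, max_submissions=10):
    """Keep candidates that pass the example tests, then group the
    survivors by their outputs on extra probe inputs and return at
    most one representative per behavioral cluster."""
    # 1. Filtering: discard any candidate that fails an example test.
    survivors = [
        c for c in candidates
        if all(c(x) == expected for x, expected in example_tests)
    ]
    # 2. Clustering: candidates producing identical outputs on the
    #    probe inputs are treated as behaviorally equivalent.
    clusters = defaultdict(list)
    for c in survivors:
        signature = tuple(c(x) for x in probe_inputs)
        clusters[signature].append(c)
    # 3. Submit one representative per cluster, largest cluster first.
    ranked = sorted(clusters.values(), key=len, reverse=True)
    return [group[0] for group in ranked[:max_submissions]]

# Toy problem: return the square of n.
candidates = [
    lambda n: n * n,   # correct
    lambda n: n ** 2,  # correct, behaviorally identical to the above
    lambda n: n + n,   # wrong in general, but passes the test n=2 -> 4
    lambda n: 0,       # wrong, filtered out
]
example_tests = [(2, 4)]     # the sample test from the problem statement
probe_inputs = [0, 1, 3, 5]  # extra inputs used only for clustering

picks = filter_and_cluster(candidates, example_tests, probe_inputs)
print(len(picks))  # two behavioral clusters survive among three candidates
```

Clustering matters because submissions are limited: many of the millions of sampled programs pass the example tests, and grouping behaviorally identical ones lets the system spend its 10 submissions on genuinely distinct solutions.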

AlphaCode performed roughly at the level of a median human competitor when evaluated on Codeforces problems. It achieved an overall average ranking in the top 54.3% of human participants when limited to 10 submissions per problem, and 66% of the problems it solved were solved with its first submission.

“Ultimately, AlphaCode performs remarkably well on previously unseen coding challenges, regardless of the degree to which it ‘truly’ understands the task,” writes J. Zico Kolter in a Perspective that highlights the strengths and weaknesses of AlphaCode.

Reference: “Competition-level code generation with AlphaCode” by Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d’Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel J. Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu and Oriol Vinyals, 8 December 2022, Science.
DOI: 10.1126/science.abq1158
