OpenAI Codex is one of the most impressive deep learning models ever created. Released a few months ago, Codex generates code from natural language descriptions. The model is proficient in more than a dozen programming languages and can produce code for fairly complex instructions. The research behind Codex is impressive, but even more impressive is the machine learning (ML) engineering work required to develop such a model. Consider the challenges of training and testing a model that generates code. Codex is the model that powers GitHub Copilot.
From GPT-3 to Codex
GPT-3’s main skill is generating natural language in response to a natural language prompt, meaning the only way it affects the world is through the mind of the reader. OpenAI Codex has much of the natural language understanding of GPT-3, but it produces working code—meaning you can issue commands in English to any piece of software with an API.
Once a programmer knows what to build, the act of writing code can be thought of as (1) breaking a problem down into simpler problems, and (2) mapping those simple problems to existing code (libraries, APIs, or functions). The latter activity is probably the least fun part of programming (and the highest barrier to entry), and it is where OpenAI Codex excels most.
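To make these two activities concrete, here is a small illustration (the task and names are my own, not from the source): counting word frequencies in a piece of text can be broken into simple subproblems, each of which maps directly onto code that already exists in Python's standard library.

```python
from collections import Counter

def word_frequencies(text: str) -> Counter:
    """Decompose the task into subproblems, each mapped to existing code."""
    # Subproblem 1: normalize case -> maps to str.lower
    normalized = text.lower()
    # Subproblem 2: split into words -> maps to str.split
    words = normalized.split()
    # Subproblem 3: count occurrences -> maps to collections.Counter
    return Counter(words)

print(word_frequencies("the cat and the hat"))
```

A programmer's value-add here is the decomposition; the mapping step is exactly the rote lookup work that Codex automates.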
Because OpenAI Codex is a general-purpose programming model, it can be applied to essentially any programming task (though results may vary). It has been successfully used for transpilation, explaining code, and refactoring code.
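As a sketch of how a transpilation task might be issued through the API, consider the snippet below. The prompt wording, parameters, and helper names are my own assumptions, and `code-davinci-002` is the Codex model name from the period this article describes; treat this as an illustration, not the article's implementation.

```python
def build_transpile_prompt(source_code: str, src_lang: str, dst_lang: str) -> str:
    """Compose a natural-language instruction asking Codex to translate code."""
    return (
        f"# Translate the following {src_lang} code to {dst_lang}.\n"
        f"# {src_lang}:\n{source_code}\n"
        f"# {dst_lang}:\n"
    )

def transpile(source_code: str, src_lang: str = "Python", dst_lang: str = "Ruby") -> str:
    # Deferred import so the sketch can be read/run without the package installed.
    import openai  # assumes the openai package and an OPENAI_API_KEY are available
    # Hypothetical call; model name and sampling parameters are assumptions.
    response = openai.Completion.create(
        model="code-davinci-002",
        prompt=build_transpile_prompt(source_code, src_lang, dst_lang),
        max_tokens=256,
        temperature=0,
    )
    return response["choices"][0]["text"]
```

The same prompt-plus-completion pattern covers the other uses mentioned above: "explain this code" or "refactor this function" are just different instructions in the prompt.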
OpenAI Codex empowers computers to better understand people’s intent, which can empower everyone to do more with computers.
Converting Python to Ruby with OpenAI Codex
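The original demo is not reproduced here. As an illustration of the kind of translation involved, here is a small Python function of the sort one might hand to Codex, followed (in comments) by a hand-written Ruby equivalent of the same logic; the Ruby is mine, not actual Codex output.

```python
# Python input given to Codex:
def fizzbuzz(n):
    result = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            result.append("FizzBuzz")
        elif i % 3 == 0:
            result.append("Fizz")
        elif i % 5 == 0:
            result.append("Buzz")
        else:
            result.append(str(i))
    return result

# A Ruby translation of the same logic (hand-written for illustration):
#
# def fizzbuzz(n)
#   (1..n).map do |i|
#     if i % 15 == 0 then "FizzBuzz"
#     elsif i % 3 == 0 then "Fizz"
#     elsif i % 5 == 0 then "Buzz"
#     else i.to_s
#     end
#   end
# end

print(fizzbuzz(15))
```

Note that a faithful translation is not line-by-line: idiomatic Ruby replaces the explicit accumulator list with `map` over a range, which is exactly the kind of idiom mapping that makes transpilation harder than token substitution.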
Reference - https://openai.com/