Developer: GitHub, OpenAI
Operating system: Microsoft Windows, Linux, macOS, Web
Website: copilot.github.com
Latest release: 1.7.4421
GitHub Copilot is a code completion tool developed by GitHub and OpenAI that assists users of Visual Studio Code, Visual Studio, Neovim, and JetBrains integrated development environments (IDEs) by autocompleting code.[1] Currently available by subscription to individual developers and to businesses, the generative artificial intelligence software was first announced by GitHub on 29 June 2021, and works best for users coding in Python, JavaScript, TypeScript, Ruby, and Go.[2] In March 2023, GitHub announced plans for "Copilot X", which would incorporate a chatbot based on GPT-4, as well as support for voice commands, into Copilot.[3]
On June 29, 2021, GitHub announced GitHub Copilot as a technical preview in the Visual Studio Code development environment.[4] On October 27, 2021, GitHub released the GitHub Copilot Neovim plugin as a public repository, and on October 29, 2021, Copilot was released as a plugin on the JetBrains Marketplace.[5] GitHub announced Copilot's availability for the Visual Studio 2022 IDE on March 29, 2022.[6] On June 21, 2022, GitHub announced that Copilot had left "technical preview" and was available as a subscription-based service for individual developers.[7]
GitHub Copilot is the evolution of the 'Bing Code Search' plugin for Visual Studio 2013, which was a Microsoft Research project released in February 2014.[8] This plugin integrated with various sources, including MSDN and Stack Overflow, to provide high-quality contextually relevant code snippets in response to natural language queries.[9]
When provided with a programming problem in natural language, Copilot is capable of generating solution code.[10] It is also able to describe input code in English and translate code between programming languages.
According to its website, GitHub Copilot includes assistive features for programmers, such as converting code comments to runnable code and autocompleting chunks of code, repetitive sections, and entire methods or functions.[2] [11] GitHub reports that Copilot’s autocomplete is accurate roughly half of the time: given a set of Python function headers, for example, Copilot correctly completed the rest of the function body 43% of the time on the first attempt and 57% of the time within ten attempts.
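The comment-to-code workflow can be sketched with a hypothetical Python example: a developer types a descriptive comment and a function signature, and the tool proposes a body. The suggestion shown below is illustrative, not actual Copilot output:

```python
# A developer might type only the comment and the "def" line below;
# the function body is the kind of completion the tool would propose
# (illustrative, not real Copilot output).

# Return the n-th Fibonacci number using iteration.
def fibonacci(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibonacci(10))  # → 55
```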
GitHub states that Copilot’s features allow programmers to navigate unfamiliar coding frameworks and languages by reducing the amount of time users spend reading documentation.
GitHub Copilot was initially powered by the OpenAI Codex,[12] a modified, production version of the Generative Pre-trained Transformer 3 (GPT-3), a language model that uses deep learning to produce human-like text.[13] The Codex model is additionally trained on gigabytes of source code in a dozen programming languages.
Copilot’s OpenAI Codex is trained on a selection of the English language, public GitHub repositories, and other publicly available source code.[2] This includes a filtered dataset of 159 gigabytes of Python code sourced from 54 million public GitHub repositories.[14]
OpenAI’s GPT-3 is licensed exclusively to Microsoft, GitHub’s parent company.[15]
In November 2023, Copilot Chat was updated to use OpenAI's GPT-4 model.[16]
Since Copilot's release, there have been concerns with its security and educational impact, as well as licensing controversy surrounding the code it produces.[17]
While GitHub CEO Nat Friedman stated in June 2021 that "training ML systems on public data is fair use",[18] a class-action lawsuit filed in November 2022 called this "pure speculation", asserting that "no Court has considered the question of whether 'training ML systems on public data is fair use.'"[19] The lawsuit, filed by the Joseph Saveri Law Firm, LLP, challenges the legality of Copilot on several claims, ranging from breach of contract with GitHub's users to breach of privacy under the California Consumer Privacy Act (CCPA) for sharing personally identifiable information (PII).[20] [19]
GitHub admits that a small proportion of the tool's output may be copied verbatim, which has led to fears that the output code is insufficiently transformative to be classified as fair use and may infringe on the copyright of the original owner.[21] In June 2022, the Software Freedom Conservancy announced it would end all uses of GitHub in its own projects,[22] accusing Copilot of ignoring code licenses used in training data.[23] In a customer-support message, GitHub stated that "training machine learning models on publicly available data is considered fair use across the machine learning community", but the class action lawsuit called this "false" and additionally noted that "regardless of this concept's level of acceptance in 'the machine learning community,' under Federal law, it is illegal".
On July 28, 2021, the Free Software Foundation (FSF) published a funded call for white papers on philosophical and legal questions around Copilot.[24] Donald Robertson, the Licensing and Compliance Manager of the FSF, stated that "Copilot raises many [...] questions which require deeper examination." On February 24, 2022, the FSF announced it had received 22 papers on the subject and had selected five to highlight through an anonymous review process.[25]
The Copilot service is cloud-based and requires continuous communication with the GitHub Copilot servers.[26] This opaque architecture has fueled concerns over telemetry and data mining of individual keystrokes.[27] [28]
A paper accepted for publication at the IEEE Symposium on Security and Privacy in 2022 assessed the security of code generated by Copilot for MITRE’s 25 most dangerous Common Weakness Enumerations (CWEs), such as cross-site scripting and path traversal, across 89 different scenarios and 1,689 programs. The assessment covered three axes: diversity of weaknesses (the ability to respond to scenarios that may lead to different code weaknesses), diversity of prompts (the ability to respond to the same code weakness under subtle prompt variations), and diversity of domains (the ability to generate register-transfer-level hardware specifications in Verilog). Across these axes and multiple languages, the study found that 39.33% of top suggestions and 40.73% of total suggestions led to code vulnerabilities. It also found that small, non-semantic changes, such as edits to comments, could affect the safety of the generated code.
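Path traversal (CWE-22), one of the weakness classes in the study's scope, can be illustrated in Python: the unsafe variant below joins user input directly into a filesystem path, while the checked variant rejects inputs that escape the base directory. This is a minimal sketch of the weakness class, not code from the study, and the directory name is hypothetical:

```python
import os

BASE_DIR = "/srv/app/uploads"  # hypothetical document root

def unsafe_read(filename: str) -> str:
    # Vulnerable: an input like "../../etc/passwd" escapes
    # BASE_DIR entirely (CWE-22, path traversal).
    with open(os.path.join(BASE_DIR, filename)) as f:
        return f.read()

def is_safe_path(filename: str) -> bool:
    # Mitigation: resolve the combined path and verify that it
    # still lies inside the base directory.
    target = os.path.realpath(os.path.join(BASE_DIR, filename))
    return target.startswith(os.path.realpath(BASE_DIR) + os.sep)

print(is_safe_path("report.txt"))        # True
print(is_safe_path("../../etc/passwd"))  # False
```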
A February 2022 paper published by the Association for Computing Machinery evaluated the impact that Codex, the technology underlying GitHub Copilot, may have on the education of novice programmers. The study used assessment questions from an introductory programming class at the University of Auckland and compared Codex’s responses with student performance. Researchers found that Codex, on average, performed better than most students; however, its performance decreased on questions that restricted which language features could be used in the solution (e.g., conditionals, collections, and loops). Given this type of problem, "only two of [Codex’s] 10 solutions produced the correct output, but both [...] violated [the] constraint." The paper concludes that Codex may be useful in offering learners a variety of solutions, but may also lead to over-reliance and plagiarism.