GitHub Copilot is an AI-powered “pair programmer” that suggests code as you write it, and its arrival has prompted plenty of talk that machine-written software is just around the corner. Copilot is a useful tool, but it’s not the end of programming; if anything, it’s the beginning of a new class of assistive tooling. It still depends on a human developer to guide it, review its suggestions, and assemble the results into a coherent, working project.
Distributed as a Visual Studio Code extension, Copilot is like a vastly more powerful autocomplete that can fill in whole sections of code. It looks at what you’re writing and suggests new lines or entire self-contained functions.
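As a rough illustration of the sort of completion it offers (a hypothetical sketch, not captured Copilot output), a developer might type only the comment, signature, and docstring below and accept a suggested body along these lines:

```python
# Hypothetical prompt: the developer writes the comment, signature, and docstring,
# and Copilot proposes the loop that follows.

def parse_expenses(expenses_string: str) -> list[tuple[str, float, str]]:
    """Parse newline-separated 'date amount currency' records into tuples."""
    expenses = []
    for line in expenses_string.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        date, amount, currency = line.split(" ")
        expenses.append((date, float(amount), currency))
    return expenses
```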
Copilot’s introduction has caused some fear that it will eventually replace developers altogether. After all, if it knows what you’re writing, and can suggest what comes next, isn’t it the closest thing yet to an AI-powered developer? Here’s why that isn’t the case.
What Actually Is Copilot?
First, it’s helpful to explore what Copilot is today. The preview release is built on OpenAI Codex, an AI system developed by OpenAI, in which GitHub’s parent company Microsoft is a major investor. It’s a powerful source code analyzer compatible with dozens of popular programming languages.
Codex is meant to understand “how” people use code. It determines the context of the code you’re writing and suggests what could come next. Unlike an IDE’s autocomplete, Copilot is capable of synthesizing new output from the code it’s learned. It’s not just an index of previously seen code.
GitHub’s currently citing a few specific examples as key use cases. These include generation of common functions, automatic production of unit tests, and improved discovery of code in APIs and libraries. If you’re integrating with a common third-party API, Copilot could get you started before you’ve read the documentation or copied a boilerplate.
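To picture the unit-test use case, here’s a hypothetical sketch: given a small helper such as the slugify function below, Copilot might propose something like the test class that follows (both the helper and the tests are illustrative, not real Copilot output):

```python
import unittest

def slugify(title: str) -> str:
    """Convert a post title into a URL-friendly slug (hypothetical helper)."""
    return "-".join(title.lower().split())

# The kind of test class Copilot might suggest after seeing slugify above.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_extra_whitespace(self):
        self.assertEqual(slugify("  Spaced   Out  "), "spaced-out")

if __name__ == "__main__":
    unittest.main()
```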
The system can also automate the completion of repetitive code sections, such as an array of objects with similar properties. You can write the first few manually, then have Copilot populate the rest of the array using your example. It’s reminiscent of dragging down cell values in Excel.
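A hypothetical sketch of that pattern-completion behaviour, with the first two entries written by hand:

```python
# The developer types the first two entries manually...
currencies = [
    {"code": "USD", "symbol": "$", "name": "United States Dollar"},
    {"code": "EUR", "symbol": "€", "name": "Euro"},
    # ...and Copilot offers to continue the pattern with entries like these:
    {"code": "GBP", "symbol": "£", "name": "Pound Sterling"},
    {"code": "JPY", "symbol": "¥", "name": "Japanese Yen"},
]
```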
How Far Does Copilot Go?
At present, the answer is “not very far.” Despite buzzwords like “intelligent,” “contextual,” and “synthesizer,” Copilot still has limited insight into your true intentions and what your code needs to achieve.
Copilot only looks at your current file when computing suggestions. It won’t assess how the code’s used across your program. The AI’s interpretation of your work might be significantly different to your own and could vary on a file-by-file basis, even if the true reasoning behind the files doesn’t change.
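Here’s a hypothetical illustration of how that single-file view can mislead. Suppose files Copilot never sees store monetary amounts as integer cents; working only from the current file, a perfectly plausible suggestion might assume floating-point dollars instead:

```python
# Hypothetical billing helper. Elsewhere in the project (in files Copilot
# doesn't see), amounts are stored as integer cents.

def apply_discount(price, discount_percent):
    # A plausible single-file suggestion: assumes a float price and returns a
    # float, silently breaking the integer-cents convention used elsewhere.
    return price * (1 - discount_percent / 100)

def apply_discount_cents(price_cents: int, discount_percent: int) -> int:
    # What the wider codebase actually expects: integer cents in, integer cents out.
    return price_cents * (100 - discount_percent) // 100
```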
GitHub’s also clear that Copilot’s output is not guaranteed to be the “best” approach or even code that works. You might get security issues, lines that use old or deprecated language features, or code that simply doesn’t run or make sense. You need to audit each Copilot suggestion you use to make sure your project still compiles and runs.
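As a hypothetical example of why that audit matters, a suggestion can look perfectly reasonable while hiding a security flaw, such as a SQL query built by string interpolation:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Plausible-looking but dangerous: interpolating user input into SQL
    # leaves the query open to SQL injection.
    return conn.execute(f"SELECT * FROM users WHERE name = '{username}'").fetchone()

def find_user(conn: sqlite3.Connection, username: str):
    # What a human reviewer should insist on: a parameterised query.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchone()
```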
Copilot’s real role in the development process should now be a little clearer: it’s an assistive technology, meant to make the mundane somewhat easier, not a true automaton. Think of it as a sidekick or a navigator, not some form of omniscient developer that writes your code for you.
Copilot Doesn’t Scale
Copilot’s best when you let it help you write functions that solve common use cases. What it can’t do is understand the broader context of your codebase. Without the ability to really understand your intentions, Copilot’s scale is limited.
GitHub says it’s working on making Copilot smarter and more useful. But until it’s able to look at your entire project, not a single file, it’s unclear how its role could be further expanded. In its present state, Copilot’s essentially a glorified autocomplete. Instead of pressing tab to auto-fill standard library function names, you can accept suggestions for the functions themselves.
Solutions for abstract technical problems already abound on programming sites like Stack Overflow. Copilot cuts out the time needed to search for a question, review the answers, and copy-and-paste the code. However, you’re left to work out how to incorporate the solution into your overall system, after you’ve checked Copilot’s suggestion actually works.
Copilot’s not really programming at all. It looks at what you’ve written, infers what you might be trying to do, and tries to assemble something suitable from its learned solutions. Copilot works for you, not the other way around. It’s incapable of thinking creatively, suggesting a high-level architecture, or producing a cohesive system. Each suggestion is fully self-contained and derived solely from the code immediately around it in the source file.
By GitHub’s own admission, Copilot really is dependent on you. The tool works best when your code base is logically organized into small functions with clear typings, comments, and doc blocks. If you want the best results, you’ll need to lead Copilot along by writing high-quality code yourself.
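A hypothetical before-and-after sketch of what that guidance looks like in practice: a vague, untyped prompt gives the model little to work with, while a descriptive name, type hints, and a docstring give it far more context to draw on:

```python
from datetime import date

# Vague prompt, little context for Copilot to work with:
# def process(data): ...

# A more descriptive prompt: clear name, type hints, and a docstring.
def filter_overdue_invoices(invoices: list[dict], today: date) -> list[dict]:
    """Return invoices whose 'due_date' is before `today` and are not yet paid."""
    return [
        inv for inv in invoices
        if inv["due_date"] < today and not inv.get("paid", False)
    ]
```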
What About Licensing?
Copilot has been trained using public GitHub projects with a wide variety of licenses. According to GitHub, this represents fair use of those projects. What’s less clear is where your responsibilities lie should you accept a Copilot suggestion.
GitHub says Copilot’s output “belongs to you” and “you are responsible for it.” It explicitly states you don’t need to credit Copilot or any other author if you use a suggested snippet. The company’s keen to present Copilot as a “code synthesizer” that produces original output, not a search engine of indexed snippets.
Here’s where the trouble begins. Copilot still stands a chance of outputting code sections verbatim. Depending on the licenses surrounding those snippets, this could get your own project into hot water. As Copilot’s been trained on GitHub projects, you might even find personal data is injected into your source files.
These events are meant to be rare. They’re said to be more likely if the surrounding code context is weak or unclear. Examples seen so far include GPL-licensed Quake code emitted as-is (complete with profane language) and a real individual’s website text and social links showing up when Copilot thinks you’re writing an “about me” page.
The GPL and similar copyleft licenses stipulate that derivative works must be distributed under the same terms, so incorporating GPL code into a closed-source product without releasing your own source is a licensing breach. Consequently, use of Copilot has serious legal ramifications attached which you should evaluate before installing it. As Copilot does seem to emit code verbatim, without indicating the license accompanying the snippet, you could unknowingly commit copyright infringement by accepting a suggestion.
This should confirm beyond doubt that Copilot’s initial release is not going to replace a human developer. The code it emits isn’t guaranteed to be relevant, might be broken or outdated, and could even be a legal risk.
Conclusion
Copilot’s an ambitious project which has attracted a lot of discussion. The level of debate indicates many people have strong feelings about the idea. It’s been a while since a new developer tool attracted so much buzz on day one.
Copilot’s appealing because it plays to several developer frustrations. Most, if not all, programmers sense the inefficiency in writing “boilerplate” code that’s not terribly specific to their project. Taking Copilot at face value, they now have a solution that frees up more time to work on the creative aspects of their work.
Where Copilot falls down is the blanket approach GitHub’s taken to training the model. The inclusion of GPL-licensed code and the complete lack of any form of output testing are oversights that will hamper Copilot’s real-world use. It’s unclear whether GitHub’s decision to train the model using public code actually falls under fair use; there’s speculation it may not, in at least some jurisdictions.
Moreover, GitHub’s inability to verify that Copilot code actually works means developers will still need to exercise caution and review everything it writes. A big part of Copilot’s promise is in helping inexperienced developers progress, but this won’t happen if potentially buggy code is suggested and accepted.
Finally, Copilot doesn’t provide any indication of how or why its suggestions work. If it’s to truly replace human developers, it should be able to explain the workings of a solution and provide visibility into the decisions it took. Developers can’t blindly trust the machine; there will always need to be oversight and evaluation of different solutions.
Understanding the how and why is also the biggest challenge facing a developer early in their career, which limits Copilot’s usefulness as a mentoring tool. Anyone can copy source code from public projects, documentation, their peers, or Copilot, but it’s acquiring an understanding of why solutions work that moves a career forward.
Copilot in its current iteration doesn’t address this – it’s still up to you to work out what the inserted code does. Even a developer who regularly relies on Stack Overflow will end up in a better place, as they’ll be reading answers and learning the thinking behind solutions. Copilot, by contrast, is a black box; it’s tempting to treat it as a repository of perfect, ready-made suggestions, but the evidence so far shows that’s far from the case.