The Autocomplete Developer

GitHub Copilot moved out of beta.

More friction than value

My reaction after 30 minutes of usage? I disabled it. In day-to-day development I tend to know what I’m doing. When I struggle to solve a problem, it’s usually not because I don’t know what code to type. It’s because I struggle to find the right abstraction level, the right primitives / components, or the right interfaces, or to elegantly account for all cases of the business logic - my issues are about code design and thinking, not typing. But GitHub Copilot mainly addresses the latter.

Parsing multi-line autocomplete proposals that diverge from my line of thinking disrupts my flow, and verifying their correctness takes me longer than typing the equivalent code from the top of my head - and that’s in the best case, where the first proposal gets it right.
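To make that concrete, here’s a sketch of the kind of multi-line proposal I mean (invented for illustration, not an actual Copilot output):

```ts
// A hypothetical multi-line completion for a small helper.
// The body reads plausibly at a glance, but accepting it still
// means reasoning through the edge cases myself: without the
// guard, size <= 0 would loop forever - exactly the kind of
// verification that takes longer than typing it from scratch.
function chunk<T>(items: T[], size: number): T[][] {
  if (size <= 0) throw new RangeError("size must be positive");
  const result: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    result.push(items.slice(i, i + size));
  }
  return result;
}
```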

Copilot as it is today disrupts and annoys me more than it helps. I could configure it to only trigger when I’m stuck. But I feel even that would treat a symptom, not the cause.

I’m not surprised by this finding - because in my experience, developers spend most of their time on the work that precedes typing code, not on the typing itself. The value is in the thought. Spending less time typing will increase productivity, but it’s a micro-optimization compared to improving one’s thought process / problem-solving ability.1

(Un)learn coding?

Coding will change over the years, and having an AI do the grunt work seems like a plausible pathway.2 What I’ve seen so far does not feel like it increases my productivity in languages I already know, though. At the same time, it’s impactful enough that I can well imagine it helping someone learning a new language get working output quickly from the get-go.

The potential downside I fear is that relying on the computer to write the trivial 50% of the code might erode my ability to deliver the remaining 50%: day-to-day familiarity with code is important to retain and grow the skill.

I well remember a candidate applying for a senior JS position at Bird who told me they felt blind without TypeScript during the coding challenge. The code in question was the return value of a fetch call: a flat array of objects with three properties.
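For scale, the data in question was roughly this simple - a sketch with invented endpoint and property names:

```ts
// Roughly what the challenge boiled down to (the endpoint and
// property names are made up for illustration): a fetch call
// returning a flat array of objects with three properties -
// a shape simple enough to hold in one's head without tooling.
async function fetchScooters() {
  const response = await fetch("https://example.com/api/scooters");
  // e.g. [{ id: "s1", batteryLevel: 0.8, lat: 52.37 }, ...]
  return response.json();
}
```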

This is just a single anecdote about an unsuited candidate, but it reminded me that my expectation of a senior developer is that they can implement working code even with little tooling. It also reminds me of my dad’s opinion regarding advanced math: one should be able to solve every problem with nothing but pen and paper, to prove proper understanding and to stay in control of computer-based calculations.

The truth is that as new tools appear that abstract away low-level problems, our capabilities and productivity increase. At the same time, even experienced practitioners grow less accustomed to solving the low-level problems themselves. And that’s mostly a fine shortcut until some “apocalypse” strikes and intimate knowledge of the basics is required again.

Morals and legality are murky

  1. Copilot is trained on open source software, but doesn’t provide any attribution. Code proposed by Copilot might come with license obligations, but since those are never displayed, using Copilot code might infringe license agreements.
  2. Proposals can contain PII and tokens from projects that pushed such information to their repositories.
  3. Proposed code can be buggy and contain (un)known security flaws.3

Of these, the first two seem like GitHub’s job to get right. The third is the software developer’s responsibility to check, just as with any code from the internet, like answers from StackOverflow.

If I were to run a company, the first point would make me seriously consider avoiding tools like Copilot, just to be on the safe side (even though consequences are unlikely).

But the first point is primarily a punch in the gut of any OSS project using GitHub. GitHub didn’t bother to ask for permission to use the code as training data - it would have been trivial for GitHub to offer an opt-in or opt-out UI in their product, but they didn’t. That’s in stark contrast to GitHub’s public communication as the place to be for OSS. In this case, GitHub prioritized building a closed source paid product over its relationship with and duties towards the OSS ecosystem. Given GitHub’s active marketing and the role it has attained, GitHub needs to do better. It’s only logical that Copilot is triggering a lawsuit.

Footnotes

  1. By stating this, I admit that most of the optimizations developers make to their computer setups are wasted time. Usually we’re lying to ourselves when we claim that messing with our setup increases our productivity by anything more than a rounding error. The real reason is that we do it for fun. It’s a way to nerd out, feel good about ourselves, and feel like we belong to the tribe. Yes, I’ll continue to be ridiculous and buy ZSA Moonlanders as long as I wish, and I’ll claim they’re indispensable, while knowing better, thank you.

  2. Related: in this post I list the difficulty AIs have in mashing together context from many sources and extracting a shared intent. This appears to be a general challenge for AIs. Lots of fun has been made of how ChatGPT swaps cause and effect with the fervor of conviction.

  3. There have been tweets showcasing individual cases, and recently also a Stanford study.