by Rico Trebeljahr

OpenAI's Codex

The only way out is ideas

A few days ago I watched the OpenAI Codex demo, and I am still thinking about what to do, because it means I am going to lose my job as a software engineer pretty soon. This post is an exploration of what I might do to prolong that period a little further into the future.

OpenAI what?

OpenAI's Codex is a computer program that knows how to write computer programs. You should check out their first demo, and maybe this video, which shows that it might not take long before Codex can write itself...

Codex is still in beta and has some very rough edges. As of now it is not as capable as a "good" human developer. It solves 37% of the problems on the "coding" tasks metric the OpenAI team developed... But that's already bonanza crazy anyway.

Considering that it has only been around for a short time, and that the OpenAI team makes giant progress every year, it is at most a few years until it surpasses human-level coding ability. Essentially, Codex interprets commands given in plain English and uses them to write a computer program that fulfills your request. And it will be smart enough to decipher your actual intent, getting what you mean when you say "write this" or "write that"!

The last Programming Language

It will transform computer programming from an "arcane" form of art and wizardry into something much more powerful and easier to use.

You won't need to know the exact language the computer uses anymore, or how computers work in general, to make the computer do what you want, because Codex will make the computer understand plain English. Writing computer programs in English has been a dream of programmers for a long time. Languages have slowly moved to "higher" levels, away from the 0s and 1s the computer knows and the machine instructions the CPU uses, toward things that somewhat resemble English.
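To make the idea of "higher" levels concrete, here is a toy illustration of my own (not from the Codex demo): the same task, summing a list of numbers, written once in a deliberately low-level, step-by-step style and once as a single high-level expression. Each step up the ladder hides more of the machinery.

```python
# The same task at two levels of abstraction: summing a list of numbers.

numbers = [3, 1, 4, 1, 5]

# "Low-level" style: explicit index bookkeeping, the kind of stepwise
# work that machine instructions force on you.
total = 0
i = 0
while i < len(numbers):
    total = total + numbers[i]
    i = i + 1

# "High-level" style: state what you want, not how to iterate.
also_total = sum(numbers)

assert total == also_total == 14
```

Codex, the way I read the demo, is simply one more rung on this same ladder: the "expression" becomes an English sentence.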

Now Codex will lift that to a much higher level: the last level. It will abstract away all the nitty-gritty details of the "high"-level programming languages of the past, leaving people with only one requirement to use Codex and program whatever they want. And that requirement is something everybody reading this text already has: knowing the English language.

In a way, that's what Codex is: an awesomely clever compiler that can infer intent from English requirements and translate them into a high-level program that fulfills what the person asked for.

The job of that compiler... it sounds familiar. Because that's pretty much exactly what my job (and that of every other programmer out there) is at the moment. As software developers/engineers/whatever, this is exactly what we do: we translate plain-English requirements into high-level programming language code that solves the problem we face. When Codex can do that on its own, well, there is no more need for any programmers; or rather, everybody who knows English suddenly turns into one!
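To make that translation step concrete, here is a hypothetical example of the job being described: the plain-English requirement as a comment, followed by the kind of code a human developer (or, eventually, Codex) might produce from it. The function name and the requirement itself are my own invention for illustration.

```python
# Requirement, in plain English:
# "Given a list of order amounts, return the total of all orders
#  above 100, with a 10% discount applied to that total."

def discounted_large_order_total(amounts):
    """Sum all amounts above 100, then apply a 10% discount."""
    large_orders = [a for a in amounts if a > 100]
    return sum(large_orders) * 0.9

# The programmer's job is exactly the step from the English
# sentence above to the function body: making vague words precise.
print(discounted_large_order_total([50, 150, 250]))  # 360.0
```

The interesting part is not the code, which is trivial, but the translation: deciding that "above 100" excludes 100 itself, that the discount applies to the total and not each order, and so on. That disambiguation is what Codex would have to get right.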

So that, in a nutshell, is what Codex will do: enable everybody who knows the English language to "learn" how to code instantly.

Programming -> democratized

What does all of this mean? It means that soon everybody who can read this text and write English can program, period. And even better: everybody will get better at programming from now on, without learning anything new. OpenAI will keep developing the tool, so as Codex gets better at solving problems over time, so does everyone who uses it.

So... that's why I am going to lose my job pretty soon: I cannot compete with an AI that will soon code better than I can, and soon better than everybody else on the planet. An AI that enables everybody who knows plain English to write rock-solid computer programs that do exactly what they want and only that, without bugs and with high performance.

Learning Programming Stuff becomes obsolete

A consequence of this is that spending a lot of time learning something that seems valuable now will be "completely" wasted time and effort (at least from a career perspective), since the skills learned will become outdated rapidly, within the next years, months, or even weeks. And that is something that scares the shit out of me.

It means that learning new programming languages, or how to develop crypto applications, or shader coding, or how to write game engines, or any other such thing is going to be a waste of time if I only do it to stay competitive in the software engineering industry. There is nothing I can do about that, so I might as well accept it and pivot to another activity.

But if coding can be automated - then - what's left?

And the answer to that question is pretty grim, because there is no answer. Everything from here on out can be automated, and people will do so sooner rather than later wherever there are economic incentives. But I think there is one exception that will take more time than all the others: jobs that involve the creation and sifting of ideas.

In a way, what OpenAI is building is an awesome tool for expressing human ideas within digital reality (code, text, images, whatever). This tool is extremely easy to use (the only requirement is knowing English) and available to everyone.

But... this tooling (for now at least) leaves one aspect of the human intellect un-automated: creativity. OpenAI has already built things that can generate ideas (GANs and auto-encoders), but sifting through them and choosing only the ideas that "work" is still something humans need to do. The AI has no intent to program anything yet; it cannot dream up a new startup, or something utterly new that has never existed before. It cannot "innovate", at least for now. And so that's what I could be doing in the future.
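The generate-then-sift loop described above can be sketched in a few lines of Python. This is a toy stand-in of my own, not how GANs actually work: a "generator" proposes random candidates in bulk, and a human-supplied filter keeps only the ones that "work". The hard, still-human part is writing the filter, that is, deciding what counts as a good idea.

```python
import random

# Toy stand-in for an idea generator: proposes random candidate "ideas",
# here just (a, b) pairs of small integers.
def generate_candidates(n, rng):
    return [(rng.randint(0, 20), rng.randint(0, 20)) for _ in range(n)]

# The human-supplied sieve: only candidates that actually "work" are kept.
# Here, arbitrarily, an idea "works" if the pair sums to exactly 20.
def works(candidate):
    a, b = candidate
    return a + b == 20

rng = random.Random(42)  # seeded for reproducibility
candidates = generate_candidates(1000, rng)
good_ideas = [c for c in candidates if works(c)]

# The generator produces in bulk; the filter does the real selecting.
print(len(candidates), len(good_ideas))
```

The asymmetry is the point: generation is cheap and mechanical, while judging which outputs are worth keeping encodes taste and intent. That judgment is the part the post argues stays human, for now.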

At least for the medium-term future, that's the way out I can see. I can still do that, create and sift through ideas, for maybe the next five to ten years, until that too is automated away by the next OpenAI neural network...

A very weird future.

When Codex can generate and implement its own ideas and develop its own notion of "where to go", the world might collapse into a singularity anyway. That's the nature of the exponential progress that becomes possible when a machine can generate its own completely novel ideas, implement them at amazing speed, and upgrade itself over and over again in an upward spiral of exponential growth. At that point, worrying about a job, or almost anything else, becomes kind of meaningless, because we will have created our version of something that might deserve the label "god".

And even if that doesn't happen, there will definitely be enough tools like Codex around that can do everything better than humans can, including science and engineering and even the generation of ideas. Maybe the whole thing doesn't explode into a singularity where everything changes, but the world would still look vastly different.

Two Options

Complete automation like this would result in an abundance that is at least theoretically available to everybody. With this abundance, I think, there are essentially two options. The first: the spoils of automation are divided somewhat fairly. In other words, everybody gets everything necessary for survival, and infinite leisure time, for free, or there is some sort of universal basic "income" that is enough to "buy" everything one needs.

The second: the spoils aren't divided fairly, and access is restricted by a crazy elite trying to keep everybody else poor and wretched so that they can enjoy their status symbols. In that case, one needs enough capital to own enough of the production machinery to live off it and be part of that elite, while using the leisure time to push from within to dismantle the elite and give everybody their fair share of that abundance. This, in a way, would be the final revolution Marx had been thinking about, where production capabilities are shared by everyone, and the planning and execution are handled automatically by an extremely competent AI.

This second, darker future is why I am even thinking about working for the next years until that point comes. I do not want to end up at the bottom of that wealth curve, among the poor and wretched.

Because once the point of complete automation comes, then however hard you might work, you can no longer change anything about your economic status. By that time, everything you might be able to do can already be done better without you. There is no need for you anymore; you cannot create value, and therefore wealth can no longer be gained, except through returns from owning a share of the production machinery.

The Kantian imperative?

All of this makes me feel very gloomy, almost doomsday-like. In the sense that if everything comes to this point eventually anyway, without me doing anything for it, then why am I not spending all of my time not working at all, but enjoying all the life I can, while the world is still the way it is now?

Why do I bother with learning, with reading, with becoming good at things? The thought crossing my mind is to stop competing entirely, take the money I have now, and bridge the gap until complete automation with things I enjoy doing... There are enough people out there; why should I bother helping to fix the problem and create a world of complete abundance if I am already sure this will happen within my lifetime anyway?

I think this is the same as the fundamental problem of cooperation, which leads to the tragedy of the commons, and this is where the genius of the Kantian imperative lies. The idea of asking "if everybody acted as you do, how would the world look?" is a test against the tragedy of the commons: using rationality to weed suboptimal but evolutionarily stable strategies out of one's own behavior.

So that's the answer then: if everybody were to think like that, everybody would start to travel and do whatever the fuck they want, leading to a collapsing society without values beyond one's own pleasure, and that ideal dream of utopian abundance would be delayed until tomorrow, forever more. In other words, it would never happen, because all the individuals, thinking it would happen without them, would give up their responsibility entirely, waiting on the sidelines for somebody else to come around and do it.

And that's obviously bad, and it is the second reason why I am still worrying about working instead of already retiring with some good books on a beach somewhere in the middle of nowhere...

The human purpose crisis

The last problem I see with all of this is the loss of purpose in people's lives. People have defined and identified themselves by what they do for centuries now. And if that gets taken away from them, the question of what is left becomes very loud and clear. In other words: if there are no problems to solve, nothing to do, only infinite leisure time, then what? How can you stretch your mind and exert creative power, freedom, and responsibility if everything is already taken care of by machines? Then what is your purpose? And how do you decide to spend your time? If those questions cannot be sufficiently answered, then we should start worrying about whether or not we want this future at all. In other words, whether we might not want to go all the way with automation, leaving some aspects of work, maybe the creation of ideas, still up to us humans...
