OpenAI Dev Day, GitHub Universe, and Sam Altman leaving OpenAI – Live and Learn #29
Welcome to this edition of Live and Learn. The last weeks were an absolute bonanza in terms of things happening in the world of AI: the Presidential Executive Order, OpenAI's Dev Day, the GitHub Universe conference, the Snapdragon Summit, an AI automatically creating valid legal contracts, Runway's Gen-2 update, Luma's text-to-3D model, xAI releasing Grok, and OpenAI firing Sam Altman.
Frankly, it has been a little overwhelming. The pace of development and the direction everything is going in feel insane. People are building magical tools right now, and everything seems to be in constant flux. And still, what exists now is only a glimpse of what is yet to come. This edition of Live and Learn will be one of the longest yet, but I hope you enjoy it anyway.
✨ Quote ✨
What an astonishing thing a book is. It’s a flat object made from a tree with flexible parts on which are imprinted lots of funny dark squiggles. But one glance at it and you’re inside the mind of another person, maybe somebody dead for thousands of years. Across the millennia, an author is speaking clearly and silently inside your head, directly to you. Writing is perhaps the greatest of human inventions, binding together people who never knew each other, citizens of distant epochs. Books break the shackles of time. A book is proof that humans are capable of working magic.
– Carl Sagan (source)
🖇️ Links 🖇️
Hu.ma.ne AI Pin. The AI Pin by Humane has been released. It's a wearable device that brings the power of AI everywhere you go. The demos are quite wild. People who wear something like this become wizards of a sort, able to seamlessly access and use information to deal with their surroundings more efficiently. It moves the power of AI into the background, always there to help people live their lives more meaningfully, understand the world, and access ideas and information about everything when needed. There is also a TED talk about the same product that is worth watching. I wonder where computing devices and platforms like this will go next and what kind of wizardry we will be able to use in the not-so-distant future. We truly live in exciting times.
GitHub Universe Developer Conference. The whole event was epic in terms of the scale of the announcements and the quality of the presentations, and I suggest watching more of it than just the keynote. Some of the coolest presentations cover the role of AI in cybersecurity and how GitHub is using AI to enhance internal developer productivity. To me, the craziest announcement was GitHub Workspaces, which will enable the semi-automatic resolution of issues opened on GitHub: somebody reports a problem with a repo, and the AI understands enough of the context and the problem description to automatically generate a step-by-step plan to address the issue and even open a PR with suggested fixes. Basically, magic.
Snapdragon Summit. All of the AI workloads and generative AI breakthroughs need compute to enable them. AIs need to run on our devices to make experiences like virtual reality possible, and Snapdragon is inventing the new chips that make this happen. During the announcement, they showed how their new flagship chips can run generative AI models like Stable Diffusion on high-end smartphones in under a second. They also showed that they are building an Oryon chip that outperforms the M2 Max, Apple's high-end chip, both in raw compute and in power efficiency. This is simply nuts. To me, all of this is exciting: seeing how the pieces fit together and how the whole industry, on both the software and hardware sides, comes together to build real-life magic. With the innovation of all these companies, we are heading towards a future where computation and artificial intelligence are deeply woven into the fabric of our everyday lives. Many of these keynotes and announcements seem like science fiction now; the only difference is that, well... they aren't science fiction. They are real products, real breakthroughs, real technology that exists right now.
OpenAI DevDay. This developer conference eclipsed everything else that has happened in the last few weeks. OpenAI absolutely killed it with their announcements, and they are making ChatGPT better at such a rapid pace that it feels unreal. I am still wrapping my head around the fact that ChatGPT only came out last year... And now it is multimodal, can access up-to-date information, browse the web, execute code, interact with files, generate images, has a 128k context window, can be customized into "GPT" agents, can be integrated into existing products as an assistant, can understand and call your own code to build crazy new experiences, and has gotten much cheaper and faster to run?! Within a year?! WTF. It's worth reading the announcements, like the introduction of GPTs and their new products, in OpenAI's own words. Their progress has been insane, and as with many other recent updates (Meta, Snapdragon, GitHub…), Satya Nadella and Microsoft play a role in empowering these companies. It's wild to see Microsoft's positioning as a platform company play out.
Presidential Executive Order on Safe AI Development. It seems the regulatory apparatus of the United States is kicking into gear, trying to control and safeguard the development of AI. They want to increase the chance that AI developments are "good" and that the US can control them. There is a solid summary of what all of this means on the HAI blog of Stanford University, as well as an official fact sheet from the White House outlining what the executive order is trying to accomplish.
Gen-2 Updates by RunwayML. Runway's Gen-2 is an AI for creating short video clips, kind of like Midjourney or DALL-E but for videos instead of images. It has now gotten an update that massively increased the quality of its output, and people have gone crazy over how much of a step up it is. I think we will soon be able to generate entire movies without ever touching a camera, using Runway Gen-x or some model like it, and that is just crazy.
Text to 3D by LumaAI. You have to sign up on Discord to use the research preview of the model, but I think it's worth trying out. It's another magical moment, just like when Midjourney was first released... Pretty soon we will have production-ready 3D models generated entirely from a text prompt. This in turn will let game creators and CGI artists go completely nuts, since they won't have to learn the hard and time-consuming skill of modeling 3D objects anymore. They can simply generate them on the fly with a tool like this.
Grok by xAI. This LLM, built by a team founded by Elon Musk, is a little different from the helpful assistants created by the likes of OpenAI, Google, or Anthropic. Instead of a boring "Hey, I can't do this for you because I am a large language model designed by OpenAI," they aim to make their chatbot more of a whimsical character: something that is fun to interact with, cracks jokes, uses sarcasm, asks you questions in return, and is generally a little more humanlike. It can also access information from X in real time, but what is craziest to me is the speed with which they developed it. They say it took only four months to create a model that is competitive with the other big LLMs out there, which is quite crazy. Sadly, access is currently limited to US users in an invite-only beta, but all of this will change in the future, and I am looking forward to the point where they release a model that is better than GPT-4. Let's see where this is going.
First Legal Contract Without Human Involvement by CNBC. In this article, CNBC describes how a company by the name of Luminance is creating and selling AI tools to automate parts of the legal workflow. Luminance aims to make lawyers' lives easier, as well as to sell legal automation solutions directly to companies so they can do more legal work with fewer lawyers. Luminance has now, for the first time ever, used an AI to generate and negotiate a valid contract completely from scratch. Their AI helps lawyers much like GitHub's Copilot helps developers. They claim to save companies 90% of the time they spend on legal issues, which is wild.
Turn Sketches into Working Code by tldraw. This web app is essentially a whiteboard for sketching out UI and website ideas, with a magical "make real" button powered by GPT-4. It turns the sketch into a working website, including JS, CSS, and HTML code, and does it all within mere seconds. This stuff is absolutely nuts, and the demos of people using the tool are insane. Just look at this, or this, or this, or this. Or simply browse the tldraw X account to see more crazy examples of it in action... Things like this are magic now but will become commonplace soon enough, and the next generation of tools will look like true magic from today's point of view.
Sam Altman is fired from OpenAI. Lastly, I have to mention this here. It came as a surprise to many, and the internet is on fire trying to piece together what might have happened. X, formerly Twitter, is completely wild with ideas, ranging from "they figured out AGI" to "Sam Altman used the OpenAI brand to build competitors with a different agenda, involving Saudi oil money for investments"... Personally, I think the changes Sam Altman was pushing for with the commercialization of ChatGPT may have gone against the core ideas of OpenAI's original charter: trading away safety aspects in the development of AGI in favor of better market penetration and money... As the situation unfolds, I am curiously following along. There are now even rumors of Sam Altman coming back after all, so I am thrilled to see where this all is going.
🌌 Midjourney 🌌
Midjourney also dropped its style tuner feature, which lets you preserve the style of a generated image across prompts. First, you generate a tuner with the /tune command, then you select the images whose style you want to copy, and you receive a style code that can be used with --style after any /imagine prompt. It's fun to play around with. The below images were generated with this style parameter:
I like how it generally preserves the color palette and rough art style, but I feel it could be a little better. The images look similar in style but, at least to me, not yet identical. I think this is a good first step, though, and I am excited to see how the feature evolves over time and how it changes the way people produce art with Midjourney, replacing complex, long-winded prompts with a simple style code that always produces the exact look you want.
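For reference, the workflow looks roughly like this inside the Midjourney Discord (the prompts and the style code here are made-up placeholders, not real ones):

```
/tune prompt: a lighthouse at dusk, oil painting
# Midjourney links you to a tuner page where you pick the stylistic
# directions you prefer, then hands back a reusable style code.

/imagine prompt: a sailboat in a storm --style a1b2c3d4e5
```

The same --style code can then be appended to completely different prompts to keep the look consistent across generations.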
🎶 Song 🎶
Tak Tak To Ja by Hania Rani
That's all for this time. I hope you found this newsletter useful, beautiful, or even both!
Have ideas for improving it? As always please let me know.