Fifth Starship Flight, Machines of Loving Grace and Firefly Video – Live and Learn #53
Welcome to this edition of Live and Learn. This time with an article by Dario Amodei on the benefits of AI, the fifth Starship flight by SpaceX, and two new video models, one by the University of Tokyo and one by Adobe. As always, I hope you enjoy this edition of Live and Learn!
✨ Quote ✨
Do something that makes you feel more creative, more alive, more human. Something that sets your brain on fire. Something that you love doing more than anything else in the world. Something that no one can do just like you. Something that can’t be playbooked. You are not a machine. Don’t act like one.
– NotBoring - (source)
Links
Machines of Loving Grace by Dario Amodei. This essay by Dario Amodei is an ode to how beneficial and good AI could turn out to be in the future if we get it right. He structures the advances he expects into rough categories: biology and health, neuroscience and the mind, economic development and poverty, peace and governance, and lastly work and meaning. In all of them, he hopes the sensible use of AI will change the world drastically for the better. Let's see if that vision comes true or not.
Starship's 5th Test Flight by SpaceX. I cried while watching this because it's just so inspirational, and I want to live in a future where traveling to space is as normal as flying in an airplane is today. The moment when they caught the rocket that had just come back from space with giant metal chopsticks... well, that was just insane. In the livestream it looks like magic, and in a way it is. It's sufficiently advanced technology that looks and feels like magic, but in the end it's real, and it's another giant step forward toward fully and rapidly reusable rockets. For a deeper dive into what this means, read this essay by Tomas Pueyo: How Starship Will Change Humanity.
Tango: Create Video Avatar Scenes for ANY audio by the University of Tokyo. This model can take a photo of an arbitrary person and some audio of somebody speaking and then create a video where it looks like the person from the photo is saying the things from the audio input, including hand and body gestures. Deepfakes are here. You can have a look at more examples on their official paper page. For this paper, I really wonder: what is the use case, beyond showing that this is possible? They mention virtual YouTube content creation in their paper, but I think that's about it. Is it worth it to create more tools for deepfakes like this?
Firefly Video by Adobe. Adobe released their video model, and their main differentiator is that Firefly's creations will be safe to use commercially. They can offer this guarantee because they own all the training data they used to create the model. I am still waiting for the headlines when this model produces trademarked Disney characters regardless of that 😉. For even more announcements on what Adobe is working on, you can also watch the full MAX keynote. In short, they are integrating AI features into all their products, enabling you to create sounds, images, videos, designs, and more, all with the help of AI.
🌌 Travel 🌌
For now, I am back in Germany for another month before leaving for South America over the winter. But autumn here has its beautiful sides too. The colors of the leaves, the clouds, the little drops of dew on the grass in the morning... I don't know why, but I appreciate these little details of the things I see tremendously; they make me happy.
🎶 Song 🎶
Traveller by JOSIN and Lambert
That's all for this time. I hope you found this newsletter useful, beautiful, or even both!
Have ideas for improving it? As always please let me know.
Cheers,
– Rico