Research on generative models is some of the most interesting AI work happening right now, and also the work that provokes the most discussion.

Recently, OpenAI published a post about language models that generate coherent paragraphs of text and can perform a number of language tasks (like translation and summarization) without any task-specific training. They chose not to release the full model for fear of misuse.

A few months ago, there was a lot of discussion about deepfakes: audio and video generated by AI models. Examples included generated videos of Vladimir Putin and generated audio of Barack Obama.

Here are a few thoughts on this topic (mostly open questions):

  • First, what happens when everyone has access to this technology and it becomes weaponized? What does that world look like, and how do we deal with it? How should we think about the massive change coming in what we read, watch, and hear online?
  • Many questions come up around identity. How do we know whether the person we are talking to online is a real human? Does this matter? Should AI-generated images and text be required to carry clear labels saying they were generated by a model?
  • How can we use generative AI to extend human creativity, or to augment human capability?
  • What role do networks still play? If sufficiently good content can be generated by a machine, what role is left for the content communities that were previously the only way to achieve that scale?
  • What does this mean for journalism? When stories are written and translated by a machine, what role does traditional journalism play in our society? And given journalism’s role in upholding democratic values, what does that mean for democracy itself?

I’ve probably extrapolated a bit there.

Steve Jobs called the computer a bicycle for the mind. AI feels like it’s become the “bicycle” for the computer.