OpenAI's missed opportunity with disinformation
The giants of artificial intelligence face the same moral dilemma that confronted social media two decades ago, at the start of their rapid expansion: responsibility or profits. And we already know what they chose. I read that OpenAI has decided to withdraw Sora 2, the application that allowed realistic videos to be created with the utmost ease. The result of that ease? According to the Brazilian fact-checker Aos Fatos, four out of ten videos made with the tool that went viral spread disinformation of various kinds, often about supernatural events or public safety issues. But the company behind ChatGPT has not canceled the service out of a sudden fit of responsibility: it simply wants to put its eggs in another basket, that of integrating AI into robots to solve physical tasks. At least, since the computing requirements are less intensive than those of hundreds of thousands of lunatics trying to poison the networks, there will be an environmental benefit in addition to the informational one.
It is a shame that OpenAI has not taken advantage of the change in strategy to champion a discourse grounded in ethics and vigilance for the common good. We are at an early stage of this technology, with a handful of big companies fighting to position themselves in the market, and perhaps a company that presented itself under this flag could win the battle of discourse. The European Union, and everything that depends on its member states, could introduce responsibility criteria into its public contract awards, and perhaps that would tip the balance slightly towards technology with a humanistic vision. I know there is a touch, or seventeen, of utopia in what I propose, but citizens must begin to demand from technology companies the same standards that we demand, for example, from the food industry: transparency, traceability, and safety.