Much better at applying feedback
Most new AI image models have a feedback function that lets you describe what you’d like to change after the initial output you are given. From personal experience – and I’m sure many of you can relate – those feedback features can easily suck you into an endless loop of iterations that eventually end in you closing your tab out of frustration.
That all just changed.
Somehow OpenAI’s new image generator actually understands your feedback and applies it. It’s not always perfect and it might still take a few tries but you eventually get your desired outcome – or something close to it.
The ethics and legality of it all ⚖️
Ability is one thing, but morality and legality are entirely separate things.
While you’ll probably have a tough time finding anyone who disputes the capabilities of OpenAI’s new model (and its ability to Ghiblify images), you don’t have to look far to see that not everyone feels so warm and fuzzy about it. Perhaps least of all Studio Ghibli’s co-founder – Hayao Miyazaki.
Although he hasn’t publicly come out to comment on the Ghibli AI trend specifically, his name immediately began popping up as the trend took off. Critics started quoting what he said in a documentary from almost a decade ago. In the documentary, after being shown a crude AI-generated animation demo of a zombie, he responded with:
“Whoever creates this stuff has no idea what pain is whatsoever. I am utterly disgusted… I strongly feel that this is an insult to life itself.”
Those are some harsh words. Many people are understandably upset, and they are quoting Miyazaki as a kind of authoritative validation of their views. As in: “hey, look, the guy whose style you’re emulating would be against this.”
What the law says
Unfortunately, personal opinions and legal interpretations are not the same thing – which is why Altman and Co. have effectively been able to get away with using the Ghibli style in their AI models.
The OpenAI CEO even seemed to mock naysayers on his X account by posting “just because you should doesn’t mean you can” – a play on words that reverses the position of should and can to change the meaning of the classic appeal to morality.
Now I’m no lawyer, but I don’t need to be one to understand that OpenAI and their legal team believe they can handle the smoke. Or at least that the fire will bring them enough money to offset any damages they might have to pay later. This is the approach they have historically taken. And even though they (and others) have been slapped with lawsuits, that doesn’t seem to have persuaded them to alter course. 1
What I do understand, in my capacity as a non-expert, is that above all, the legality of AI models using existing works is not set in stone – and it also depends on where in the world you’re talking about.
A matter of jurisdiction
Legally speaking, OpenAI is a U.S. company headquartered in San Francisco. But in practice, it’s a transnational corporate behemoth. The work they train their models on comes from every corner of the web-connected Earth. In scenarios where another U.S. entity is upset with OpenAI about their work being used without permission and decides to sue them, it’s easy to establish jurisdiction.
But what if – hypothetically speaking – a Japanese company like Studio Ghibli wanted to sue OpenAI for similar reasons?
Unless OpenAI had substantial assets in Japan, enforcing a Japanese judgment against a U.S. company would be challenging, if not impossible. Not only that, but Japanese law would offer Studio Ghibli little protection anyway. Article 30-4 of Japan’s Copyright Act explicitly permits the use of copyrighted works for AI training purposes, even for commercial use. 2
So if they wanted some kind of legal recourse, Studio Ghibli’s only option would be to file a suit in U.S. courts. The case would be tried under American copyright law, where there are two main factors at play:
- Generative AI can potentially violate copyright law if the program has access to a copyright owner’s work and is generating output that is “substantially similar” to the copyright owner’s existing work. But – and this is a big BUT – there is no federal legal consensus that determines what constitutes “substantial similarity.” 3
- Beyond that, the use of copyrighted material for AI training falls under the “fair use” doctrine. This is a legal principle that allows limited use of copyrighted material without permission under certain circumstances.
In short, it’s a legal grey area, and it’s likely to stay one for the foreseeable future. In OpenAI’s view, that grey doesn’t equate to an orange light, but a very green one. Ethics be damned.
Where we are heading 🛣️
As a fan and user of AI tools, I have mixed feelings about their impact on society. I think about it often and have even written about it on multiple occasions. In one particular post, from July of 2023, I wrote about how AI will eventually overtake humanity. In that post, I included seven AI-generated images created using OpenAI’s DALL·E 2. Below are three of them: