15.ai

From Wikipedia, the free encyclopedia
{{Short description|Real-time text-to-speech tool using artificial intelligence}}
{{pp-protected|small=yes}}
{{Good article}}
{{Multiple issues|section=|
{{COI|date=October 2024}}
{{POV|date=October 2024}}
{{cite check|date=October 2024}}
}}
{{Use mdy dates|date=July 2022}}
{{Infobox website
| name = 15.ai
| logo_caption = {{Deletable file-caption|Thursday, 24 October 2024|F7}}
| logo = File:15 ai logo transparent.png
| screenshot =
| caption =
|archive-url= https://web.archive.org/web/20210118213308/https://www.rockpapershotgun.com/2021/01/18/put-words-in-game-characters-mouths-with-this-fascinating-text-to-speech-tool/
|url-status= live
}}</ref> Developed by a [[Pseudonym|pseudonymous]] [[MIT]] researcher under the name '''15''', the project uses a combination of [[audio synthesis]] algorithms, [[speech synthesis]] [[deep neural networks]], and [[sentiment analysis]] models to generate and serve emotive character voices faster than real time, even for characters with very little [[training data|trainable]] data.


Launched in early 2020, 15.ai began as a [[proof of concept]] of the [[democratization of technology|democratization]] of voice acting and dubbing using technology.<ref name="thebatch">
{{cite web |last=Ng |first=Andrew |date=2020-04-01 |title=Voice Cloning for the Masses |url=https://blog.deeplearning.ai/blog/the-batch-ai-against-coronavirus-datasets-voice-cloning-for-the-masses-finding-unexploded-bombs-seeing-see-through-objects-optimizing-training-parameters |url-status=dead |archive-url=https://web.archive.org/web/20200807111844/https://blog.deeplearning.ai/blog/the-batch-ai-against-coronavirus-datasets-voice-cloning-for-the-masses-finding-unexploded-bombs-seeing-see-through-objects-optimizing-training-parameters |archive-date=2020-08-07 |access-date=2020-04-05 |website=The Batch}}
</ref> Its gratis and non-commercial nature (with the only stipulation being that the project be properly credited when used), ease of use, no [[user account]] registration requirement, and substantial improvements to current text-to-speech implementations have been lauded by users;<ref name="gameinformer"/><ref name="kotaku" /><ref name="pcgamer" /> however, some critics and [[voice actor]]s have questioned the [[15ai#Copyrighted material in deep learning|legality]] and [[Ethics of artificial intelligence|ethicality]] of leaving such technology publicly available and readily accessible.<ref name="thebatch"/><ref name="batch"/><ref name="wccftech"/>


Credited as the impetus behind the popularization of AI [[audio deepfake|voice cloning]] (also known as ''[[deepfakes|audio deepfakes]]'') in [[content creation]] and as the first publicly available AI vocal synthesis project to involve the use of existing popular fictional characters,{{By whom|date=October 2024}} 15.ai has had a significant impact on multiple Internet [[fandom]]s, most notably the [[My Little Pony: Friendship Is Magic fandom|''My Little Pony: Friendship Is Magic'']], ''[[Team Fortress 2]]'', and ''[[SpongeBob SquarePants]]'' fandoms. Furthermore, 15.ai has inspired the use of [[4chan]]'s '''Pony Preservation Project''' in other [[synthetic media|generative artificial intelligence]] projects.<ref name="automaton"/><ref name="Denfaminicogamer"/>


Several commercial alternatives have emerged with the rising popularity of 15.ai, leading to cases of misattribution and theft. In January 2022, it was discovered that '''Voiceverse NFT''', a company with which voice actor [[Troy Baker]] had announced a partnership, had [[plagiarism|plagiarized]] 15.ai's work as part of their platform.<ref name="nme">{{cite web
}}</ref>


In September 2022, a year after its last stable release, 15.ai was temporarily taken down in preparation for a future update. As of October 2024, the website remains offline; 15's most recent post, dated February 2023, stated that the next update would be the culmination of a year and a half of work.<ref>{{Cite tweet |number=1628834708653068290 |user=fifteenai |title=If all goes well, the next update should be the culmination of a year and a half of nonstop work put into a huge number of fixes and major improvements to the algorithm. Just give me a bit more time – it should be worth it.}}</ref>


== Features ==
{{See also|Audio deepfake}}
[[File:WaveNet animation.gif|thumb|right|A stack of dilated causal convolutional layers used in [[DeepMind]]'s [[WaveNet]].<ref name="deepmind" />]]
In 2016, with the proposal of [[DeepMind]]'s [[WaveNet]], deep-learning-based models for speech synthesis began to gain popularity as a method of modeling waveforms and generating human-like speech.<ref name="arxiv1">{{cite arXiv |last=Hsu |first=Wei-Ning |eprint=1810.07217 |title=Hierarchical Generative Modeling for Controllable Speech Synthesis |class=cs.CL |date=2018 }}</ref><ref name="arxiv2">{{cite arXiv |last=Habib |first=Raza |eprint=1910.01709 |title=Semi-Supervised Generative Modeling for Controllable Speech Synthesis |class=cs.CL |date=2019 }}</ref><ref name="deepmind">{{cite web|url=https://www.deepmind.com/blog/high-fidelity-speech-synthesis-with-wavenet|title=High-fidelity speech synthesis with WaveNet|last1=van den Oord|first1=Aäron|last2=Li|first2=Yazhe|last3=Babuschkin|first3=Igor|date=2017-11-12|website=[[DeepMind]]|access-date=2022-06-05|archive-date=2022-06-18|archive-url=https://web.archive.org/web/20220618205838/https://www.deepmind.com/blog/high-fidelity-speech-synthesis-with-wavenet|url-status=live}}</ref> Tacotron2, a neural network architecture for speech synthesis developed by [[Google AI]], was published in 2018 and required tens of hours of audio data to produce intelligible speech; when trained on 2 hours of speech, the model produced intelligible speech of mediocre quality, and when trained on 36 minutes of speech, it was unable to produce intelligible speech.<ref name="tacotron">{{cite web|url=https://google.github.io/tacotron/publications/semisupervised/index.html|title=Audio samples from "Semi-Supervised Training for Improving Data Efficiency in End-to-End Speech Synthesis"|date=2018-08-30|access-date=2022-06-05|archive-date=2020-11-11|archive-url=https://web.archive.org/web/20201111222714/https://google.github.io/tacotron/publications/semisupervised/index.html|url-status=live}}</ref><ref name="arxiv3">{{cite arXiv |eprint=1712.05884 |title=Natural TTS Synthesis by Conditioning WaveNet on Mel-Spectrogram Predictions |class=cs.CL |date=2018 |last1=Shen |first1=Jonathan |last2=Pang |first2=Ruoming |last3=Weiss |first3=Ron J. |last4=Schuster |first4=Mike |last5=Jaitly |first5=Navdeep |last6=Yang |first6=Zongheng |last7=Chen |first7=Zhifeng |last8=Zhang |first8=Yu |last9=Wang |first9=Yuxuan |last10=Skerry-Ryan |first10=RJ |last11=Saurous |first11=Rif A. |last12=Agiomyrgiannakis |first12=Yannis |last13=Wu |first13=Yonghui }}</ref>
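The stacked dilated causal convolutions pictured above grow a model's receptive field rapidly with depth, which is what lets WaveNet-style models condition each audio sample on a long window of past samples. As a rough illustration only (the kernel size and dilation schedule below are generic example values, not the actual hyperparameters of WaveNet or 15.ai), the receptive field of such a stack can be computed as:

```python
def receptive_field(kernel_size, dilations):
    """Receptive field (in samples) of a stack of dilated causal
    convolutions: each layer adds (kernel_size - 1) * dilation samples."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# Ten layers with doubling dilation and kernel size 2 (illustrative values):
dilations = [2 ** i for i in range(10)]  # 1, 2, 4, ..., 512
print(receptive_field(2, dilations))     # → 1024
```

Because each doubling of dilation doubles the added context, depth buys exponential context growth, whereas undilated convolutions of the same depth would cover only a few dozen samples.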


For years, reducing the amount of data required to train a realistic high-quality text-to-speech model has been a primary goal of scientific researchers in the field of deep learning speech synthesis.<ref>{{cite arXiv |last=Chung |first=Yu-An |eprint=1808.10128 |title=Semi-Supervised Training for Improving Data Efficiency in End-to-End Speech Synthesis |class=cs.CL |date=2018 }}</ref><ref>{{cite arXiv |last=Ren |first=Yi |eprint=1905.06791 |title=Almost Unsupervised Text to Speech and Automatic Speech Recognition |class=cs.CL |date=2019 }}</ref> The developer of 15.ai claims that as little as 15 seconds of data is sufficient to clone a voice up to human standards, a significant reduction in the amount of data required.<ref name="eurogamer"/>
{{Main|Authors Guild, Inc. v. Google, Inc.}}
A landmark case between [[Google]] and the [[Authors Guild]] in 2013 ruled that [[Google Books]]—a service that searches the full text of printed copyrighted books—was [[Transformative use#Second Circuit—Authors Guild|transformative]], thus meeting all requirements for fair use.<ref>- F.2d – (2d Cir, 2015). (temporary cites: 2015 U.S. App. LEXIS 17988;
[https://salsa3.salsalabs.com/o/50260/images/AGvGoogle.pdf Slip opinion]{{Dead link|date=September 2024 |bot=InternetArchiveBot |fix-attempted=yes }} (October 16, 2015))</ref> This case set an important legal precedent for the field of deep learning and artificial intelligence: using copyrighted material to train a [[discriminative model]] or a ''non-commercial'' [[generative model]] was deemed legal. The legality of ''commercial'' generative models trained using copyrighted material is still under debate; due to the black-box nature of machine learning models, any allegations of copyright infringement via direct competition would be difficult to prove.{{citation needed|date=June 2024}}


== Development ==
15.ai was designed and created by an anonymous research scientist affiliated with the [[Massachusetts Institute of Technology]] known by the alias ''15''.<ref name="twitter">{{cite web |url=https://twitter.com/fifteenai |title=15 |date=2022-06-09 |website=[[Twitter]] |access-date=2022-06-09}}</ref>

According to posts made by its developer on [[Hacker News]], 15.ai costs several thousand dollars per month to operate; the developer is able to fund the project thanks to a successful startup [[exit strategy|exit]].<ref name="hn">{{cite web
|url= https://news.ycombinator.com/item?id=31711118
|title= 15.ai
|last=
|first=
|date= 2022-06-12
|website= [[Hacker News]]
|publisher=
|access-date= 2022-06-13
|quote=
|archive-date= 2022-06-13
|archive-url= https://web.archive.org/web/20220613000443/http://news.ycombinator.com/item?id=31711118
|url-status= live
}}</ref> The developer has stated that during their undergraduate years at MIT, they were paid the [[minimum wage in the United States|minimum hourly rate]] to work on a related project (approximately $14 an hour in [[Massachusetts]]<ref>{{cite web
|url= https://urop.mit.edu/guidelines/participation-considerations/pay-credit-volunteer/
|title= Pay, Credit & Volunteer
|last=
|first=
|date=
|website= [[MIT]] [[UROP]]
|publisher=
|access-date= 2022-06-13
|quote=
|archive-date= 2022-06-19
|archive-url= https://web.archive.org/web/20220619234437/https://urop.mit.edu/guidelines/participation-considerations/pay-credit-volunteer/
|url-status= live
}}</ref>) that eventually evolved into 15.ai. They also stated that the democratization of voice cloning technology is not the only function of the website; in response to a user asking whether the research could be conducted without a public website, the developer wrote:
{{Blockquote
|text=[...] The website has multiple purposes. It serves as a [[proof of concept]] of a platform that allows anyone to create [[content (media)|content]], even if they can't hire someone to voice their projects.

It also demonstrates the progress of my research in a far more engaging manner—by being able to use the actual model, you can discover things about it that even I wasn't aware of (such as getting characters to make gasping noises or moans by placing commas in between certain phonemes).

It also doesn't let me get away with [[cherry picking|picking and choosing the best results]] and [[one-sided argument|showing off only the ones that work]] (which I believe is a big problem endemic in [[machine learning|ML]] today—it's disingenuous and misleading). Being able to interact with the model with no filter allows the user to judge exactly how good the current work is at face value.
|author=15ai, ''Hacker News''<ref name="hn"/>
}}


The algorithm used by the project to facilitate the cloning of voices with minimal viable data has been dubbed '''DeepThroat'''<ref name="15aiabout">{{cite web |date=2022-02-20 |title=15.ai – About |url=https://15.ai/about |url-status=dead |archive-url=https://archive.today/20211006074716/https://15.ai/about |archive-date=2021-10-06 |access-date=2022-02-20 |website=15.ai}}</ref> (a [[double entendre]] in reference to [[speech synthesis]] using [[deep neural networks]] and the sexual act of [[deep-throating]]). The project and algorithm—initially conceived as part of MIT's [[Undergraduate Research Opportunities Program]]—had been in development for years before the first release of the application.<ref name="automaton"/>


[[File:4chan Logo.png|thumb|The ''Pony Preservation Project'' from [[4chan]]'s /mlp/ board has been integral to the development of 15.ai.<ref name="gwern"/>]]
The developer has also worked closely with the Pony Preservation Project from /mlp/, the ''[[My Little Pony: Friendship Is Magic|My Little Pony]]'' [[Internet forum|board]] of [[4chan]]. The '''Pony Preservation Project''', which began in 2019, is a "collaborative effort by /mlp/ to build and curate pony datasets" with the aim of creating applications in artificial intelligence.<ref name="gwern">{{cite web
|url= https://www.gwern.net/docs/ai/music/index#15-project-2020-section
|title= "15.ai"⁠, 15, Pony Preservation Project
|last= Branwen
|first= Gwern
|date= 2020-03-06
|website= Gwern.net
|publisher= Gwern
|access-date= 2022-06-17
|url-status= live
|archive-date= 2022-03-18
|archive-url= https://web.archive.org/web/20220318160737/https://www.gwern.net/docs/ai/music/index#15-project-2020-section
}}</ref><ref>{{cite web
|url= https://www.equestriadaily.com/2020/03/neat-pony-preservation-project-using.html
|title= Neat "Pony Preservation Project" Using Neural Networks to Create Pony Voices
|publisher= Desuarchive
|access-date= 2022-02-20
|quote= }}</ref> The ''Friendship Is Magic'' voices on 15.ai were trained on a large dataset [[crowdsource]]d by the Pony Preservation Project: audio and dialogue from the show and related media—including [[List of My Little Pony: Friendship Is Magic episodes|all nine seasons of ''Friendship Is Magic'']], [[My Little Pony: The Movie (2017 film)|the 2017 movie]], [[Pony Life|spinoffs]], [[data breach|leaks]], and various other content voiced by the same voice actors—were [[audio signal processing|parsed]], [[transcription (linguistics)|hand-transcribed]], and [[noise reduction|processed]] to remove background noise. According to the developer, the collective efforts and constructive criticism from the Pony Preservation Project have been integral to the development of 15.ai.<ref name="gwern"/>

In addition, the developer has stated that the logo of 15.ai, which features a robotic [[Twilight Sparkle]], is an homage to the fact that her voice (as originally portrayed by [[Tara Strong]]) was indispensable to the implementation of emotional contextualizers.<ref name="hn"/>


== Reception ==
[[File:Andrew Ng WSJ (2).jpg|thumb|170px|right|Computer scientist [[Andrew Ng]] wrote that the technology behind 15.ai could open the door to cases of [[Deepfake#Concerns|impersonation and fraud]].]]
15.ai has been met with largely positive reception. Liana Ruppert of ''[[Game Informer]]'' described 15.ai as "simplistically brilliant."<ref name="gameinformer"/> Lauren Morton of ''[[Rock, Paper, Shotgun]]'' and Natalie Clayton of ''[[PCGamer]]'' called it "fascinating,"<ref name="rockpapershotgun"/><ref name="pcgamer"/> and José Villalobos of ''[[:es:LaPS4|LaPS4]]'' wrote that it "works as easy as it looks."<ref name="LaPS4"/>{{efn|Translated from the original Spanish: ''"La dirección es 15.AI y funciona tan fácil como parece."''<ref name="LaPS4"/>}} Users praised the ability to easily create audio of popular characters that sounds believable to those unaware that the voices had been synthesized by artificial intelligence: Zack Zwiezen of ''[[Kotaku]]'' reported that "[his] girlfriend was convinced it was a new voice line from [[GLaDOS]]' voice actor, [[Ellen McLain]],"<ref name="kotaku"/> while Rionaldi Chandraseta of ''Towards Data Science'' wrote that, upon watching a [[YouTube]] video featuring popular character voices generated by 15.ai, "[his] first thought was the video creator used [[Cameo (website)|cameo.com]] to pay for new dialogues from the original voice actors," and stated that "the quality of voices done by 15.ai is miles ahead of [its competitors]."


The project has also been acclaimed overseas, particularly in [[Japan]]. Takayuki Furushima of ''Den Fami Nico Gamer'' described 15.ai as "like magic," and Yuki Kurosawa of ''Automaton Media'' called it "revolutionary."<ref name="Denfaminicogamer"/><ref name="automaton"/>

Computer scientist and technology entrepreneur [[Andrew Ng]] commented in his newsletter ''The Batch'' that the technology behind 15.ai could be "enormously productive" and could "revolutionize the use of [[virtual actor]]s"; he also noted that "synthesizing a human actor's voice without consent is arguably unethical and possibly illegal" and could open the door to cases of [[Deepfake#Concerns|impersonation and fraud]].<ref name="thebatch"/><ref name="batch"/> In his blog ''[[Marginal Revolution (blog)|Marginal Revolution]]'', [[economist]] [[Tyler Cowen]] deemed 15 one of the "most underrated talents in AI and machine learning."<ref>{{cite web
|url= https://marginalrevolution.com/marginalrevolution/2022/05/the-most-underrated-talent-in-ai.html
|title= The most underrated talent in AI?
|last= Cowen
|first= Tyler
|date= 2022-05-12
|website= [[Marginal Revolution (blog)|Marginal Revolution]]
|access-date= 2022-06-16
|url-status= live
|archive-date= 2022-06-19
|archive-url= https://web.archive.org/web/20220619203626/https://marginalrevolution.com/marginalrevolution/2022/05/the-most-underrated-talent-in-ai.html
}}</ref>


== Impact ==
=== Fandom content creation ===
<!-- Deleted image removed: [[File:Screenshot from The Tax Breaks.png|thumb|left|300px|The fan-created episode ''The Tax Breaks'', based on a work of fanfiction published in 2014, was entirely voiced using 15.ai.<ref name="taxbreaks"/>]] -->
15.ai has been frequently used for [[content creation]] in various [[fandom]]s, including the [[My Little Pony: Friendship Is Magic fandom|''My Little Pony: Friendship Is Magic'' fandom]], the ''[[Team Fortress 2]]'' fandom, the ''[[Portal (series)|Portal]]'' fandom, and the ''[[SpongeBob SquarePants]]'' fandom, with numerous videos and projects containing speech from 15.ai having gone [[viral video|viral]].<ref name="kotaku" /><ref name="gameinformer" />


The ''My Little Pony: Friendship Is Magic'' fandom has seen a resurgence in video and musical content creation as a direct result, inspiring a new genre of fan-created content assisted by artificial intelligence. Some [[fanfiction]]s have been adapted into fully voiced "episodes": ''The Tax Breaks'' is a 17-minute-long animated video rendition of a fan-written story published in 2014 that uses voices generated from 15.ai with [[sound effects]] and [[audio editing]], emulating the episodic style of the early seasons of ''Friendship Is Magic''.<ref name="taxbreaks">{{cite web
|url= https://www.equestriadaily.com/2022/05/full-simple-animated-episode-tax-breaks.html
|title= Full Simple Animated Episode – The Tax Breaks (Twilight)
|last= Scotellaro
|first= Shaun
|date= 2022-05-15
|website= Equestria Daily
|access-date= 2022-05-28
}}</ref>
|archive-date= 2022-01-31
|archive-url= https://web.archive.org/web/20220131172752/https://www.tweaktown.com/news/84299/last-of-us-actor-troy-baker-heeds-fans-abandons-nft-plans/index.html
|url-status= live
}}</ref><ref name="wgtc">{{cite web
|url= https://wegotthiscovered.com/gaming/the-last-of-us-actor-troy-baker-reverses-course-on-nfts-amid-fan-backlash/
|title= 'The Last of Us' actor Troy Baker reverses course on NFTs amid fan backlash
|last= Peterson
|first= Danny
|date= 2022-01-31
|website= We Got This Covered
|access-date= 2022-02-14
|quote=
|archive-date= 2022-02-14
|archive-url= https://web.archive.org/web/20220214191046/https://wegotthiscovered.com/gaming/the-last-of-us-actor-troy-baker-reverses-course-on-nfts-amid-fan-backlash/
|url-status= live
}}</ref><ref>{{Cite web|last=Peters|first=Jay|date=2022-01-31|title=The voice of Joel from The Last of Us steps away from NFT project after outcry|url=https://www.theverge.com/2022/1/31/22910633/troy-baker-voiceverse-nft-voice-actor-project-the-last-of-us|access-date=2022-02-04|website=The Verge|language=en|archive-date=2022-02-04|archive-url=https://web.archive.org/web/20220204042246/https://www.theverge.com/2022/1/31/22910633/troy-baker-voiceverse-nft-voice-actor-project-the-last-of-us|url-status=live}}</ref>


===Reactions from voice actors===
Some voice actors have publicly decried the use of voice cloning technology. Cited reasons include concerns about [[Deepfake#Concerns|impersonation and fraud]], unauthorized use of an actor's voice in [[Deepfake pornography|pornography]], and the potential of [[technological unemployment|AI being used to make voice actors obsolete]].<ref name="wccftech"/>


== See also ==


==External links==
* [https://ghostarchive.org/archive/iA306 Archived frontend]
* {{Official website|15.ai}}
* {{Twitter | id= fifteenai | name= 15 }}

Latest revision as of 14:40, 28 October 2024

15.ai
Type of site: Artificial intelligence, speech synthesis, machine learning, deep learning
Available in: English
Founder(s): 15
URL: 15.ai
Commercial: No
Registration: None
Launched: March 12, 2020 (initial release)
Stable release: v24.2.1
Current status: Under maintenance

15.ai is a non-commercial freeware artificial intelligence web application that generates natural emotive high-fidelity[a] text-to-speech voices from an assortment of fictional characters from a variety of media sources.[4][5][6][7] Developed by a pseudonymous MIT researcher under the name 15, the project uses a combination of audio synthesis algorithms, speech synthesis deep neural networks, and sentiment analysis models to generate and serve emotive character voices faster than real-time, particularly those with a very small amount of trainable data.

Launched in early 2020, 15.ai began as a proof of concept of the democratization of voice acting and dubbing using technology.[8] Its gratis and non-commercial nature (with the only stipulation being that the project be properly credited when used), ease of use, no user account registration requirement, and substantial improvements to current text-to-speech implementations have been lauded by users;[5][4][6] however, some critics and voice actors have questioned the legality and ethicality of leaving such technology publicly available and readily accessible.[9]

Credited as the impetus behind the popularization of AI voice cloning (also known as audio deepfakes) in content creation and as the first publicly available AI vocal synthesis project to involve the use of existing popular fictional characters[by whom?], 15.ai has had a significant impact on multiple Internet fandoms, most notably the My Little Pony: Friendship Is Magic, Team Fortress 2, and SpongeBob SquarePants fandoms. Furthermore, 15.ai has inspired the use of 4chan's Pony Preservation Project in other generative artificial intelligence projects.[10][11]

Several commercial alternatives have emerged with the rising popularity of 15.ai, leading to cases of misattribution and theft. In January 2022, it was discovered that Voiceverse NFT, a company with which voice actor Troy Baker had announced a partnership, had plagiarized 15.ai's work as part of their platform.[12][13][14]

In September 2022, a year after its last stable release, 15.ai was temporarily taken down in preparation for a future update. As of October 2024, the website is still offline, with 15's most recent post being dated February 2023.[15]

Features

HAL 9000, known for his sinister robotic voice, is one of the available characters on 15.ai.[4]

Available characters include GLaDOS and Wheatley from Portal, characters from Team Fortress 2, Twilight Sparkle and a number of main, secondary, and supporting characters from My Little Pony: Friendship Is Magic, SpongeBob from SpongeBob SquarePants, Daria Morgendorffer and Jane Lane from Daria, the Tenth Doctor from Doctor Who, HAL 9000 from 2001: A Space Odyssey, the Narrator from The Stanley Parable, the Wii U/3DS/Switch Super Smash Bros. Announcer (formerly), Carl Brutananadilewski from Aqua Teen Hunger Force, Steven Universe from Steven Universe, Dan from Dan Vs., and Sans from Undertale.[11][10][16][17]

The deep learning model used by the application is nondeterministic: each time speech is generated from the same string of text, the intonation of the speech will be slightly different. The application also supports manually altering the emotion of a generated line using emotional contextualizers (a term coined by the project): a sentence or phrase conveying the emotion of the take, which serves as a guide for the model during inference.[10][11] Emotional contextualizers are representations of the emotional content of a sentence deduced via transfer-learned emoji embeddings using DeepMoji, a deep neural network sentiment analysis algorithm developed by the MIT Media Lab in 2017.[18][19] DeepMoji was trained on 1.2 billion emoji occurrences in Twitter data from 2013 to 2017, and has been found to outperform human subjects in correctly identifying sarcasm in tweets and other online modes of communication.[20][21][22]
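The contextualizer mechanism described above can be sketched in miniature. In the toy below, a tiny hand-made valence/arousal lexicon stands in for DeepMoji's learned emoji embeddings; every name and value is invented for illustration, since 15.ai's actual models are not public.

```python
# Illustrative sketch only: a "contextualizer" sentence is reduced to an
# emotion vector that would guide synthesis. The lexicon is a hypothetical
# stand-in for a DeepMoji-style sentiment encoder.

EMOTION_LEXICON = {  # word -> (valence, arousal), invented for the demo
    "happy": (1.0, 0.0), "great": (0.9, 0.2), "love": (0.9, 0.1),
    "sad": (-0.8, -0.3), "terrible": (-1.0, -0.2), "angry": (-0.6, 0.9),
}

def emotion_embedding(contextualizer):
    """Average the (valence, arousal) scores of recognized words."""
    hits = [EMOTION_LEXICON[w] for w in contextualizer.lower().split()
            if w in EMOTION_LEXICON]
    if not hits:
        return (0.0, 0.0)  # neutral fallback for unrecognized text
    n = len(hits)
    return (sum(v for v, _ in hits) / n, sum(a for _, a in hits) / n)

def synthesize(text, contextualizer=None):
    """Pretend synthesis call: pairs the line with its guiding emotion."""
    emotion = emotion_embedding(contextualizer) if contextualizer else (0.0, 0.0)
    return {"text": text, "emotion": emotion}

# The same line can be given different reads by changing the contextualizer.
angry_take = synthesize("I can't believe it.", contextualizer="I am angry")
happy_take = synthesize("I can't believe it.", contextualizer="I am so happy")
```

The point of the sketch is only the interface: the emotion guide is supplied separately from the text to be spoken, so one input line can yield many emotional takes.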

15.ai uses a multi-speaker model—hundreds of voices are trained concurrently rather than sequentially, decreasing the required training time and enabling the model to learn and generalize shared emotional context, even for voices with no exposure to such emotional context.[23] Consequently, the entire lineup of characters in the application is powered by a single trained model, as opposed to multiple single-speaker models trained on different datasets.[24] The lexicon used by 15.ai has been scraped from a variety of Internet sources, including Oxford Dictionaries, Wiktionary, the CMU Pronouncing Dictionary, 4chan, Reddit, and Twitter. Pronunciations of unfamiliar words are automatically deduced using phonological rules learned by the deep learning model.[10]
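The lookup-then-deduce behavior described above (dictionary consultation first, learned phonological rules for unfamiliar words) can be sketched as a two-stage function. The tiny lexicon and the naive letter-to-phoneme rule table below are illustrative stand-ins, not the project's actual data or learned model.

```python
# Hedged sketch of pronunciation resolution: known words come from a
# CMU-style ARPABET lexicon; unknown words fall back to (here, trivially
# naive) grapheme-to-phoneme rules standing in for the learned model.

LEXICON = {  # tiny CMU-style excerpt, for demonstration only
    "read": ["R", "IY1", "D"],
    "arpabet": ["AA1", "R", "P", "AH0", "B", "EH2", "T"],
}

RULES = {"a": "AE1", "b": "B", "c": "K", "d": "D", "e": "EH1", "f": "F",
         "g": "G", "h": "HH", "i": "IH1", "k": "K", "l": "L", "m": "M",
         "n": "N", "o": "AO1", "p": "P", "r": "R", "s": "S", "t": "T",
         "u": "AH1", "v": "V", "w": "W", "z": "Z"}

def pronounce(word):
    """Dictionary lookup first; fall back to rule-based grapheme mapping."""
    word = word.lower()
    if word in LEXICON:
        return LEXICON[word]
    return [RULES[ch] for ch in word if ch in RULES]

known = pronounce("ARPABET")   # lexicon hit
novel = pronounce("blorp")     # unseen word, handled by the fallback
```

A real system would replace the `RULES` table with a trained sequence model, but the control flow (lexicon hit or deduced pronunciation) is the same.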

The application supports a simplified version of a set of English phonetic transcriptions known as ARPABET to correct mispronunciations or to account for heteronyms—words that are spelled the same but are pronounced differently (such as the word read, which can be pronounced as either /ˈrɛd/ or /ˈriːd/ depending on its tense). While the original ARPABET codes developed in the 1970s by the Advanced Research Projects Agency support 50 unique symbols to designate and differentiate between English phonemes,[25] the CMU Pronouncing Dictionary's ARPABET convention (the set of transcription codes followed by 15.ai[10]) reduces the symbol set to 39 phonemes by combining allophonic phonetic realizations into a single standard (e.g. AXR/ER; UX/UW) and using multiple common symbols together to replace syllabic consonants (e.g. EN/AH0 N).[26][27] ARPABET strings can be invoked in the application by wrapping the string of phonemes in curly braces within the input box (e.g. {AA1 R P AH0 B EH2 T} to denote /ˈɑːrpəˌbɛt/, the pronunciation of the word ARPABET).[10]

The following is a table of phonemes used by 15.ai and the CMU Pronouncing Dictionary:[28]

Vowels
ARPABET  Rspl.    IPA     Example
AA       ah       ɑ       odd
AE       a        æ       at
AH0      ə        ə       about
AH       u, uh    ʌ       hut
AO       aw       ɔ       ought
AW       ow       aʊ      cow
AY       eye      aɪ      hide
EH       e, eh    ɛ       Ed
ER       ur, ər   ɝ, ɚ    hurt
EY       ay       eɪ      ate
IH       i, ih    ɪ       it
IY       ee       i       eat
OW       oh       oʊ      oat
OY       oy       ɔɪ      toy
UH       uu       ʊ       hood
UW       oo       u       two

Stress
ARPABET  Description
0        No stress
1        Primary stress
2        Secondary stress

Consonants
ARPABET  Rspl.    IPA     Example
B        b        b       be
CH       ch, tch  tʃ      cheese
D        d        d       dee
DH       dh       ð       thee
F        f        f       fee
G        g        ɡ       green
HH       h        h       he
JH       j        dʒ      gee
K        k        k       key
L        l        l       lee
M        m        m       me
N        n        n       knee
NG       ng       ŋ       ping
P        p        p       pee
R        r        r       read
S        s, ss    s       sea
SH       sh       ʃ       she
T        t        t       tea
TH       th       θ       theta
V        v        v       vee
W        w, wh    w       we
Y        y        j       yield
Z        z        z       zee
ZH       zh       ʒ       seizure
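The curly-brace convention for ARPABET input described above can be sketched as a small tokenizer. This is an illustrative reconstruction of the input syntax only; 15.ai's actual parser is not public.

```python
import re

# Split an input line into plain-text chunks and {...}-wrapped ARPABET
# overrides, as in the convention {AA1 R P AH0 B EH2 T} described above.
ARPA_SPAN = re.compile(r"\{([^}]*)\}")

def tokenize(line):
    """Return a list of ('text', str) and ('arpabet', [phonemes]) chunks."""
    out, pos = [], 0
    for m in ARPA_SPAN.finditer(line):
        if m.start() > pos:
            out.append(("text", line[pos:m.start()]))
        out.append(("arpabet", m.group(1).split()))  # phonemes are space-separated
        pos = m.end()
    if pos < len(line):
        out.append(("text", line[pos:]))
    return out

chunks = tokenize("The word {AA1 R P AH0 B EH2 T} has its own alphabet.")
```

Downstream, the text chunks would go through normal grapheme-to-phoneme conversion while the ARPABET chunks bypass it verbatim.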

Background

Speech synthesis

A stack of dilated causal convolutional layers used in DeepMind's WaveNet.[3]

In 2016, with the proposal of DeepMind's WaveNet, deep-learning-based models for speech synthesis began to gain popularity as a method of modeling waveforms and generating human-like speech.[29][30][3] Tacotron2, a neural network architecture for speech synthesis developed by Google AI, was published in 2018 and required tens of hours of audio data to produce intelligible speech; when trained on 2 hours of speech, the model was able to produce intelligible speech with mediocre quality, and when trained on 36 minutes of speech, the model was unable to produce intelligible speech.[31][32]

For years, reducing the amount of data required to train a realistic high-quality text-to-speech model has been a primary goal of scientific researchers in the field of deep learning speech synthesis.[33][34] The developer of 15.ai claims that as little as 15 seconds of data is sufficient to clone a voice up to human standards, a significant reduction in the amount of data required.[35]

Copyrighted material in deep learning

A landmark case between Google and the Authors Guild in 2013 ruled that Google Books—a service that searches the full text of printed copyrighted books—was transformative, thus meeting all requirements for fair use.[36] This case set an important legal precedent for the field of deep learning and artificial intelligence: using copyrighted material to train a discriminative model or a non-commercial generative model was deemed legal. The legality of commercial generative models trained using copyrighted material is still under debate; due to the black-box nature of machine learning models, any allegations of copyright infringement via direct competition would be difficult to prove.[citation needed]

Development

15.ai was designed and created by an anonymous research scientist affiliated with the Massachusetts Institute of Technology known by the alias 15.[citation needed]

The algorithm used by the project to facilitate the cloning of voices with minimal viable data has been dubbed DeepThroat[37] (a double entendre in reference to speech synthesis using deep neural networks and the sexual act of deep-throating). The project and algorithm—initially conceived as part of MIT's Undergraduate Research Opportunities Program—had been in development for years before the first release of the application.[10]

The Pony Preservation Project from 4chan's /mlp/ board has been integral to the development of 15.ai.[38]

The developer has also worked closely with the Pony Preservation Project from /mlp/, the My Little Pony board of 4chan. The Pony Preservation Project, which began in 2019, is a "collaborative effort by /mlp/ to build and curate pony datasets" with the aim of creating applications in artificial intelligence.[39][40] The Friendship Is Magic voices on 15.ai were trained on a large dataset crowdsourced by the Pony Preservation Project: audio and dialogue from the show and related media—including all nine seasons of Friendship Is Magic, the 2017 movie, spinoffs, leaks, and various other content voiced by the same voice actors—were parsed, hand-transcribed, and processed to remove background noise.

Reception

15.ai has been met with largely positive reception. Liana Ruppert of Game Informer described 15.ai as "simplistically brilliant."[5] Lauren Morton of Rock, Paper, Shotgun and Natalie Clayton of PCGamer called it "fascinating,"[7][6] and José Villalobos of LaPS4 wrote that it "works as easy as it looks."[16][b] Users praised the ability to easily create audio of popular characters that sound believable to those unaware that the voices had been synthesized by artificial intelligence: Zack Zwiezen of Kotaku reported that "[his] girlfriend was convinced it was a new voice line from GLaDOS' voice actor, Ellen McLain".[4]

The project has also been acclaimed overseas, especially in Japan. Takayuki Furushima of Den Fami Nico Gamer described 15.ai as "like magic," and Yuki Kurosawa of Automaton Media called it "revolutionary."[11][10]

Impact

Fandom content creation

15.ai has been frequently used for content creation in various fandoms, including the My Little Pony: Friendship Is Magic fandom, the Team Fortress 2 fandom, the Portal fandom, and the SpongeBob SquarePants fandom, with numerous videos and projects containing speech from 15.ai having gone viral.[4][5]

The My Little Pony: Friendship Is Magic fandom has seen a resurgence in video and musical content creation as a direct result, inspiring a new genre of fan-created content assisted by artificial intelligence. Some fanfictions have been adapted into fully voiced "episodes": The Tax Breaks is a 17-minute long animated video rendition of a fan-written story published in 2014 that uses voices generated from 15.ai with sound effects and audio editing, emulating the episodic style of the early seasons of Friendship Is Magic.[41][42]

Viral videos from the Team Fortress 2 fandom that feature voices from 15.ai include Spy is a Furry (which has gained over 3 million total views on YouTube across multiple videos[yt 1][yt 2][yt 3]) and The RED Bread Bank, both of which have inspired Source Filmmaker animated video renditions.[10] Other fandoms have used voices from 15.ai to produce viral videos. As of July 2022, the viral video Among Us Struggles (which uses voices from Friendship Is Magic) has over 5.5 million views on YouTube;[yt 4] YouTubers, TikTokers, and Twitch streamers have also used 15.ai for their videos, such as FitMC's video on the history of 2b2t—one of the oldest running Minecraft servers—and datpon3's TikTok video featuring the main characters of Friendship Is Magic, which have 1.4 million and 510 thousand views, respectively.[yt 5][tt 1]

Some users have created AI virtual assistants using 15.ai and external voice control software. One user on Twitter created a personal desktop assistant inspired by GLaDOS using 15.ai-generated dialogue in tandem with voice control system VoiceAttack, with the program being able to boot up applications, utter corresponding random dialogues, and thank the user in response to actions.[10][11]

Troy Baker / Voiceverse NFT plagiarism scandal

Troy Baker (@TroyBakerVA)

I’m partnering with @VoiceverseNFT to explore ways where together we might bring new tools to new creators to make new things, and allow everyone a chance to own & invest in the IP’s they create. We all have a story to tell. You can hate. Or you can create. What'll it be?

January 14, 2022[tweet 1]

In December 2021, the developer of 15.ai posted on Twitter that they had no interest in incorporating non-fungible tokens (NFTs) into their work.[9][13][tweet 2]

On January 14, 2022, it was discovered that Voiceverse NFT, a company that video game and anime dub voice actor Troy Baker announced his partnership with, had plagiarized voice lines generated from 15.ai as part of their marketing campaign.[12][13][14] Log files showed that Voiceverse had generated audio of Twilight Sparkle and Rainbow Dash from the show My Little Pony: Friendship Is Magic using 15.ai, pitched them up to make them sound unrecognizable from the original voices, and appropriated them without proper credit to falsely market their own platform—a violation of 15.ai's terms of service.[35][9][14]

15 (@fifteenai)

I've been informed that the aforementioned NFT vocal synthesis is actively attempting to appropriate my work for their own benefit. After digging through the log files, I have evidence that some of the voices that they are taking credit for were indeed generated from my own site.

January 14, 2022[tweet 3]

Voiceverse Origins (@VoiceverseNFT)

Hey @fifteenai we are extremely sorry about this. The voice was indeed taken from your platform, which our marketing team used without giving proper credit. Chubbiverse team has no knowledge of this. We will make sure this never happens again.

January 14, 2022[tweet 4]

15 (@fifteenai)

Go fuck yourself.

January 14, 2022[tweet 5]

A week prior to the announcement of the partnership with Baker, Voiceverse made a (now-deleted) Twitter post directly responding to a (now-deleted) video posted by Chubbiverse—an NFT platform with which Voiceverse had partnered—showcasing an AI-generated voice, claiming that the voice had been generated using Voiceverse's platform and remarking "I wonder who created the voice for this? ;)"[12][tweet 6] A few hours after news of the partnership broke, the developer of 15.ai—alerted by another Twitter user asking for his opinion on the partnership, which he speculated "sounds like a scam"[tweet 7]—posted screenshots of log files proving that a user of the website (with their IP address redacted) had submitted inputs of the exact words spoken by the AI voice in the video posted by Chubbiverse,[tweet 8] and subsequently responded to Voiceverse's claim directly, tweeting "Certainly not you :)".[35][13][tweet 9]

Following the tweet, Voiceverse admitted to plagiarizing voices from 15.ai for use in their own platform, claiming that their marketing team had used the project without giving proper credit and that the "Chubbiverse team [had] no knowledge of this." In response to the admission, 15 tweeted "Go fuck yourself."[12][13][14][35] The final tweet went viral, accruing over 75,000 total likes and 13,000 total retweets across multiple reposts.[tweet 10][tweet 11][tweet 12]

The initial partnership between Baker and Voiceverse was met with severe backlash and universally negative reception.[12] Critics highlighted the environmental impact of and potential for exit scams associated with NFT sales.[43] Commentators also pointed out the irony in Baker's initial Tweet announcing the partnership, which ended with "You can hate. Or you can create. What'll it be?", hours before the public revelation that the company in question had resorted to theft instead of creating their own product. Baker responded that he appreciated people sharing their thoughts and their responses were "giving [him] a lot to think about."[44][45] He also acknowledged that the "hate/create" part in his initial Tweet might have been "a bit antagonistic," and asked fans on social media to forgive him.[13][46] Two weeks later, on January 31, Baker announced that he would discontinue his partnership with Voiceverse.[47][48]

Reactions from voice actors

Some voice actors have publicly decried the use of voice cloning technology. Cited reasons include concerns about impersonation and fraud, unauthorized use of an actor's voice in pornography, and the potential of AI being used to make voice actors obsolete.[9]

See also

Notes

  1. ^ The phrase "high-fidelity" in TTS research is often used to describe vocoders that are able to reconstruct waveforms with very little distortion, and is not simply synonymous with "high quality." See the papers for HiFi-GAN,[1] GAN-TTS,[2] and parallel WaveNet[3] for unbiased examples of this usage of terminology.
  2. ^ Translated from original quote written in Spanish: "La dirección es 15.AI y funciona tan fácil como parece."[16]

References

Notes
  1. ^ Kong, Jungil (2020). "HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis". arXiv:2010.05646v2 [cs].
  2. ^ Binkowski, Mikołaj (2019). "High Fidelity Speech Synthesis with Adversarial Networks". arXiv:1909.11646v2 [cs].
  3. ^ a b c van den Oord, Aäron; Li, Yazhe; Babuschkin, Igor (November 12, 2017). "High-fidelity speech synthesis with WaveNet". DeepMind. Archived from the original on June 18, 2022. Retrieved June 5, 2022.
  4. ^ a b c d e Zwiezen, Zack (January 18, 2021). "Website Lets You Make GLaDOS Say Whatever You Want". Kotaku. Archived from the original on January 17, 2021. Retrieved January 18, 2021.
  5. ^ a b c d Ruppert, Liana (January 18, 2021). "Make Portal's GLaDOS And Other Beloved Characters Say The Weirdest Things With This App". Game Informer. Archived from the original on January 18, 2021. Retrieved January 18, 2021.
  6. ^ a b c Clayton, Natalie (January 19, 2021). "Make the cast of TF2 recite old memes with this AI text-to-speech tool". PC Gamer. Archived from the original on January 19, 2021. Retrieved January 19, 2021.
  7. ^ a b Morton, Lauren (January 18, 2021). "Put words in game characters' mouths with this fascinating text to speech tool". Rock, Paper, Shotgun. Archived from the original on January 18, 2021. Retrieved January 18, 2021.
  8. ^ Ng, Andrew (April 1, 2020). "Voice Cloning for the Masses". The Batch. Archived from the original on August 7, 2020. Retrieved April 5, 2020.
  9. ^ a b c d Lopez, Ule (January 16, 2022). "Troy Baker-backed NFT firm admits using voice lines taken from another service without permission". Wccftech. Archived from the original on January 16, 2022. Retrieved June 7, 2022.
  10. ^ a b c d e f g h i j Kurosawa, Yuki (January 19, 2021). "ゲームキャラ音声読み上げソフト「15.ai」公開中。『Undertale』や『Portal』のキャラに好きなセリフを言ってもらえる". AUTOMATON. Archived from the original on January 19, 2021. Retrieved January 19, 2021.
  11. ^ a b c d e Yoshiyuki, Furushima (January 18, 2021). "『Portal』のGLaDOSや『UNDERTALE』のサンズがテキストを読み上げてくれる。文章に込められた感情まで再現することを目指すサービス「15.ai」が話題に". Denfaminicogamer. Archived from the original on January 18, 2021. Retrieved January 18, 2021.
  12. ^ a b c d e Williams, Demi (January 18, 2022). "Voiceverse NFT admits to taking voice lines from non-commercial service". NME. Archived from the original on January 18, 2022. Retrieved January 18, 2022.
  13. ^ a b c d e f Wright, Steve (January 17, 2022). "Troy Baker-backed NFT company admits to using content without permission". Stevivor. Archived from the original on January 17, 2022. Retrieved January 17, 2022.
  14. ^ a b c d Henry, Joseph (January 18, 2022). "Troy Baker's Partner NFT Company Voiceverse Reportedly Steals Voice Lines From 15.ai". Tech Times. Archived from the original on January 26, 2022. Retrieved February 14, 2022.
  15. ^ @fifteenai (February 23, 2023). "If all goes well, the next update should be the culmination of a year and a half of nonstop work put into a huge number of fixes and major improvements to the algorithm. Just give me a bit more time – it should be worth it" (Tweet) – via Twitter.
  16. ^ a b c Villalobos, José (January 18, 2021). "Descubre 15.AI, un sitio web en el que podrás hacer que GlaDOS diga lo que quieras". LaPS4. Archived from the original on January 18, 2021. Retrieved January 18, 2021.
  17. ^ Moto, Eugenio (January 20, 2021). "15.ai, el sitio que te permite usar voces de personajes populares para que digan lo que quieras". Yahoo! Finance. Archived from the original on March 8, 2022. Retrieved January 20, 2021.
  18. ^ Felbo, Bjarke (2017). "Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm". Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pp. 1615–1625. arXiv:1708.00524. doi:10.18653/v1/D17-1169. S2CID 2493033.
  19. ^ Corfield, Gareth (August 7, 2017). "A sarcasm detector bot? That sounds absolutely brilliant. Definitely". The Register. Archived from the original on June 2, 2022. Retrieved June 2, 2022.
  20. ^ "An Algorithm Trained on Emoji Knows When You're Being Sarcastic on Twitter". MIT Technology Review. August 3, 2017. Archived from the original on June 2, 2022. Retrieved June 2, 2022.
  21. ^ "Emojis help software spot emotion and sarcasm". BBC. August 7, 2017. Archived from the original on June 2, 2022. Retrieved June 2, 2022.
  22. ^ Lowe, Josh (August 7, 2017). "Emoji-Filled Mean Tweets Help Scientists Create Sarcasm-Detecting Bot That Could Uncover Hate Speech". Newsweek. Archived from the original on June 2, 2022. Retrieved June 2, 2022.
  23. ^ Valle, Rafael (2020). "Mellotron: Multispeaker expressive voice synthesis by conditioning on rhythm, pitch and global style tokens". arXiv:1910.11997 [eess].
  24. ^ Cooper, Erica (2020). "Zero-Shot Multi-Speaker Text-To-Speech with State-of-the-art Neural Speaker Embeddings". arXiv:1910.10838 [eess].
  25. ^ Klautau, Aldebaro (2001). "ARPABET and the TIMIT alphabet" (PDF). Archived from the original (PDF) on June 3, 2016. Retrieved September 8, 2017.
  26. ^ "Phonetics" (PDF). Columbia University. 2017. Archived (PDF) from the original on June 19, 2022. Retrieved June 11, 2022.
  27. ^ Loots, Linsen (March 2010). Data-Driven Augmentation of Pronunciation Dictionaries (MSc). Stellenbosch University, Department of Electrical & Electronic Engineering. CiteSeerX 10.1.1.832.2872. Archived from the original on June 11, 2022. Retrieved June 11, 2022. Table 3.2
  28. ^ "The CMU Pronouncing Dictionary". CMU Pronouncing Dictionary. July 16, 2015. Archived from the original on June 3, 2022. Retrieved June 4, 2022.
  29. ^ Hsu, Wei-Ning (2018). "Hierarchical Generative Modeling for Controllable Speech Synthesis". arXiv:1810.07217 [cs.CL].
  30. ^ Habib, Raza (2019). "Semi-Supervised Generative Modeling for Controllable Speech Synthesis". arXiv:1910.01709 [cs.CL].
  31. ^ "Audio samples from "Semi-Supervised Training for Improving Data Efficiency in End-to-End Speech Synthesis"". August 30, 2018. Archived from the original on November 11, 2020. Retrieved June 5, 2022.
  32. ^ Shen, Jonathan; Pang, Ruoming; Weiss, Ron J.; Schuster, Mike; Jaitly, Navdeep; Yang, Zongheng; Chen, Zhifeng; Zhang, Yu; Wang, Yuxuan; Skerry-Ryan, RJ; Saurous, Rif A.; Agiomyrgiannakis, Yannis; Wu, Yonghui (2018). "Natural TTS Synthesis by Conditioning WaveNet on Mel-Spectrogram Predictions". arXiv:1712.05884 [cs.CL].
  33. ^ Chung, Yu-An (2018). "Semi-Supervised Training for Improving Data Efficiency in End-to-End Speech Synthesis". arXiv:1808.10128 [cs.CL].
  34. ^ Ren, Yi (2019). "Almost Unsupervised Text to Speech and Automatic Speech Recognition". arXiv:1905.06791 [cs.CL].
  35. ^ a b c d Phillips, Tom (January 17, 2022). "Troy Baker-backed NFT firm admits using voice lines taken from another service without permission". Eurogamer. Archived from the original on January 17, 2022. Retrieved January 17, 2022.
  36. ^ Authors Guild v. Google, Inc., — F.3d — (2d Cir. 2015) (temporary cites: 2015 U.S. App. LEXIS 17988; slip opinion (October 16, 2015))
  37. ^ "15.ai – About". 15.ai. February 20, 2022. Archived from the original on October 6, 2021. Retrieved February 20, 2022.
  38. ^ Branwen, Gwern (March 6, 2020). ""15.ai"⁠, 15, Pony Preservation Project". Gwern.net. Gwern. Archived from the original on March 18, 2022. Retrieved June 17, 2022.
  39. ^ Scotellaro, Shaun (March 14, 2020). "Neat "Pony Preservation Project" Using Neural Networks to Create Pony Voices". Equestria Daily. Archived from the original on June 23, 2021. Retrieved June 11, 2022.
  40. ^ "Pony Preservation Project (Thread 108)". 4chan. Desuarchive. February 20, 2022. Retrieved February 20, 2022.
  41. ^ Scotellaro, Shaun (May 15, 2022). "Full Simple Animated Episode – The Tax Breaks (Twilight)". Equestria Daily. Archived from the original on May 21, 2022. Retrieved May 28, 2022.
  42. ^ The Terribly Taxing Tribulations of Twilight Sparkle. April 27, 2014. Archived from the original on June 30, 2022. Retrieved April 28, 2022.
  43. ^ Phillips, Tom (January 14, 2022). "Video game voice actor Troy Baker is now promoting NFTs". Eurogamer. Archived from the original on January 14, 2022. Retrieved January 14, 2022.
  44. ^ McWhertor, Michael (January 14, 2022). "The Last of Us voice actor wants to sell 'voice NFTs,' drawing ire". Polygon. Archived from the original on January 14, 2022. Retrieved January 14, 2022.
  45. ^ "Last Of Us Voice Actor Pisses Everyone Off With NFT Push". Kotaku. January 14, 2022. Archived from the original on January 14, 2022. Retrieved January 14, 2022.
  46. ^ Purslow, Matt (January 14, 2022). "Troy Baker Is Working With NFTs, but Fans Are Unimpressed". IGN. Archived from the original on January 14, 2022. Retrieved January 14, 2022.
  47. ^ Strickland, Derek (January 31, 2022). "Last of Us actor Troy Baker heeds fans, abandons NFT plans". Tweaktown. Archived from the original on January 31, 2022. Retrieved January 31, 2022.
  48. ^ Peters, Jay (January 31, 2022). "The voice of Joel from The Last of Us steps away from NFT project after outcry". The Verge. Archived from the original on February 4, 2022. Retrieved February 4, 2022.
Tweets
  1. ^ @TroyBakerVA (January 14, 2022). "I'm partnering with @VoiceverseNFT to explore ways where together we might bring new tools to new creators to make new things, and allow everyone a chance to own & invest in the IP's they create. We all have a story to tell. You can hate. Or you can create. What'll it be?" (Tweet) – via Twitter.
  2. ^ @fifteenai (December 12, 2021). "I have no interest in incorporating NFTs into any aspect of my work. Please stop asking" (Tweet) – via Twitter.
  3. ^ @fifteenai (January 14, 2022). "I've been informed that the aforementioned NFT vocal synthesis is actively attempting to appropriate my work for their own benefit. After digging through the log files, I have evidence that some of the voices that they are taking credit for were indeed generated from my own site" (Tweet) – via Twitter.
  4. ^ @VoiceverseNFT (January 14, 2022). "Hey @fifteenai we are extremely sorry about this. The voice was indeed taken from your platform, which our marketing team used without giving proper credit. Chubbiverse team has no knowledge of this. We will make sure this never happens again" (Tweet) – via Twitter.
  5. ^ @fifteenai (January 14, 2022). "Go fuck yourself" (Tweet) – via Twitter.
  6. ^ @VoiceverseNFT (January 7, 2022). "I wonder who created the voice for this? ;)" (Tweet). Archived from the original on January 7, 2022 – via Twitter.
  7. ^ @fifteenai (January 14, 2022). "Sounds like a scam" (Tweet) – via Twitter.
  8. ^ @fifteenai (January 14, 2022). "Give proper credit or remove this post" (Tweet) – via Twitter.
  9. ^ @fifteenai (January 14, 2022). "Certainly not you :)" (Tweet) – via Twitter.
  10. ^ @fifteenai (January 14, 2022). "Go fuck yourself" (Tweet) – via Twitter.
  11. ^ @yongyea (January 14, 2022). "The NFT scheme that Troy Baker is promoting is already finding itself in trouble after stealing and profiting off of somebody else's work. Who could've seen this coming" (Tweet) – via Twitter.
  12. ^ @BronyStruggle (January 15, 2022). "actual" (Tweet) – via Twitter.
YouTube (referenced for view counts and usage of 15.ai only)
  1. ^ "SPY IS A FURRY". YouTube. January 17, 2021. Archived from the original on June 13, 2022. Retrieved June 14, 2022.
  2. ^ "Spy is a Furry Animated". YouTube. Archived from the original on June 14, 2022. Retrieved June 14, 2022.
  3. ^ "[SFM] – Spy's Confession – [TF2 15.ai]". YouTube. January 15, 2021. Archived from the original on June 30, 2022. Retrieved June 14, 2022.
  4. ^ "Among Us Struggles". YouTube. September 21, 2020. Retrieved July 15, 2022.
  5. ^ "The UPDATED 2b2t Timeline (2010–2020)". YouTube. March 14, 2020. Archived from the original on June 1, 2022. Retrieved June 14, 2022.
TikTok
  1. ^ "She said " 👹 "". TikTok. Retrieved July 15, 2022.