Synthetic media

Synthetic media (also known as AI-generated media,[1][2] generative media,[3] personalized media,[4] and colloquially as deepfakes[5]) is a catch-all term for the artificial production, manipulation, and modification of data and media by automated means, especially through the use of artificial intelligence algorithms, for example in order to mislead people or change an original meaning.[6][7][8][9] Synthetic media as a field has grown rapidly since the creation of generative adversarial networks, primarily through the rise of deepfakes as well as music synthesis, text generation, human image synthesis, speech synthesis, and more.[9] Though experts use the term "synthetic media," the press often refers to individual methods by their own names or describes them by analogy to deepfakes (e.g. "deepfakes for text" for natural-language generation, "deepfakes for voices" for neural voice cloning).[10][11] The field drew significant attention starting in 2017, when Vice reported on the emergence of pornographic videos altered with AI algorithms to insert the faces of famous actresses.[12] Fears surrounding synthetic media include its potential to supercharge fake news, spread misinformation, and foster distrust of reality,[12] the mass automation of creative and journalistic jobs, and potentially a complete retreat into AI-generated fantasy worlds.[13] Synthetic media is an applied form of artificial imagination.[12]

History

Pre-1950s

Maillardet's automaton drawing a picture

Synthetic media as a process of automated art dates back to the automata of ancient Greek civilization, where inventors such as Daedalus and Hero of Alexandria designed machines capable of writing text, generating sounds, and playing music.[14][15] The tradition of automaton-based entertainment flourished throughout history, with mechanical beings' seemingly magical ability to mimic human creativity drawing crowds across Europe,[16] China,[17] and India.[18] Other automated novelties, such as Johann Philipp Kirnberger's "Musikalisches Würfelspiel" (Musical Dice Game) of 1757, also amused audiences.[19]

Despite the technical sophistication of these machines, however, none was capable of generating original content: their output was entirely determined by their mechanical designs.

Rise of artificial intelligence

The field of AI research was born at a workshop at Dartmouth College in 1956,[20] spurring the use of digital computers as an artistic medium and the rise of generative art. Initial experiments in AI-generated art included the Illiac Suite, a 1957 composition for string quartet which is generally agreed to be the first score composed by an electronic computer.[21] Lejaren Hiller, in collaboration with Leonard Isaacson, programmed the ILLIAC I computer at the University of Illinois at Urbana–Champaign (where both composers were professors) to generate compositional material for his String Quartet No. 4.

In 1960, the Russian researcher R. Kh. Zaripov published the world's first paper on algorithmic music composition, using the "Ural-1" computer.[22]

In 1965, inventor Ray Kurzweil premiered a piano piece created by a computer capable of recognizing patterns in various compositions. The computer analyzed these patterns and used them to create novel melodies. It was debuted on Steve Allen's I've Got a Secret program, where it stumped the hosts until panelist Henry Morgan guessed Kurzweil's secret.[23]

By 1989, artificial neural networks had been used to model certain aspects of creativity. Peter Todd (1989) first trained a neural network to reproduce musical melodies from a training set of musical pieces, then used a change algorithm to modify the network's input parameters, causing the network to randomly generate new music in a highly uncontrolled manner.[24][25]

In 2014, Ian Goodfellow and his colleagues developed a new class of machine learning systems: generative adversarial networks (GAN).[26] Two neural networks contest with each other in a game (in the sense of game theory, often but not always in the form of a zero-sum game). Given a training set, this technique learns to generate new data with the same statistics as the training set. For example, a GAN trained on photographs can generate new photographs that look at least superficially authentic to human observers, having many realistic characteristics. Though originally proposed as a form of generative model for unsupervised learning, GANs have also proven useful for semi-supervised learning,[27] fully supervised learning,[28] and reinforcement learning.[29] In a 2016 seminar, Yann LeCun described GANs as "the coolest idea in machine learning in the last twenty years".[30]
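The adversarial game can be sketched in miniature. The toy below (an illustrative example, not drawn from any particular GAN implementation) pits a one-parameter generator against a linear discriminator on one-dimensional data; after alternating gradient steps, the generator's output distribution drifts toward the real data.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

theta = 0.0      # generator parameter: G(z) = theta + z, z ~ N(0, 1)
w, b = 0.0, 0.0  # discriminator: D(x) = sigmoid(w*x + b)
lr = 0.05

for _ in range(3000):
    real = rng.normal(4.0, 0.5, size=64)   # "training set": N(4, 0.5)
    fake = theta + rng.normal(size=64)     # generator samples

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    g_real = sigmoid(w * real + b) - 1.0   # d(cross-entropy)/d(logit), label 1
    g_fake = sigmoid(w * fake + b)         # d(cross-entropy)/d(logit), label 0
    w -= lr * np.mean(g_real * real + g_fake * fake)
    b -= lr * np.mean(g_real + g_fake)

    # Generator step: non-saturating loss, minimize -log D(fake).
    d_fake = sigmoid(w * fake + b)
    theta -= lr * np.mean(-(1.0 - d_fake) * w)

# theta has moved from 0 toward the real mean of 4
```

At the game's equilibrium the discriminator can no longer separate real from generated samples and its gradient vanishes, which is the intuition behind "generating new data with the same statistics as the training set."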

In 2017, Google unveiled transformers,[31] a new type of neural network architecture specialized for language modeling that enabled rapid advances in natural language processing. Transformers proved capable of high levels of generalization, allowing networks such as GPT-3 and Jukebox from OpenAI to synthesize text and music, respectively, at a level approaching humanlike ability.[32][33]
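The transformer's core operation is scaled dot-product attention, which can be sketched in a few lines of NumPy (the matrix sizes here are arbitrary illustrations):

```python
import numpy as np

def attention(Q, K, V):
    # Each query scores every key; softmax turns the scaled scores into
    # weights, and the output is a weighted mixture of the value vectors.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # rows sum to 1
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out = attention(Q, K, V)  # one 8-dimensional output per query
```

Because every query attends to every position at once, this operation parallelizes well, one reason transformers displaced recurrent networks in language modeling.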

Branches of synthetic media

Deepfakes

Deepfakes (a portmanteau of "deep learning" and "fake"[34]) are the most prominent form of synthetic media.[35][36] They are media that take a person in an existing image or video and replace them with someone else's likeness using artificial neural networks.[37] They often combine and superimpose existing media onto source media using machine learning techniques known as autoencoders and generative adversarial networks (GANs).[38][39] Deepfakes have garnered widespread attention for their uses in celebrity pornographic videos, revenge porn, fake news, hoaxes, and financial fraud.[40][41][42][43] This has elicited responses from both industry and government to detect and limit their use.[44][45]
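The autoencoder-based face-swap layout can be illustrated structurally: one shared encoder compresses any face into a latent code, a separate decoder is trained per identity, and decoding identity A's code with identity B's decoder produces the swap. The sketch below uses random, untrained weights purely to show the data flow; all sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, LATENT = 64, 8  # flattened toy "image" size and bottleneck size

# One shared encoder, one decoder per identity (weights here are random
# placeholders; a real system trains them on each person's face images).
enc = 0.1 * rng.normal(size=(LATENT, DIM))
dec_a = 0.1 * rng.normal(size=(DIM, LATENT))
dec_b = 0.1 * rng.normal(size=(DIM, LATENT))

def encode(x):
    return np.tanh(enc @ x)   # face -> shared latent code

def decode(dec, h):
    return dec @ h            # latent code -> face for one identity

face_a = rng.normal(size=DIM)
# The swap: A's expression and pose, captured in the shared latent code,
# are rendered with B's decoder and hence B's likeness.
swapped = decode(dec_b, encode(face_a))
```

Because the encoder is shared, the latent code captures pose and expression rather than identity, which is what makes the decoder swap produce a convincing composite after training.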

The term deepfakes originated around the end of 2017 from a Reddit user named "deepfakes".[37] He, as well as others in the Reddit community r/deepfakes, shared deepfakes they created; many videos involved celebrities' faces swapped onto the bodies of actresses in pornographic videos,[37] while non-pornographic content included many videos with actor Nicolas Cage's face swapped into various movies.[46] In December 2017, Samantha Cole published an article about r/deepfakes in Vice that drew the first mainstream attention to deepfakes being shared in online communities.[47] Six weeks later, Cole wrote in a follow-up article about the large increase in AI-assisted fake pornography.[37] In February 2018, r/deepfakes was banned by Reddit for sharing involuntary pornography.[48] Other websites have also banned the use of deepfakes for involuntary pornography, including the social media platform Twitter and the pornography site Pornhub.[49] However, some websites have not yet banned deepfake content, including 4chan and 8chan.[50]

Non-pornographic deepfake content continues to grow in popularity, with videos from YouTube creators such as Ctrl Shift Face and Shamook reaching millions of views.[51][52][53][54] The Reddit community r/SFWdeepfakes was created specifically for the sharing of videos made for entertainment, parody, and satire.[55] A mobile application launched for iOS in March 2020 provides a platform for users to deepfake celebrity faces into videos in a matter of minutes.[56]

Image synthesis

Image synthesis is the artificial production of visual media, especially through algorithmic means. In the emerging world of synthetic media, the work of digital-image creation—once the domain of highly skilled programmers and Hollywood special-effects artists—could be automated by expert systems capable of producing realism on a vast scale.[57] One subfield is human image synthesis: the use of neural networks to make believable and even photorealistic renditions[58][59] of human likenesses, moving or still, which has effectively existed since the early 2000s. Many films using computer-generated imagery have featured synthetic images of human-like characters digitally composited onto real or simulated footage. Towards the end of the 2010s, deep learning was applied to synthesize images and video that look like humans with no human assistance once the training phase is complete, whereas the older route required massive amounts of human work. The website This Person Does Not Exist, published in February 2019 by Phillip Wang, showcases fully automated human image synthesis by endlessly generating images that resemble portrait photographs of human faces.[60]

Audio synthesis

Beyond deepfakes and image synthesis, audio is another area where AI is used to create synthetic media.[61] Synthesized audio can, in principle, reproduce any sound achievable through audio waveform manipulation, and might be used to generate stock sound effects or to simulate the sound of things that do not yet exist.[62]

Music generation

The capacity to generate music through autonomous, non-programmable means has been sought since antiquity, and with developments in artificial intelligence, two particular domains have arisen:

  1. The robotic creation of music, whether through machines playing instruments or sorting of virtual instrument notes (such as through MIDI files)[63][64]
  2. Directly generating waveforms that perfectly recreate instrumentation and human voice without the need for instruments, MIDI, or organizing premade notes.[65]

In 2016, Google DeepMind unveiled WaveNet, a deep generative model of raw audio waveforms that could learn which waveforms best resembled human speech and musical instrumentation.[66] Other networks capable of generating music through waveform manipulation include Tacotron (by Google) and Deep Voice (by Baidu).

Speech synthesis

Speech synthesis has been identified as a popular branch of synthetic media[67] and is defined as the artificial production of human speech. A computer system used for this purpose is called a speech computer or speech synthesizer, and can be implemented in software or hardware products. A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations like phonetic transcriptions into speech.[68]

Synthesized speech can be created by concatenating pieces of recorded speech that are stored in a database. Systems differ in the size of the stored speech units; a system that stores phones or diphones provides the largest output range, but may lack clarity. For specific usage domains, the storage of entire words or sentences allows for high-quality output. Alternatively, a synthesizer can incorporate a model of the vocal tract and other human voice characteristics to create a completely "synthetic" voice output.[69]
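The concatenative approach can be sketched with a toy unit database. Here sine tones stand in for the recorded diphones a real system would store, and adjacent units are joined with a short crossfade; every unit name, frequency, and duration is illustrative.

```python
import numpy as np

SR = 16_000  # sample rate in Hz

def tone(freq, dur=0.1):
    # Stand-in for one stored recording of a diphone.
    t = np.arange(int(SR * dur)) / SR
    return np.sin(2 * np.pi * freq * t)

# Toy "unit database" keyed by diphone name.
units = {"h-e": tone(220.0), "e-l": tone(247.0), "l-o": tone(262.0)}

def concatenate(seq, xfade=0.01):
    # Join stored units, crossfading over `xfade` seconds at each seam
    # to avoid audible clicks at the joins.
    n = int(SR * xfade)
    ramp = np.linspace(0.0, 1.0, n)
    out = units[seq[0]].copy()
    for name in seq[1:]:
        nxt = units[name]
        out[-n:] = out[-n:] * (1.0 - ramp) + nxt[:n] * ramp
        out = np.concatenate([out, nxt[n:]])
    return out

wave = concatenate(["h-e", "e-l", "l-o"])  # 0.28 s of audio at 16 kHz
```

The trade-off described above lives in the unit inventory: small units (phones, diphones) cover any utterance but need many seams like these, while stored whole words or sentences need few seams but only cover a fixed domain.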

Virtual assistants such as Siri and Alexa can turn text into audio and synthesize speech.[70] WaveNet, DeepMind's deep generative model of raw audio waveforms, specializes in human speech.[66] Tacotron and Lyrebird are other networks capable of generating believably human speech.[71]

Natural-language generation

Natural-language generation (NLG, sometimes synonymous with text synthesis) is a software process that transforms structured data into natural language. It can be used to produce long-form content for organizations, such as automated custom reports, as well as custom content for web or mobile applications. It can also generate short blurbs of text in interactive conversations (a chatbot), which might even be read aloud by a text-to-speech system. Interest in natural-language generation increased in 2019 after OpenAI unveiled GPT-2, an AI system that generates text matching its input in subject and tone.[72] GPT-2 is a transformer, a deep machine learning model introduced in 2017 and used primarily in the field of natural language processing (NLP).[73]
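At its simplest, this structured-data-to-text mapping is a template. The sketch below uses a hypothetical weather record and a single sentence template; the field names and wording are illustrative, and production NLG systems add content selection, aggregation, and grammatical realization on top of this idea.

```python
def realise(rec):
    # Surface realization: slot structured fields into a sentence template.
    return (f"In {rec['city']}, expect {rec['condition']} skies with a high "
            f"of {rec['high_c']} °C and a low of {rec['low_c']} °C.")

# A hypothetical structured record, e.g. one row from a weather feed.
record = {"city": "Oslo", "high_c": 21, "low_c": 12, "condition": "partly cloudy"}
sentence = realise(record)
```

Neural systems such as GPT-2 replace the hand-written template with text generated conditionally on the input, trading this approach's reliability for far greater fluency and coverage.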

Interactive media synthesis

AI-generated media can be used to develop hybrid graphics systems for video games, movies, and virtual reality,[74] as well as text-based games such as AI Dungeon 2, which uses GPT-2 or GPT-3 to allow for near-infinite possibilities that are otherwise impossible to create through traditional game-development methods.[75][76] Computer hardware company Nvidia has also developed AI-generated video game demos, such as a model that can generate an interactive game based on non-interactive videos.[77] Through procedural generation, synthetic media techniques may eventually be used to "help designers and developers create art assets, design levels, and even build entire games from the ground up."[77]

Concerns and controversies

Deepfakes have been used to misrepresent well-known politicians in videos. In separate videos, the face of the Argentine President Mauricio Macri has been replaced by the face of Adolf Hitler, and Angela Merkel's face has been replaced with Donald Trump's.[78][79]

In June 2019, a downloadable Windows and Linux application called DeepNude was released which used neural networks, specifically generative adversarial networks, to remove clothing from images of women. The app had both a paid and an unpaid version, with the paid version costing $50.[80][81] On June 27, the creators removed the application and refunded consumers.[82]

The US Senate held a hearing discussing the widespread impacts of synthetic media, including deepfakes, describing them as having the "potential to be used to undermine national security, erode public trust in our democracy and other nefarious reasons."[83]

In 2019, voice cloning technology was used to successfully impersonate a chief executive's voice and demand a fraudulent transfer of €220,000.[84] The case raised concerns about the lack of encryption methods over telephones as well as the unconditional trust often given to voice and to media in general.[85]

Starting in November 2019, multiple social media networks began banning synthetic media used for purposes of manipulation in the lead-up to the 2020 US Presidential Election.[86]

Potential uses and impacts

Synthetic media techniques involve generating, manipulating, and altering data to emulate creative processes far faster and at far greater scale.[87] As a result, the potential uses are as wide as human creativity itself, ranging from revolutionizing the entertainment industry to accelerating research and production in academia. An initial application has been synchronizing lip movements to increase the engagement of ordinary dubbing,[88] a practice growing fast with the rise of OTT streaming services.[89] In the broader picture, synthetic media could democratize media production, reducing costs and limiting the need for expensive cameras, recording equipment, and visual effects.[90] Big news organizations are already exploring how they can use video synthesis and other synthetic media technologies to become more efficient and engaging.[91][92] Potential future hazards include the use of a combination of different subfields to generate fake news,[93] natural-language bot swarms generating trends and memes, the fabrication of false evidence, and potentially addiction to personalized content and a retreat into AI-generated fantasy worlds within virtual reality.[13]

In 2019, Elon Musk warned of the potential use of advanced text-generating bots to manipulate humans on social media platforms.[94] In the future, even more advanced bots may be employed for astroturfing or for demonizing apps, websites, and political movements, as well as for supercharging memes and cultural trends, including those generated for the sole purpose of being promoted by bots until humans perpetuate them without further assistance.

Deep reinforcement learning-based natural-language generators have the potential to be the first AI systems to pass the Turing Test and potentially be used as advanced chatbots,[95] which may then be used to forge artificial relationships in a manner similar to the 2013 film Her and spam believable comments on news articles.

One use case for natural-language generation is to generate or assist with writing novels and short stories,[96] while other potential developments are that of stylistic editors to emulate professional writers.[97] The same technique could then be used for songwriting, poetry, and technical writing, as well as rewriting old books in other authors' styles and generating conclusions to incomplete series.[98]

Image synthesis tools may be able to streamline or even completely automate the creation of certain aspects of visual illustrations, such as animated cartoons, comic books, and political cartoons.[99][100] Because the automation process takes away the need for teams of designers, artists, and others involved in the making of entertainment, costs could plunge to virtually nothing and allow for the creation of "bedroom multimedia franchises," in which individuals can generate results indistinguishable from the highest-budget productions for little more than the cost of running their computers.[101] Character and scene creation tools will no longer be based on premade assets, thematic limitations, or personal skill, but instead on tweaking certain parameters and giving enough input.[102]

A combination of speech synthesis and deepfakes has been used to automatically redub an actor's speech into multiple languages without the need for reshoots or language classes.[101]

An increase in cyberattacks has also been feared due to methods of phishing, catfishing, and social hacking being automated by new technological methods.[85]

Natural-language generation bots mixed with image synthesis networks may theoretically be used to clog search results, filling search engines with trillions of otherwise useless but legitimate-seeming blogs, websites, and marketing spam.[103]

There has been speculation about deepfakes being used for creating digital actors for future films. Digitally constructed or altered humans have already been used in films, and deepfakes could contribute new developments in the near future.[104] Amateur deepfake technology has already been used to insert faces into existing films, such as the insertion of Harrison Ford's young face onto Han Solo's face in Solo: A Star Wars Story,[105] and techniques similar to those used by deepfakes were used to recreate Princess Leia's performance in Rogue One.[106]

GANs can be used to create photos of imaginary fashion models, with no need to hire a model, photographer, makeup artist, or pay for a studio and transportation.[107] GANs can be used to create fashion advertising campaigns including more diverse groups of models, which may increase intent to buy among people resembling the models[108] or family members.[109] GANs can also be used to create portraits, landscapes and album covers. The ability for GANs to generate photorealistic human bodies presents a challenge to industries such as fashion modeling, which may be at heightened risk of being automated.[110][111]

In 2019, Dadabots unveiled an AI-generated stream of death metal which remains ongoing with no pauses.[112]

Musical artists and their respective brands may also conceivably be generated from scratch, including AI-generated music, videos, interviews, and promotional material. Conversely, existing music can be completely altered at will, such as changing lyrics, singers, instrumentation, and composition.[113] In 2018, using a WaveNet-based process for musical timbre transfer, researchers were able to shift entire genres from one to another.[114] Through the use of artificial intelligence, old bands and artists may be "revived" to release new material without pause, which may even include "live" concerts and promotional images.

Neural network-powered photo manipulation has the potential to abet the behaviors of totalitarian and absolutist regimes.[115] A sufficiently paranoid totalitarian government or community may engage in a total wipe-out of history using all manner of synthetic technologies, fabricating history and personalities as well as any evidence of their existence at all times. Even in otherwise rational and democratic societies, certain social and political groups may use synthetic media to craft cultural, political, and scientific cocoons that greatly reduce or even altogether destroy the ability of the public to agree on basic objective facts. Conversely, the existence of synthetic media can be used to discredit factual news sources and scientific facts as "potentially fabricated."[57]

References

  1. ^ Goodstein, Anastasia. "Will AI Replace Human Creativity?". Adlibbing.org. Retrieved 30 January 2020.
  2. ^ Waddell, Kaveh. "Welcome to our new synthetic realities". Axios.com. Retrieved 30 January 2020.
  3. ^ "Why Now Is The Time to Be a Maker in Generative Media". Product Hunt. Retrieved 2020-02-15.
  4. ^ Ignatidou, Sophia. "AI-driven Personalization in Digital Media Political and Societal Implications" (PDF). Chatham House. International Security Department. Retrieved 30 January 2020.
  5. ^ Dirik, Iskender. "Why it's time to change the conversation around synthetic media". Venture Beat. Retrieved 4 October 2020.
  6. ^ Vales, Aldana. "An introduction to synthetic media and journalism". Medium. Wall Street Journal. Retrieved 30 January 2020.
  7. ^ Harvey, Del. "Help us shape our approach to synthetic and manipulated media". Twitter Blog. Retrieved 30 January 2020.
  8. ^ Rosenbaum, Steven. "What Is Synthetic Media?". MediaPost. Retrieved 30 January 2020.
  9. ^ a b "A 2020 Guide to Synthetic Media". Paperspace Blog. 2020-01-17. Retrieved 30 January 2020.
  10. ^ Ovadya, Aviv. "Deepfake Myths: Common Misconceptions About Synthetic Media". Securing Democracy. Retrieved 30 January 2020.
  11. ^ Pangburn, DJ. "You've been warned: Full body deepfakes are the next step in AI-based human mimicry". Fast Company. Retrieved 30 January 2020.
  12. ^ a b c Vales, Aldana (October 14, 2019). "An Introduction to Synthetic Media and Journalism". Medium.
  13. ^ a b Pasquarelli, Walter. "Towards Synthetic Reality: When DeepFakes meet AR/VR". Oxford Insights. Retrieved 30 January 2020.
  14. ^ Noel Sharkey (July 4, 2007), A programmable robot from 60 AD, 2611, New Scientist
  15. ^ Brett, Gerard (July 1954), "The Automata in the Byzantine "Throne of Solomon"", Speculum, 29 (3): 477–487, doi:10.2307/2846790, ISSN 0038-7134, JSTOR 2846790.
  16. ^ Waddesdon Manor (22 July 2015). "A Marvellous Elephant - Waddesdon Manor" – via YouTube.
  17. ^ Kolesnikov-Jessop, Sonia (November 25, 2011). "Chinese Swept Up in Mechanical Mania". The New York Times. Retrieved November 25, 2011. Mechanical curiosities were all the rage in China during the 18th and 19th centuries, as the Qing emperors developed a passion for automaton clocks and pocket watches, and the "Sing Song Merchants", as European watchmakers were called, were more than happy to encourage that interest.
  18. ^ Koetsier, Teun (2001). "On the prehistory of programmable machines: musical automata, looms, calculators". Mechanism and Machine Theory. Elsevier. 36 (5): 589–603. doi:10.1016/S0094-114X(01)00005-2.
  19. ^ Nierhaus, Gerhard (2009). Algorithmic Composition: Paradigms of Automated Music Generation, pp. 36 & 38n7. ISBN 9783211755396.
  20. ^ Dartmouth conference:
    • McCorduck 2004, pp. 111–136
    • Crevier 1993, pp. 47–49, who writes "the conference is generally recognized as the official birthdate of the new science."
    • Russell & Norvig 2003, p. 17, who call the conference "the birth of artificial intelligence."
    • NRC 1999, pp. 200–201
  21. ^ Denis L. Baggi, "The Role of Computer Technology in Music and Musicology", lim.dico.unimi.it (December 9, 1998).
  22. ^ Zaripov, R.Kh. (1960). "Об алгоритмическом описании процесса сочинения музыки (On algorithmic description of process of music composition)". Proceedings of the USSR Academy of Sciences. 132 (6).
  23. ^ "About Ray Kurzweil".
  24. ^ Bharucha, J.J.; Todd, P.M. (1989). "Modeling the perception of tonal structure with neural nets". Computer Music Journal. 13 (4): 44–53. doi:10.2307/3679552. JSTOR 3679552.
  25. ^ Todd, P.M., and Loy, D.G. (Eds.) (1991). Music and connectionism. Cambridge, MA: MIT Press.
  26. ^ Goodfellow, Ian; Pouget-Abadie, Jean; Mirza, Mehdi; Xu, Bing; Warde-Farley, David; Ozair, Sherjil; Courville, Aaron; Bengio, Yoshua (2014). Generative Adversarial Networks (PDF). Proceedings of the International Conference on Neural Information Processing Systems (NIPS 2014). pp. 2672–2680.
  27. ^ Salimans, Tim; Goodfellow, Ian; Zaremba, Wojciech; Cheung, Vicki; Radford, Alec; Chen, Xi (2016). "Improved Techniques for Training GANs". arXiv:1606.03498 [cs.LG].
  28. ^ Isola, Phillip; Zhu, Jun-Yan; Zhou, Tinghui; Efros, Alexei (2017). "Image-to-Image Translation with Conditional Adversarial Nets". Computer Vision and Pattern Recognition.
  29. ^ Ho, Jonathon; Ermon, Stefano (2016). "Generative Adversarial Imitation Learning". Advances in Neural Information Processing Systems: 4565–4573. arXiv:1606.03476. Bibcode:2016arXiv160603476H.
  30. ^ LeCun, Yann. "RL Seminar: The Next Frontier in AI: Unsupervised Learning".
  31. ^ Uszkoreit, Jakob. "Transformer: A Novel Neural Network Architecture for Language Understanding". Google AI Blog. Retrieved 21 June 2020.
  32. ^ Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, Tom; Child, Rewon; Ramesh, Aditya; Ziegler, Daniel M.; Wu, Jeffrey; Winter, Clemens; Hesse, Christopher; Chen, Mark; Sigler, Eric; Litwin, Mateusz; Gray, Scott; Chess, Benjamin; Clark, Jack; Berner, Christopher; McCandlish, Sam; Radford, Alec; et al. (2020). "Language Models are Few-Shot Learners". arXiv:2005.14165 [cs.CL].
  33. ^ Dhariwal, Prafulla; Jun, Heewoo; Payne, Christine; Jong Wook Kim; Radford, Alec; Sutskever, Ilya (2020). "Jukebox: A Generative Model for Music". arXiv:2005.00341 [eess.AS].
  34. ^ Brandon, John (2018-02-16). "Terrifying high-tech porn: Creepy 'deepfake' videos are on the rise". Fox News. Retrieved 2018-02-20.
  35. ^ Gregory, Samuel. "Heard about deepfakes? Don't panic. Prepare". WE Forum. World Economic Forum. Retrieved 30 January 2020.
  36. ^ Barrabi, Thomas. "Twitter developing 'synthetic media' policy to combat deepfakes, other harmful posts". Fox Business. Fox News. Retrieved 30 January 2020.
  37. ^ a b c d Cole, Samantha (24 January 2018). "We Are Truly Fucked: Everyone Is Making AI-Generated Fake Porn Now". Vice. Retrieved 4 May 2019.
  38. ^ Schwartz, Oscar (12 November 2018). "You thought fake news was bad? Deep fakes are where truth goes to die". The Guardian. Retrieved 14 November 2018.
  39. ^ PhD, Sven Charleer (2019-05-17). "Family fun with deepfakes. Or how I got my wife onto the Tonight Show". Medium. Retrieved 2019-11-08.
  40. ^ "What Are Deepfakes & Why the Future of Porn is Terrifying". Highsnobiety. 2018-02-20. Retrieved 2018-02-20.
  41. ^ "Experts fear face swapping tech could start an international showdown". The Outline. Retrieved 2018-02-28.
  42. ^ Roose, Kevin (2018-03-04). "Here Come the Fake Videos, Too". The New York Times. ISSN 0362-4331. Retrieved 2018-03-24.
  43. ^ Schreyer, Marco; Sattarov, Timur; Reimer, Bernd; Borth, Damian (2019). "Adversarial Learning of Deepfakes in Accounting". arXiv:1910.03810. Bibcode:2019arXiv191003810S.
  44. ^ "Join the Deepfake Detection Challenge (DFDC)". deepfakedetectionchallenge.ai. Retrieved 2019-11-08.
  45. ^ Clarke, Yvette D. (2019-06-28). "H.R.3230 - 116th Congress (2019-2020): Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2019". www.congress.gov. Retrieved 2019-10-16.
  46. ^ Haysom, Sam (31 January 2018). "People Are Using Face-Swapping Tech to Add Nicolas Cage to Random Movies and What Is 2018". Mashable. Retrieved 4 April 2019.
  47. ^ Cole, Samantha (11 December 2017). "AI-Assisted Fake Porn Is Here and We're All Fucked". Vice. Retrieved 19 December 2018.
  48. ^ Kharpal, Arjun (2018-02-08). "Reddit, Pornhub ban videos that use A.I. to superimpose a person's face over an X-rated actor". CNBC. Retrieved 2018-02-20.
  49. ^ Cole, Samantha (2018-02-06). "Twitter Is the Latest Platform to Ban AI-Generated Porn". Vice. Retrieved 2019-11-08.
  50. ^ Hathaway, Jay (8 February 2018). "Here's where 'deepfakes,' the new fake celebrity porn, went after the Reddit ban". The Daily Dot. Retrieved 22 December 2018.
  51. ^ Walsh, Michael (19 August 2019). "Deepfake Technology Turns Bill Hader Into Tom Cruise". Nerdist.
  52. ^ Ctrl Shift Face (6 August 2019). "Bill Hader channels Tom Cruise [DeepFake]".
  53. ^ Moser, Andy (5 September 2019). "Will Smith takes Keanu's place in 'The Matrix' in new deepfake". Mashable.
  54. ^ Shamook (3 September 2019). "Will Smith as Neo in The Matrix [DeepFake]".
  55. ^ "Deepfakes that are Safe for Work". www.reddit.com.
  56. ^ Thalen, Mikael. "You can now deepfake yourself into a celebrity with just a few clicks". daily dot. Retrieved 2020-04-03.
  57. ^ a b Rothman, Joshua. "In The Age of A.I., Is Seeing Still Believing?". New Yorker. Retrieved 30 January 2020.
  58. ^ Physics-based muscle model for mouth shape control on IEEE Explore (requires membership)
  59. ^ Realistic 3D facial animation in virtual space teleconferencing on IEEE Explore (requires membership)
  60. ^ Horev, Rani (2018-12-26). "Style-based GANs – Generating and Tuning Realistic Artificial Faces". Lyrn.AI. Retrieved 2019-02-16.
  61. ^ Ovadya, Aviv; Whittlestone, Jess. "Reducing malicious use of synthetic media research: Considerations and potential release practices for machine learning". researchgate.net. Retrieved 30 January 2020.
  62. ^ "Ultra Fast Audio Synthesis with MelGAN". Descript.com. Retrieved 30 January 2020.
  63. ^ "Combining Deep Symbolic and Raw Audio Music Models". people.bu.edu.
  64. ^ Linde, Helmut; Schweizer, Immanuel (July 5, 2019). "A White Paper on the Future of Artificial Intelligence" – via ResearchGate.
  65. ^ Engel, Jesse; Agrawal, Kumar Krishna; Chen, Shuo; Gulrajani, Ishaan; Donahue, Chris; Roberts, Adam (September 27, 2018). "GANSynth: Adversarial Neural Audio Synthesis" – via openreview.net.
  66. ^ a b "WaveNet: A Generative Model for Raw Audio".
  67. ^ Kambhampati, Subbarao. "Perception won't be reality, once AI can manipulate what we see". TheHill. Retrieved 30 January 2020.
  68. ^ Allen, Jonathan; Hunnicutt, M. Sharon; Klatt, Dennis (1987). From Text to Speech: The MITalk system. Cambridge University Press. ISBN 978-0-521-30641-6.
  69. ^ Rubin, P.; Baer, T.; Mermelstein, P. (1981). "An articulatory synthesizer for perceptual research". Journal of the Acoustical Society of America. 70 (2): 321–328. Bibcode:1981ASAJ...70..321R. doi:10.1121/1.386780.
  70. ^ Oyedeji, Miracle. "Beginner's Guide to Synthetic Media and its Effects on Journalism". State of Digital Publishing. Retrieved 1 February 2020.
  71. ^ "Deepfakes and Synthetic Media: What should we fear? What can we do?". WITNESS Blog. 2018-07-30. Retrieved 2020-02-12.
  72. ^ Clark, Jack; Brundage, Miles; Solaiman, Irene. "GPT-2: 6-Month Follow-Up". OpenAI. OpenAI. Retrieved 1 February 2020.
  73. ^ Polosukhin, Illia; Kaiser, Lukasz; Gomez, Aidan N.; Jones, Llion; Uszkoreit, Jakob; Parmar, Niki; Shazeer, Noam; Vaswani, Ashish (2017-06-12). "Attention Is All You Need". arXiv:1706.03762 [cs.CL].
  74. ^ Vincent, James (December 3, 2018). "Nvidia has created the first video game demo using AI-generated graphics". The Verge.
  75. ^ Boog, Jason (December 14, 2019). "The Creator of AI Dungeon 2 Shares GPT-2 Finetuning Advice". Medium.
  76. ^ Walton, Nick (July 14, 2020). "AI Dungeon: Dragon Model Upgrade". Medium.
  77. ^ Oberhaus, Daniel (December 3, 2018). "AI Can Generate Interactive Virtual Worlds Based on Simple Videos".
  78. ^ "Wenn Merkel plötzlich Trumps Gesicht trägt: die gefährliche Manipulation von Bildern und Videos" [When Merkel suddenly wears Trump's face: the dangerous manipulation of images and videos]. az Aargauer Zeitung. 2018-02-03.
  79. ^ Gensing, Patrick. "Deepfakes: Auf dem Weg in eine alternative Realität?" [Deepfakes: On the way to an alternative reality?].
  80. ^ Cole, Samantha; Maiberg, Emanuel; Koebler, Jason (26 June 2019). "This Horrifying App Undresses a Photo of Any Woman with a Single Click". Vice. Retrieved 2 July 2019.
  81. ^ Cox, Joseph (July 9, 2019). "GitHub Removed Open Source Versions of DeepNude". Vice Media.
  82. ^ "pic.twitter.com/8uJKBQTZ0o". 27 June 2019.
  83. ^ "Deepfake Report Act of 2019". Congress.gov. Retrieved 30 January 2020.
  84. ^ "Fraudsters Used AI to Mimic CEO's Voice in Unusual Cybercrime Case".
  85. ^ Janofsky, Adam (2018-11-13). "AI Could Make Cyberattacks More Dangerous, Harder to Detect". Wall Street Journal.
  86. ^ Newton, Casey. "Facebook's deepfakes ban has some obvious workarounds". The Verge. Retrieved 30 January 2020.
  87. ^ "2020 Guide to Synthetic Media". Paperspace Blog. January 17, 2020.
  88. ^ "Dubbing is coming to a small screen near you". The Economist. ISSN 0013-0613. Retrieved 2020-02-13.
  89. ^ "Netflix's Global Reach Sparks Dubbing Revolution: "The Public Demands It"". The Hollywood Reporter. Retrieved 2020-02-13.
  90. ^ Riparbelli, Victor (2019-07-23). "Our Vision for the Future of Synthetic Media". Medium. Retrieved 2020-02-13.
  91. ^ "Reuters and Synthesia unveil AI prototype for automated video reports". Reuters. 2020-02-07. Retrieved 2020-02-13.
  92. ^ "Can synthetic media drive new content experiences?". BBC. 2020-01-29. Retrieved 2020-02-13.
  93. ^ Shao, Grace (October 15, 2019). "Fake videos could be the next big problem in the 2020 elections". CNBC.
  94. ^ Hamilton, Isobel (2019-09-26). "Elon Musk has warned that 'advanced AI' could poison social media".
  95. ^ "Chatbot using OpenAI GPT-2 transformer model". dwjbosman.github.io.
  96. ^ Merchant, Brian (October 1, 2018). "When an AI Goes Full Jack Kerouac". The Atlantic.
  97. ^ Merchant, Brian (October 1, 2018). "When an AI Goes Full Jack Kerouac". The Atlantic.
  98. ^ Trivedi, Chintan (May 26, 2019). "OpenAI GPT-2 writes alternate endings for Game of Thrones". Medium.
  99. ^ "Pixar veteran creates AI tool for automating 2D animations". June 2, 2017.
  100. ^ McBride, Sarah (April 9, 2019). "Synthetic Camp is real". Medium.
  101. ^ "Synthesia". www.synthesia.io. Retrieved 2020-02-12.
  102. ^ Ban, Yuli (January 3, 2020). "The Age of Imaginative Machines: The Coming Democratization of Art, Animation, and Imagination". Medium.
  103. ^ Vincent, James (July 2, 2019). "Endless AI-generated spam risks clogging up Google's search results". The Verge.
  104. ^ Kemp, Luke (2019-07-08). "In the age of deepfakes, could virtual actors put humans out of business?". The Guardian. ISSN 0261-3077. Retrieved 2019-10-20.
  105. ^ Radulovic, Petrana (2018-10-17). "Harrison Ford is the star of Solo: A Star Wars Story thanks to deepfake technology". Polygon. Retrieved 2019-10-20.
  106. ^ Winick, Erin. "How acting as Carrie Fisher's puppet made a career for Rogue One's Princess Leia". MIT Technology Review. Retrieved 2019-10-20.
  107. ^ Wong, Ceecee. "The Rise of AI Supermodels". CDO Trends.
  108. ^ Dietmar, Julia. "GANs and Deepfakes Could Revolutionize The Fashion Industry". Forbes.
  109. ^ Hamosova, Lenka. "Personalized Synthetic Advertising — the future for applied synthetic media". Medium.
  110. ^ "Generative Fashion Design".
  111. ^ "AI Creates Fashion Models With Custom Outfits and Poses". Synced. August 29, 2019.
  112. ^ "Meet Dadabots, the AI death metal band playing non-stop on Youtube". New Atlas. April 23, 2019.
  113. ^ Porter, Jon (April 26, 2019). "OpenAI's MuseNet generates AI music at the push of a button". The Verge.
  114. ^ https://www.youtube.com/watch?v=YQAupr7JxNY
  115. ^ Watts, Clint. "The National Security Challenges of Artificial Intelligence, Manipulated Media, and "Deepfakes" - Foreign Policy Research Institute". Retrieved 2020-02-12.