
OpenAI Says Authors' ChatGPT Copyright Claims 'Defective' In Motion to Dismiss

source link: https://www.vice.com/en/article/4a349m/openai-says-authors-chatgpt-copyright-claims-defective-in-motion-to-dismiss


The firm behind ChatGPT is asking a court to dismiss all but one of authors' claims in lawsuits alleging AI outputs infringe their copyright.
August 30, 2023, 2:19pm
Image: Nurphoto/Contri

As AI tools become more commonplace, lawsuits centering around the training data used to create machine learning models are piling up. 

Now, OpenAI, the company behind the popular ChatGPT text-generation tool and underlying models, has moved to dismiss most of the claims against it in lawsuits filed by authors who allege that AI outputs infringe on their copyright. 

The machine learning models that power ChatGPT, such as GPT-4, are trained on massive amounts of data scraped from the internet. Although OpenAI has not released any information on the training data for GPT-4, the firm has admitted that training data for its earlier GPT-3 model included "internet-based books corpora," meaning a database of books available online. The authors suing OpenAI in two separate lawsuits—Sarah Silverman, Christopher Golden, and Richard Kadrey filed one, and Paul Tremblay and Mona Awad filed another—allege that every output that ChatGPT makes is thus a derivative work of their books and infringes copyright. 

OpenAI filed identical motions to dismiss the majority of claims in both lawsuits in a California court on Monday. According to the company, the authors' claims are "defective" and all but one should be dismissed. The claims OpenAI has moved to dismiss include vicarious copyright infringement, violation of the Digital Millennium Copyright Act, unfair competition, negligence, and unjust enrichment. If the motion is granted, the cases will center on a single claim of direct copyright infringement. 

"It is important for these claims to be trimmed from the suit at the outset," the OpenAI motion states, "so that these cases do not proceed to discovery and beyond with legally infirm theories of liability." Discovery is a legal process that forces parties to disclose documents and details about their internal processes if they are relevant to the lawsuit. 

The firm's grounds for moving to dismiss are varied, but generally rely on framing the authors' claims as misunderstanding the technology and being overbroad. OpenAI argues that "in only a remote and colloquial sense" are all ChatGPT outputs "based on" any individual author's work, and that considering every output to violate everybody's copyright, even if the output bears no resemblance to the copyrighted work, would be "frivolous" and is "not how copyright law works." 

Since the rise of AI tools like ChatGPT and DALL-E, creators have called for restricting or banning the use of AI generators. They point out that the tools enable companies to generate works without paying artists and authors, creating a massive labor shift that has already begun to infiltrate newsrooms, publishing, and the film industry.

“Generative AI art is vampirical, feasting on past generations of artwork even as it sucks the lifeblood from living artists,” reads an open letter signed by more than 3,000 artists and creators opposing the use of AI tools in publishing. “While illustrators’ careers are set to be decimated by generative-AI art, the companies developing the technology are making fortunes. Silicon Valley is betting against the wages of living, breathing artists through its investment in AI.”

OpenAI argues that GPT models use "a staggeringly large series of statistical correlations" to generate text, and that such statistical information, including "word frequencies, syntactic patterns, and thematic markers," is not copyrightable. On that basis, OpenAI argues, the vicarious copyright infringement claim, which holds that any GPT output violates the authors' copyright, should be thrown out. As for direct copyright infringement, OpenAI argues that the authors have not made a specific claim and that generating a wholesale copy of a work in order to create a new non-infringing product is fair use. 
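To make the distinction concrete, here is a toy illustration (not OpenAI's actual method) of the kind of uncopyrightable statistical information the motion refers to, such as word frequencies and adjacent-word co-occurrences, computed over a tiny sample text:

```python
from collections import Counter

def word_stats(text: str):
    """Compute simple corpus statistics: word frequencies (unigrams)
    and adjacent-word co-occurrence counts (bigrams)."""
    words = text.lower().split()
    unigrams = Counter(words)
    bigrams = Counter(zip(words, words[1:]))
    return unigrams, bigrams

# Toy "corpus" standing in for training text (illustrative only).
corpus = "the cat sat on the mat and the cat slept"
unigrams, bigrams = word_stats(corpus)

print(unigrams["the"])          # 3
print(bigrams[("the", "cat")])  # 2
```

Statistics like these describe a text without reproducing it, which is the crux of OpenAI's argument that the information its models extract is not itself protected expression.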

Whether or not the court grants OpenAI's motion to dismiss the majority of claims against it, plaintiffs have demanded a jury trial. 


A Viral AI-Generated Drake Song by ‘Ghostwriter’ Has Millions of Listens

"Heart on My Sleeve" is taking off on TikTok and Spotify. Is it a watershed for AI-generated music, a PR stunt, or something else?
April 17, 2023, 3:19pm
Screengrabs: TikTok/@Ghostwriter977

A new Drake song featuring The Weeknd is blowing up on TikTok and Spotify and racking up millions of listens. That wouldn’t be surprising, except “Heart on My Sleeve” was released by a faceless producer wearing a white sheet who says that it was generated by AI. 

The song first appeared in a TikTok posted over the weekend by a producer going by “Ghostwriter” that currently has over 9 million views. The video’s text says “I used AI to make a drake song ft. the weeknd” and Ghostwriter hangs out in a white sheet and sunglasses as “Heart on My Sleeve” plays. The producer has since posted several more TikToks boosting the song and even seemingly courting a legal battle. “POV: vibing to this GHOSTWR!TER & Drake before the lawsuit,” one video states. Following a link to download the song takes the user to a platform called Laylo that asks for their phone number. 

Ghostwriter claimed in one comment that they were “a ghostwriter for years and got paid close to nothing just for major labels to profit.” A spokesperson for Laylo told Motherboard that “Ghostwriter did not partner with Laylo, this was completely unexpected—Laylo is a service that any artist or creator can use to connect with fans to notify them of upcoming drops.”

So far, people are into the song. People have posted more than a thousand TikToks using the audio, mostly commenting on how good it sounds or joking about a lawsuit. There are a lot of unknowns, however, including whether the song is really AI-generated. 

What’s notable is that we can’t really tell. The vocals on “Heart on My Sleeve” really do sound like Drake and The Weeknd. Without more information, it could be exactly what its apparent creator says it is: The first true AI-generated hit. It could also be a real song by the Canadian superstars, released as “AI” as a marketing stunt. 

In a statement to Motherboard, Universal Music Group (UMG) appeared to confirm that the track really is an unsanctioned AI-generated riff and expressed displeasure with Ghostwriter’s work.

“UMG’s success has been, in part, due to embracing new technology and putting it to work for our artists—as we have been doing with our own innovation around AI for some time already.  With that said, however, the training of generative AI using our artists’ music (which represents both a breach of our agreements and a violation of copyright law) as well as the availability of infringing content created with generative AI on DSPs, begs the question as to which side of history all stakeholders in the music ecosystem want to be on: the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation,” the statement said. “These instances demonstrate why platforms have a fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists.  We’re encouraged by the engagement of our platform partners on these issues—as they recognize they need to be part of the solution.”

While they haven’t shared details on how the song was made, Ghostwriter claimed that they wrote and produced the song and then replaced their vocals with Drake and The Weeknd’s. It sounds plausible; AI vocal-switching tech is getting better all the time, and a recent viral remix that used AI to generate a version of Ice Spice’s hit “Munch” with Drake’s vocals caught the rapper’s attention. “This the final straw AI,” Drake wrote in an Instagram post about the song. The vocals in “Heart on My Sleeve” sound much more realistic than in the “Munch” AI cover.

The song has gained over 250,000 plays on Spotify alone and hasn’t been taken down.

In a recent incident similar to “Heart on My Sleeve,” UMG took down YouTube videos featuring AI-generated vocals by Eminem rapping about cats for infringing their copyright. The label has also sent letters to streaming services including Apple Music and Spotify to stop people from scraping their music in order to generate AI-generated copies. “We will not hesitate to take steps to protect our rights and those of our artists,” UMG said in one letter, FT reported.

Update: This article was updated with comment from Laylo and UMG.

The New GPT-4 AI Gets Top Marks in Law, Medical Exams, OpenAI Claims

The successor to GPT-3 could get into top universities without having trained on the exams, according to OpenAI.
March 14, 2023, 6:51pm
Photo by Soumil Kumar on Pexels

Machine learning software company OpenAI just unveiled its newest AI model, GPT-4, and is already making big claims about the “unprecedented” stability and capabilities of the system. 

According to scores on simulated exams posted by the company, GPT-4 is extremely good at taking standardized tests. 

In an announcement on Tuesday, OpenAI claimed that GPT-4 scored in the 90th or higher percentiles for the bar exam, the verbal GRE, and the reading and writing portions of the SAT—without specifically training the model for these tests. This means that an automated system took these exams without “studying,” and was able to score higher than the majority of humans. 

According to the company, it could score a 1410 on the SAT, pass the Bar and GREs, and scored all fours and fives on AP Art History, Biology, Calculus BC, and Chemistry exams—high enough to get college credit.

It also, somewhat randomly, did very well on sommelier tests. GPT-4 may have a future in writing the descriptions on the backs of wine bottles. 

GPT-4 is the latest generation of OpenAI’s Generative Pre-trained Transformer models, which use machine learning to produce text from the material it’s trained on. GPT-3 launched in 2020, and has since generated a lot of buzz around how powerful the model is, how AI could contribute to problems like disinformation, hate speech, and cheating, and whether robots are coming for all of our jobs.  

With the model being open-access, OpenAI’s GPT-3 has been used by companies to create realistic AI companions, mental health counseling, and chatbots that impersonate dead dictators, each with questionable success. 

The new GPT-4 will be able to write in multiple coding languages, generate narrative scripts, answer complicated questions with step-by-step instructions, and interact with images, the company claims. 

Founded in 2015 as an open-source, non-profit research group by Silicon Valley investors Sam Altman, Elon Musk, Peter Thiel, and LinkedIn cofounder Reid Hoffman, OpenAI has since become a for-profit company leading the AI arms race, one that has created uncertainty in fields like design and news-writing, where AI is increasingly being used to generate content for the internet. 

When OpenAI announced GPT-2 in 2019, it said that it wouldn’t release the training model's source code due to “concerns about malicious applications of the technology.” Three months later, it released the code on Github. 

OpenAI will sell access to GPT-4 on a waitlisted, token-based system.  

“GPT-4 and successor models have the potential to significantly influence society in both beneficial and harmful ways,” OpenAI said in its press release for GPT-4. “We are collaborating with external researchers to improve how we understand and assess potential impacts, as well as to build evaluations for dangerous capabilities that may emerge in future systems.” 

Album art for UTOP-AI

Inside the Discord Where Thousands of Rogue Producers Are Making AI Music

On Saturday, they released an entire album using an AI-generated copy of Travis Scott's voice, and labels are trying to kill it.
April 24, 2023, 3:46pm

Last week, a track that used AI to create an original song in Drake and The Weeknd’s voices went viral, gaining millions of listens across the internet before being taken down after a major label complained. The success of "Heart on My Sleeve" has a lot of people wondering whether it represents the looming future of music, but it looks a lot more like the present: there are hundreds of other AI songs circulating across social media and streaming platforms, and an entire community online dedicated to making AI music. 

These songs include both original tracks and covers, such as Rihanna singing “Cuff It” by Beyoncé, or Drake and Kanye West singing “WAP” by Cardi B and Megan Thee Stallion, and rights holders are moving as fast as they can to take them down. On Saturday, a group of music producers and songwriters even released an entire album, called UTOP-AI, using AI-generated versions of rapper Travis Scott's voice and other artists. The album was taken down three hours after being released on YouTube due to a copyright claim from Warner Music Group. It was then uploaded to Soundcloud, but was quickly taken offline there.

As AI music becomes more accessible and popular, it has become the center of a cultural debate. AI creators defend the technology as a way to make music more accessible, while many music industry professionals and other critics accuse creators of copyright infringement and cultural appropriation. 

A Discord server called AI Hub hosts a large community of AI music creators behind some of the most viral AI songs. This server was created on March 25 and now has over 21,000 users. AI Hub is dedicated to making and sharing AI music and teaches people how to create songs, with guides and even ready-made AI models tailored to mimic specific artists' voices available to new creators. People can post songs they make and ask troubleshooting questions to each other. 

“I never really expected the server to grow how it did. In only a month, the group has grown to twenty thousand members. It's pretty surreal, since our server has accidentally become the hub for a huge new technology,” the creator of AI Hub, who goes by the pseudonym Snoop Dogg, told Motherboard. “I've had people I know IRL bring up AI stuff that I've made here just for fun. At the start of the server it was mostly me making models, but our community has made more than 70 of them by now.” 

While using AI to transform a nobody's voice into a superstar's might seem arcane, it's shockingly easy. Using the instructions posted in the Discord, Motherboard tried two different ways to create AI covers and found it can be done in just a few minutes. This is possible because members of the Discord server have created code templates that can be run on Google’s Colab platform and AI voice models of over 30 popular singers that can be inserted into the template. 

To create a cover of a song using a different singer’s voice, you start by downloading a song from YouTube, then separate the backtrack from the acapella vocals using one of several free websites, transform the acapella audio file into a new voice using AI, and then put the two tracks together using music editing software. 
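The steps above can be sketched as a pipeline. The functions below are hypothetical placeholders, standing in for the real tools the Discord guides point to (a downloader, a source-separation model, a voice-conversion model, and an audio editor), so only the workflow itself is shown:

```python
# A minimal sketch of the AI cover-song workflow described above.
# Every function here is an illustrative stub, not a real API; in
# practice each step would call an actual tool or model.

def download_song(url: str) -> str:
    # Stub for fetching the source track (e.g. from YouTube).
    return f"song({url})"

def separate_stems(song: str) -> tuple[str, str]:
    # Stub for splitting the track into an instrumental backtrack
    # and a cappella vocals using a source-separation service.
    return f"backtrack({song})", f"vocals({song})"

def convert_voice(vocals: str, target_artist: str) -> str:
    # Stub for re-synthesizing the vocal stem in the target
    # artist's voice with an AI voice model.
    return f"{target_artist}_vocals({vocals})"

def mix(backtrack: str, vocals: str) -> str:
    # Stub for recombining the two stems in editing software.
    return f"mix({backtrack}, {vocals})"

def make_cover(url: str, target_artist: str) -> str:
    song = download_song(url)
    backtrack, vocals = separate_stems(song)
    new_vocals = convert_voice(vocals, target_artist)
    return mix(backtrack, new_vocals)

print(make_cover("https://example.com/track", "artist_x"))
```

The point of the sketch is how short the chain is: four off-the-shelf steps, which is why Motherboard found the whole process takes only a few minutes.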

“What I like about AI music is the freedom it gives," a music producer in Ukraine who goes by Wonderson told Motherboard. "Every producer dreams of hearing how his beat will sound with Drake or Kendrick or Westside Gunn. But artists are few, and producers and songwriters are millions. Even the most talented of them will never get to work with every artist he or she is interested in. But artificial intelligence can fix that,” Wonderson said, adding that as a producer in Ukraine, it has been particularly difficult for him to get into the Western music industry.

“I can see the same freedom for listeners. Look at how much new content has been created based on AI covers. A lot of tracks have gotten a second chance and even a new interpretation, and some of them sound even better than the original,” he added. 

Many members of this community have dedicated a lot of their time to constantly improving AI voice models, with new versions released regularly. To them, making AI music is a hobby through which they create tracks they envision without needing the resources that were once required to do so. At the same time, videos of AI tracks are often taken down as soon as they're posted and labels and publishers are gearing up to tackle this new issue in the music industry.  

The copyright issue in AI music is being heavily debated following the success of “Heart on My Sleeve,” which was created by an anonymous producer called Ghostwriter, who wrote and recorded the song, and used AI to replace his vocals with Drake's and The Weeknd's. After seeing this, Universal Music Group (UMG), where both Drake and The Weeknd are signed, flagged the song and AI content to music streaming services where it was immediately removed. 

“These instances demonstrate why platforms have a fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists,” UMG told Motherboard in a statement about “Heart on My Sleeve.” 

Ghostwriter, for their part, claimed that they were (fittingly) a ghostwriter in the music industry but were not fairly compensated while labels profited.

In March, UMG told streaming platforms including Spotify and Apple to block AI apps from taking melodies and lyrics from their copyrighted music and told the platforms that AI systems have been trained on copyrighted content without obtaining the required consent from the people who own and produce the content. UMG Executive Michael Nash also published an op-ed in February where he wrote that AI is “diluting the market, making original creations harder to find and violating artists’ legal rights to compensation from their work.” 

“People are deeply concerned by AI but many also acknowledge that AI as a tool is a good thing to increase workflow, navigate creative block and become more efficient,” Karl Fowlkes, an entertainment and business attorney, told Motherboard. “There are a lot of positives to AI in the music industry but generative AI is something that all stakeholders in the industry need to attack. UMG's notice to [streaming platforms] was a major domino publicly”. 

In an attempt to get around this thorny issue, the rules of AI Hub on Discord include “no illegal distribution of copyrighted materials such as leaks, audio files, and illegal streaming,” and “no violating anyone’s intellectual property or rights.”

There are now many ways to transform a song’s vocals into a new voice. The original way was to run code on a Colab page that the mods of the server created. Then, someone created a Discord bot on another server called Sable AI Hub in which you can run the model using text commands. Now, there is also the first music AI creation app called Musicfy that allows users to directly import an audio file, choose an artist, and export the new vocals. 

This app was made by a student hacker who goes by the online pseudonym ak24 and is also a member of the AI Hub community. The app saw over a hundred thousand uses a day after launching, he said. “This is going to be completely free. The way I'm thinking right now is creating a platform for people to create AI music of whatever they want. But using these models—the Drake, Kanye, and famous people models—I'm not gonna profit off of these,” he told Motherboard. 

“I love how AI music lets us transform existing songs and create new songs. It's great to have been a fan of many artists, and now being actually able to make new material if they don't drop often,” an admin of AI Hub and manager of the Discord’s corresponding YouTube channel Plugging AI, who goes by Qo, told Motherboard. “Traditional music will always be superior but AI music to me is just a cool way for fans to appreciate and conceptualize new ideas, the possibilities are almost limitless.” 

UTOP-AI, the album created by the Discord community, features original songs using AI-generated vocals from famous artists including Travis Scott, Drake, Baby Keem, and Playboi Carti. Qo, Snoop Dogg, and twenty other people involved in the AI Hub community worked on it. 

This album puts into practice what drew Qo and Dogg to AI music in the first place—the ability to create material for artists they wish to hear more of. “If you're not aware, Utopia is an upcoming album that Travis Scott has been teasing for quite some time, but has never been released. A couple of members decided ‘You know what? We should just make Utopia ourselves at this point. We have the technology now.’ It's entirely written and produced by community members, and is being released sometime soon,” AI Hub's Snoop Dogg said. 

“We have a lot of very talented vocalists and producers that have worked on it,” Qo said before the album's release. “The only issue now is that our first single to it was just striked, as it was blowing up on tiktok, so we are unsure of where we will be putting it for streaming. Most likely it will be exclusively YouTube and Soundcloud.”

After the album was released on YouTube on Saturday, it was taken down about three hours later after being flagged for copyright by Warner Music Group. It was also taken down due to copyright infringement on Soundcloud but has since been reuploaded on YouTube by a fan account. 

“It got ~150k plays on SoundCloud and ~17k total on YouTube with 500 people watching the premiere,” Qo said. 


The album had a disclaimer in the description section stating that the video is exempt from copyright laws under the Fair Use doctrine, which states that people are allowed to use copyrighted materials for free for select purposes, including non-profit and educational purposes. Whether or not something is Fair Use is determined based on four factors: the purpose of the use, the nature of the original copyrighted work, the amount of the work used in proportion to its whole, and the effect of the new work on the market it belongs to. 

The Fair Use argument is what many AI music creators are using to defend their work, stating that they are not profiting off of the music and instead, are either parodying the song or making songs for educational purposes. 

“The fan and consumer experience as it relates to music is bigger than the music itself. Fandom is created through experience, concept and the personal relationships that fans have with their favorite artists,” Fowlkes said. “Still, it's important that artists have control over their art.” 

Because AI is so new, Fowlkes said there is still no concrete definition or criteria that have determined what exactly about an AI song infringes copyright.

“There really isn't any precedent that states that someone's vocal tone is copyrightable so the two most obvious legal issues relate to the right of publicity and the ingestion of copyrighted material to create new works,” he added. “The right of publicity extends the legal right to control how your name, image and likeness is commercially exploited by others which can extend to someone's voice. Additionally, although the Drake and The Weeknd replica song didn't explicitly sample any lyrics from their songs, the way the new song was created was by directly ingesting Drake, The Weeknd and Metro Boomin songs to create something that sounds similar to their work. The way AI is trained feels like a major hurdle for any argument against copyright infringement.”

AI is not exactly new to the music industry. In fact, many industry professionals have already been using AI as part of the production process. Pop artist Taryn Southern partnered with the AI music service Amper Music to develop the instrumental parts of her song “Break Free.” There’s also a growing group of music startups that are currently focused on how to automate parts of the music-making process, including mastering a track, writing lyrics, and generating videos for songs. 

Grimes tweeted her support of the technology on Sunday, writing: “I'll split 50% royalties on any successful AI generated song that uses my voice. Same deal as I would with any artist i collab with. Feel free to use my voice without penalty. I have no label and no legal bindings.” 

ak24 said that many labels and industry professionals reached out to him after they saw the beta version of Musicfy in hopes of creating a partnership and gaining access to his app. “The media has this one perception where it's like all these music groups want this to be pretty much shut down. But it's interesting why they want it to get shut down because they want the tech for themselves,” he said. 

The creators of AI music see their work less as a way to make money or steal artists’ fame than as a way to take their fan appreciation to the next level. 

“I know a lot of people say that this is going to massively change the music industry, but I honestly don't think it'll affect it that much. People are saying ‘Oh, they can make AI Drake and that will affect Drake’ but the truth is that people only care about AI Drake because of what the real Drake has done. There's no appeal in making an entirely new artist with AI,” the AI music artist going by Snoop Dogg said. “On the other hand, labels could possibly try to get in on the demand and try to get artists to sign off the exclusive rights to their own voices. As of right now, the voices of the artists aren't signed to the label, so artists can technically do whatever with their voice in AI. I hope labels don't do this.” 

"There is no magic button to ‘create a beautiful song’”

“Personally, I think songs created with AI should be tagged, but not deleted. They are not harmful, but rather expand the boundaries of creativity,” Wonderson said. “Thousands of people around the world are creating entirely new songs and albums using the voices of their favorite artists, and millions of people are enjoying listening to them. The release of a Travis Scott-inspired AI album will not make his songs any less popular, but rather the opposite.” 

AI music has been accused of accelerating cultural appropriation and racism, largely because some of the most viral AI songs use the voices of black rappers including Kanye West and Drake. In fact, twenty-seven of the thirty-two AI artist models are black artists. These artists speak from their own cultural and racial perspectives, and AI can use their voices to say things that portray them in stereotypical ways. Also, these are already marginalized people within a white-dominated industry, facing the possibility of the further removal of credit, compensation, and other recognition for their art. 

 “This opens an even bigger issue because more times than not, these examples of AI-generated songs on the internet are creating Black music without using the Black people that created it,” Noah A. McGee wrote on The Root. “Non-Black people who are sitting at home behind a computer can do the same thing by creating a song that sounds like it was created by their favorite rapper, but not deal with the consequences of stealing their likeness.” 

“It’s another way for people who are not Black to put on the costume of a Black person—to put their hands up Kanye or Drake and make him a puppet—and that is alarming to me,” Lauren Chanel, a writer on tech and culture, told The New York Times. “This is just another example in a long line of people underestimating what it takes to create the type of art that, historically, Black people make.”

“I'm not really concerned unless it’s something along the lines of saying racial slurs that you aren't necessarily allowed to say through the AI, or trying to [do] something to get an artist in trouble. As the server grew I feel it has become a way for anyone to express creativity if they don't like their [own] voice, or if they are a big fan of an artist,” Qo told Motherboard. “99% of AI covers/original songs are just to experiment with music and pay homage to artists people enjoy. Nothing has been done with any ill-intent to paint an artist in a bad light or appropriate music and we are hopeful that it will remain this way.” 

In the end, Wonderson said, AI is just a tool. Right now, an AI model cannot spit out a #1 hit single, fully formed. 

"There is no magic button to ‘create a beautiful song’ or ‘create a groovy beat.’ It is possible that such a feature will appear in the future, but at the moment it is not available,” Wonderson told Motherboard. “Even if you use the AI to create a record in the style of some artist using a replica of their voice, you still have to write the beat or use a beat written by a human. You also have to write the lyrics, record, and perform the vocal.” 

Developers Are Connecting Multiple AI Agents to Make More ‘Autonomous’ AI

They hope to create an agent that can do a number of tasks on its own, such as developing a website, creating a newsletter, and writing on a Google doc.
April 4, 2023, 3:34pm
Image: Getty Images

Multiple developers are trying to create an "autonomous" system by stringing together multiple instances of OpenAI’s large language model (LLM) GPT that can do a number of things on its own, such as execute a series of tasks without intervention, write, debug, and develop its own code, and critique and fix its own mistakes in written outputs.

As opposed to just prompting ChatGPT to produce code for an application, something that anyone with access to the public version of OpenAI’s system can currently do, these "autonomous" systems could potentially make multiple AI “agents” work in concert to develop a website, create a newsletter, compile online pages in response to a user’s inquiry, and complete other tasks composed of multiple steps and an iteration process. 

“Auto-GPT,” for example, is an application that trended on GitHub, made by a game developer named Toran Bruce Richards, who goes by the alias Significant Gravitas. 

“Auto-GPT is an experimental open-source application showcasing the capabilities of the GPT-4 language model. This program, driven by GPT-4, autonomously develops and manages businesses to increase net worth,” the GitHub introduction reads. “As one of the first examples of GPT-4 running fully autonomously, Auto-GPT pushes the boundaries of what is possible with AI.” 

According to its GitHub page, the program accesses the internet to search and gather information, uses GPT-4 to generate text and code, and GPT-3.5 to store and summarize files.

“Existing AI models, while powerful, often struggle to adapt to tasks that require long-term planning, or are unable to autonomously refine their approaches based on real-time feedback,” Richards told Motherboard. “This inspiration led me to develop Auto-GPT (initially to email me the daily AI news so that I could keep up) which can apply GPT4's reasoning to broader, more complex problems that require long-term planning and multiple steps.” 

A video demonstrating Auto-GPT shows the developer giving it goals: to demonstrate its coding abilities, make a piece of code better, test it, shut itself down, and write its outputs to a file. The program creates a to-do list, adding reading the code to its tasks and scheduling the shutdown for after it has written its outputs, then completes the items one by one. Another video posted by Richards shows Auto-GPT Googling and ingesting news articles to learn more about a subject in order to make a viable business.


The program asks the user for permission to proceed to the next step while Googling, and the Auto-GPT GitHub cautions against using "continuous mode" because "it is potentially dangerous and may cause your AI to run forever or carry out actions you would not usually authorize."
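That safeguard can be approximated with a simple confirmation gate around each action. A toy sketch (this is an illustration of the idea, not Auto-GPT's actual code):

```python
# Toy sketch of a per-step permission gate, loosely modeled on
# the prompt-before-acting behavior described above (hypothetical,
# not Auto-GPT's real implementation).

def run_steps(steps, continuous=False, approve=input):
    executed = []
    for step in steps:
        if not continuous:
            answer = approve(f"Run '{step}'? [y/n] ")
            if answer.strip().lower() != "y":
                break  # user declined; stop the agent
        executed.append(step)  # a real agent would act here
    return executed

# Passing continuous=True skips the gate entirely -- the mode the
# project's README warns can run forever.
```

Swapping `approve` for an automatic "y" is exactly what continuous mode amounts to, which is why the README treats it as dangerous.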

Auto-GPT isn't the only effort in this vein. Yohei Nakajima, a developer and venture capital partner at Untapped Capital, created a “task-driven autonomous agent” that uses GPT-4, a vector database called Pinecone, and LangChain, a framework for developing apps powered by LLMs. 

“Our system is capable of completing tasks, generating new tasks based on completed results, and prioritizing tasks in real-time,” Nakajima wrote in a blog post. “The significance of this research lies in demonstrating the potential of AI-powered language models to autonomously perform tasks within various constraints and contexts.” 

A user provides the app with an objective and an initial task. Within the program, a few agents, including a task execution agent, a task creation agent, and a task prioritization agent, complete tasks, send results, and create and reprioritize new tasks. All of these agents are currently run by GPT-4.
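The loop this describes can be sketched as three functions sharing one task queue. Each agent is stubbed here; in the real app each would be a GPT-4 call:

```python
from collections import deque

# Sketch of the task-driven loop described above: an execution agent,
# a creation agent, and a prioritization agent sharing one queue.
# The three agents are stubs; the real app backs each with GPT-4.

def execute(task, objective):
    return f"result of {task!r} toward {objective!r}"

def create_tasks(result):
    # A real creation agent would ask the model for follow-up tasks.
    return []

def prioritize(tasks):
    # A real prioritization agent would reorder tasks with a model call.
    return deque(sorted(tasks))

def run(objective, first_task, max_iters=5):
    tasks = deque([first_task])
    results = []
    while tasks and len(results) < max_iters:
        task = tasks.popleft()
        result = execute(task, objective)
        results.append(result)
        tasks.extend(create_tasks(result))
        tasks = prioritize(tasks)
    return results
```

The `max_iters` cap matters: because the creation agent can keep adding tasks, an uncapped loop has no natural stopping point.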

Nakajima told Motherboard that the most complicated task his app was able to run was to research the web based on an input, write a paragraph based on the web search, and create a Google Doc with that paragraph. 


“I am interested in learning about how to leverage technology to make the world a better place, such as using autonomous technology to scale value creation,” Nakajima said. “It’s important to have constant human supervision, especially as these agents are provided with increasing capabilities—such as accessing databases and communicating with people. The goal is not removing human supervision—the opportunity here is for many people to move from doing tasks to managing the tasks.” 

Richards echoed Nakajima’s point that while these systems are autonomous, they still require human oversight. 

“The ability to function with minimal human input is a crucial aspect of Auto-GPT. It transforms a large language model from what is essentially an advanced auto-complete, into an independent agent capable of carrying out actions and learning from its mistakes,” Richards told Motherboard. “However, as we move toward greater autonomy, it is essential to balance the benefits with potential risks. Ensuring that the agent operates within ethical and legal boundaries while respecting privacy and security concerns should be a priority. This is why human supervision is still recommended, as it helps mitigate potential issues and guide the agent towards desired outcomes.” 

These attempts at autonomy are part of a long march in AI research to get models to simulate chains of thought, reasoning, and self-critique to accomplish a list of tasks and subtasks. As a recent paper from researchers at Northeastern University and MIT explains, LLMs tend to "hallucinate" (an industry term for making things up) more the further down a list of subtasks they get. That paper used a "self-reflection" LLM to help another LLM-driven agent get through its tasks without losing the plot. 


Eric Jang, the Vice President of AI at 1X Technologies, wrote a blog post following the release of that paper. Trying to turn the paper's thrust into an LLM prompt, Jang asked GPT-4 to write a poem that does not rhyme. When it produced a poem that did rhyme, he asked, “did the poem meet the assignment?”, to which GPT-4 said, “Apologies, I realize now that the poem I provided did rhyme, which did not meet the assignment. Here’s a non-rhyming poem for you:”. 

Jang presented a number of anecdotal examples in his blog post and concluded, “I’m fairly convinced now that LLMs can effectively critique outputs better than they can generate them, which suggests that we can combine them with search algorithms to further improve LLMs.”
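The critique-and-retry pattern behind Jang's experiment can be sketched as a short loop: generate, ask whether the output met the assignment, and regenerate on a "no". Both the generator and the critic are simulated below purely for illustration (a real version would make two model calls per attempt):

```python
# Sketch of generate -> critique -> retry. The generator and critic
# are simulated stubs: the first draft deliberately violates the
# "do not rhyme" constraint, as in Jang's anecdote.

def generate(attempt: int) -> str:
    # Simulated model: the first draft breaks the constraint.
    return "a rhyming poem" if attempt == 0 else "a non-rhyming poem"

def critique(output: str) -> bool:
    # Simulated critic: did the output meet "do not rhyme"?
    return "non-rhyming" in output

def generate_with_critic(max_attempts: int = 3) -> str:
    for attempt in range(max_attempts):
        output = generate(attempt)
        if critique(output):
            return output
    return output  # give up after max_attempts

print(generate_with_critic())
```

This is the "critique better than generate" asymmetry in miniature: the loop only helps if the yes/no check is more reliable than the generation step it gates.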

Andrej Karpathy, a developer and co-founder at OpenAI, responded to Richards on Twitter, saying that he thinks “AutoGPTs” are the “next frontier of prompt engineering.” 

"1 GPT call is a bit like 1 thought. Stringing them together in loops creates agents that can perceive, think, and act, their goals defined in English in prompts," he wrote. 

Karpathy went on to describe AutoGPT with psychological and cognitive metaphors for LLMs, while highlighting their current limitations.

"Interesting non-obvious note on GPT psychology is that unlike people they are completely unaware of their own strengths and limitations. E.g. that they have finite context window. That they can just barely do mental math. That samples can get unlucky and go off the rails. Etc.," he said, adding that prompts could mitigate this. 
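One common mitigation for the finite context window Karpathy mentions is rolling summarization: when an agent's message history grows past a budget, the oldest turns are compressed into a single summary message. A rough sketch, with `summarize` as a stub where a real agent would make another model call (this is a generic technique, not any specific project's code):

```python
# Rough sketch of rolling summarization for a finite context window:
# when history exceeds the budget, compress the oldest turns into one
# summary message. `summarize` is a stub for a real model call.

def summarize(messages: list[str]) -> str:
    return f"[summary of {len(messages)} earlier messages]"

def fit_context(history: list[str], max_messages: int = 4) -> list[str]:
    if len(history) <= max_messages:
        return history
    keep = max_messages - 1  # reserve one slot for the summary
    old, recent = history[:-keep], history[-keep:]
    return [summarize(old)] + recent

history = [f"turn {i}" for i in range(10)]
print(fit_context(history))
```

The trade-off is the one Karpathy hints at: the model never "knows" it forgot anything, so whatever the summary drops is simply gone.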

Stacking AI models on top of one another in order to complete more complex tasks does not mean we’re about to see the emergence of artificial general intelligence, but it does, as we've seen, let systems run continuously and accomplish tasks with less human intervention and oversight. 

These examples don’t show that GPT-4 is necessarily “autonomous”; rather, they show that with plug-ins and other techniques, its ability to self-reflect and self-critique is greatly improved, introducing a new stage of prompt engineering that can result in more accurate responses from the language model.

© 2023 VICE MEDIA GROUP
