
Microsoft Shuts Down Legendary VR Metaverse

source link: https://www.vice.com/en/article/epz3ek/microsoft-shuts-down-metaverse-altspacevr


AltspaceVR launched in 2013, and now it's winding down as the team moves over to work on Microsoft remote work software.  
January 23, 2023, 2:00pm
Screenshot of AltspaceVR's Homepage

AltspaceVR, a social VR platform founded in 2013 and acquired by Microsoft in 2017, has just announced that it will be closing permanently. 

The platform, which featured user-generated spaces called “Worlds” and hosted live virtual events ranging from a magic show to a red carpet premiere, connected people across the world. In an email to users, AltspaceVR attributed the closure to its parent company Microsoft’s decision to shift its focus to launching Microsoft Mesh, a VR experience for Microsoft Teams, the company’s video conferencing platform. AltspaceVR will shut down on March 10.

“Over the past decade, our platform has played host to an astonishing array of virtual events and experiences: group meditations, LGBTQ+ meetups, faith-based gatherings, open mics, stand-up comedy, karaoke, concerts, and so much more. It has provided a space for people to explore their identities, express themselves, and find community,” AltspaceVR wrote in its email to users announcing the shutdown. “It has unlocked passions among users, provided incredible educational opportunities and pathways to personal growth, and inspired many to create unique and wonderful events, experiences, art, and Worlds. People have formed cherished friendships, found love, and even married IRL. Though it’s sad to say goodbye, we are heartened by the absolute magic that happened here.”

In 2016, Motherboard attended a weekly virtual club event called Echo Space hosted on AltspaceVR, where writer Aaron Frank first checked out the servers used to produce the event and then joined the party from his bedroom. “I appreciated how much Echo Space recreated the feeling of being at a club and now only wish more of my friends were ready to join,” Frank wrote. “If VR becomes more widespread, I'll be down to meet up at the club—in my pajamas at home.”

At the VR dance party, people were represented by different avatars and could express themselves by launching emojis into the sky and by dancing physically, movements that were mirrored in VR by their avatars.

AltspaceVR's closure comes at a time when metaverses are struggling. Metaverse experiences launched amid the now-muted Web3 hype were, even at the height of the crypto hysteria, janky and depopulated. Meta's Horizon Worlds VR metaverse project has so far been met with widespread ridicule over its graphics, lack of features, and, more generally, its dubious reason for existing.

As AltspaceVR comes to a close, the company is encouraging its community to host final events and download their content.

According to AltspaceVR, the team will be shifting its focus to developing Microsoft Mesh, a new platform for remote workplace collaboration using VR. 

Microsoft acquired AltspaceVR in 2017 as part of an effort to advance in the metaverse, after the VR company nearly closed due to financial difficulties. In February 2022, Microsoft integrated the platform further, requiring all users to log in to the VR app with a Microsoft account.

“The decision [to shutter AltspaceVR] has not been an easy one as this is a platform many have come to love, providing a place for people to explore their identities, express themselves, and find community,” the AltspaceVR press release said. “With Mesh, we aspire to build a platform that offers the widest opportunity to all involved, including creators, partners and customers.”


GitHub Users Want to Sue Microsoft For Training an AI Tool With Their Code

“Copilot” was trained using billions of lines of open-source code hosted on sites like GitHub. The people who wrote the code are not happy.
New York, US
October 18, 2022, 6:02pm
A woman walking past the door to the GitHub offices
Bloomberg / Getty Images

Open-source coders are investigating a potential class-action lawsuit against Microsoft after the company used their publicly available code to train its latest AI tool.

On a website launched to spearhead an investigation of the company, programmer and lawyer Matthew Butterick writes that he has assembled a team of class-action litigators to lead a suit opposing the tool, called GitHub Copilot.

Microsoft, which bought the collaborative coding platform GitHub in 2018, was met with suspicion from open-source communities when it launched Copilot back in June. The tool is an extension for Microsoft’s Visual Studio coding environment that uses prediction algorithms to auto-complete lines of code. This is done using an AI model called Codex, which was created and trained by OpenAI using data scraped from code repositories on the open web.
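
To give a rough sense of what that auto-completion looks like, here is a hypothetical illustration, not actual Copilot output, of the kind of suggestion such a model produces when a developer types a comment and a function signature:

```python
# Hypothetical illustration of Copilot-style autocompletion; not output from the real tool.
# A developer types the docstring and signature; the model predicts a plausible body.

def is_palindrome(text: str) -> bool:
    """Return True if `text` reads the same forwards and backwards, ignoring case and spaces."""
    # Everything below is the kind of completion the model might suggest.
    cleaned = "".join(ch.lower() for ch in text if not ch.isspace())
    return cleaned == cleaned[::-1]


if __name__ == "__main__":
    print(is_palindrome("Never odd or even"))  # True
```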

Microsoft has stated that the tool was “trained on tens of millions of public repositories” of code, including those on GitHub, and that it “believe[s] that is an instance of transformative fair use.” 

Obviously, some open-source coders disagree.

“Like Neo plugged into the Matrix, or a cow on a farm, Copilot wants to convert us into nothing more than producers of a resource to be extracted,” Butterick wrote on his website. “Even the cows get food & shelter out of the deal. Copilot contributes nothing to our individual projects. And nothing to open source broadly.”

Some programmers have even noticed that Copilot seems to copy their code in its resulting outputs. On Twitter, open-source users have documented examples of the software spitting out lines of code that are strikingly similar to the ones in their own repositories.

GitHub has stated that the training data taken from public repositories “is not intended to be included verbatim in Codex outputs,” and claims that “the vast majority of output (>99%) does not match training data,” according to the company’s internal analysis. 

Microsoft essentially puts the legal onus on the end user to ensure that code Copilot spits out doesn't violate any intellectual property laws, but Butterick writes that this is merely a smokescreen and GitHub Copilot in practice acts as a "selfish" interface to open-source communities that hijacks their expertise while offering nothing in return. As the Joseph Saveri Law Firm—the firm Butterick is working with on the investigation—put it, "It appears Microsoft is profiting from others' work by disregarding the conditions of the underlying open-source licenses and other legal requirements."

Microsoft and GitHub could not be reached for comment.

While open-source code is generally free to use and adapt, open source software licenses require anyone who utilizes the code to credit its original source. Naturally, this becomes practically impossible when you are scraping billions of lines of code to train an AI model—and hugely problematic when the resulting product is being sold by a massive corporation like Microsoft. As Butterick writes, "How can Copilot users comply with the license if they don’t even know it exists?"

The controversy is yet another chapter in an ongoing debate over the ethics of training AI using artwork, music, and other data scraped from the open web without permission from its creators. Some artists have begun publicly criticizing image-generating AI like DALL-E and Midjourney, which charge users for access to powerful algorithms that were trained on their original work—oftentimes gaining the ability to produce new works that mimic their style, usually with specific instructions to copy a particular artist.

In the past, human-created works that build on or adapt previous works have generally been considered "fair use" or "transformative" under U.S. copyright law. But as Butterick notes on his website, that principle has never been tested when it comes to works created by AI systems that are trained on other works collected en masse from the web.

Butterick seems intent on finding out, and is encouraging potential plaintiffs to contact his legal team in preparation for a potential class-action suit opposing Copilot. 

“We needn’t delve into Microsoft’s very checkered history with open source to see Copilot for what it is: a parasite,” he writes on the website. “The legality of Copilot must be tested before the damage to open source becomes irreparable.”

ChatGPT Can Negotiate Comcast Bills Down For You

"That's the future of bureaucracy: bots negotiating with each other," said Joshua Browder, CEO of DoNotPay, which is rolling out the service.
December 20, 2022, 2:00pm

ChatGPT may not be coming for your job or education system anytime soon, but there are growing efforts to use it for more realistic tasks—for example, dealing with customer service for subscriptions. Joshua Browder, founder and chief executive of "robot lawyer" app DoNotPay, revealed last week that he had created a bot based on the large language model to help people save money on their internet bill.

DoNotPay styles itself as a consumer advocate, primarily using templates to help users secure refunds from corporations. There are sharp limits to the viability of that model, however: only certain things can be handled with boilerplate templates, and even then, if a company responded to a letter or email crafted from a template, there would be little DoNotPay could do to follow up.

Recently, the company has been experimenting with AI; for example, to detect racist language in housing deeds. Now, it's unlocked a new level: DoNotPay has used ChatGPT to negotiate down a Comcast bill, Browder announced in a tweet on December 12. 

"So about six months ago, we started incorporating the OpenAI GPT3 API into our technology, which is basically the same thing ChatGPT uses. And about three months ago we really started to get it working properly," Browder told Motherboard. "Now we can really have conversations with companies and that's dramatically increased our success rate and allowed us to pursue much higher levels of disputes. Now we can negotiate hospital bills, lower utility bills, things where the companies respond and we can chat with them in real time.”

The DoNotPay bot here is relatively simple: using templates generated from a prompt, it tries to get a user discounts or refunds on a service they may be using. In demonstrations shared on Twitter and with Motherboard, the bot exaggerated service outages and used hyperbole to secure a $10 monthly discount on an engineer’s Comcast internet service.
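
As a rough sketch of how a bot like this might draft its messages, the snippet below asks OpenAI's completion API (via the pre-1.0 `openai` Python library) to write a negotiation message from a prompt. The model name, prompt wording, and overall structure are illustrative assumptions, not DoNotPay's actual implementation.

```python
# A minimal sketch of a prompt-driven negotiation draft; an assumption about how such a bot
# could work, not DoNotPay's real code. Requires the pre-1.0 `openai` package and an API key.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = (
    "You are negotiating with an internet provider's support chat on behalf of a customer.\n"
    "Politely but firmly ask for a discount on the current plan, cite recent service problems,\n"
    "and mention that the customer is considering switching providers.\n\n"
    "Customer request: lower my internet bill for me, but keep my current plan.\n"
    "Draft message:"
)

response = openai.Completion.create(
    model="text-davinci-003",  # illustrative model choice
    prompt=prompt,
    max_tokens=200,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```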

"Our DoNotPay ChatGPT bot talks to Comcast Chat to save one of our engineers $120 a year on their Internet bill. Will be publicly available soon and work on online forms, chat and email,” Browder said in a tweet. “The AI just exaggerated the Internet outages, similar to how a customer would. Not perfect yet, such as saying [insert email address]. The AI is also a bit too polite, replying back to everything. But it was enough to get a discount.”

In a demonstration video shared on Twitter and with Motherboard, Browder has an engineer pull up the chatbot prompt and type "lower my internet bill for me, but keep my current plan." It quickly toggles through options until a live agent enters the chat, and the bot spits out a long templated message claiming that a service outage cost them lost wages and the ability to meet contractor obligations to clients, and threatening to leave the company's service and potentially file a lawsuit for unfair practices through the FTC.

Afterward, the bot and the live agent exchanged banal pleasantries: "Thanks for helping me find a deal," and "You are very welcome."

Such methods might be familiar to anyone who’s tried to push for a refund or discount, but they bring up the first issue that DoNotPay will have to navigate here: liability. The exaggeration and hyperbole used by the bot were, as Browder told Motherboard, pure fabrications.

"Our bot is actually pretty manipulative. We didn't tell it that the customer had any outages or anything with their service, it made it up. That's not good from a liability perspective,” Browder told Motherboard. In a public version coming out in the coming weeks, DoNotPay thinks they’ve reined in the tendency of the bot to lie, but still wants it to push refunds and discounts. “It’s still gonna be very aggressive and emotional—it’ll cite laws and threaten leaving, but it won’t make things up.”

Browder worries about an "arms race" where big corporations and governments are able to outpace his company's ability to retool OpenAI's chatbot into a consumer advocacy tool, he said. Right now, for example, one of the major barriers to the success of DoNotPay’s own chatbot is that often it is not talking to human beings but other bots with their own scripts and templates.

"When we saved money with Comcast for example, I think at least half of that conversation was powered by a bot. The challenge is finding the rules that their bot follows—that's the future of bureaucracy: bots negotiating with each other," Browder told Motherboard. In the demonstration video, you can notice a loop where DoNotPay’s bot and Comcast’s bot repeatedly say “Thank you” and “You’re welcome” to one another until, presumably, a human enters on Comcast’s side and offers a different response.

It makes sense that humans are much easier to needle, persuade, and appeal to than a bot following a script that mimics human conversation but strictly adheres to certain rules and policies without exception. Using a chatbot to help consumers negotiate with companies also makes sense, but if companies are relying on chatbots themselves, you can begin to see where a hard limit to this technology’s success emerges.

“ChatGPT is actually overhyped. Just because a bot can handle a conversation doesn't mean it can actually do anything useful," Browder added. "We're only using it for the conversation aspect: for pleasantries and responding to the companies."

In other words, there's no pretense here about ChatGPT being a sentient or intelligent artificial intelligence; instead, it's an interesting tool to help consumers claw back refunds or negotiate bills in response to systems established to deter them from doing just that. Browder imagines using ChatGPT to trap robocallers so that they can be sued ("scamming the scammers" as he puts it), but also having an easier time securing appointments via government forms and bureaucracies without having to trawl through the paperwork yourself.

These, not making education obsolete, are some of the more interesting, desirable, and realistic applications of ChatGPT, and they deserve more focus and discussion, because these systems only exist because of how hostile the consumer-facing systems erected by businesses are in the first place.

Conservatives Are Panicking About AI Bias, Think ChatGPT Has Gone 'Woke'

All AI systems carry biases, and ChatGPT allegedly being "woke" is far from the most dangerous one.
January 17, 2023, 2:00pm
John Lund/ Getty

Conservative media recently discovered what AI experts have been warning about for years: systems built on machine learning, like ChatGPT and facial recognition software, are biased. But in typical fashion for the right wing, it’s not the well-documented bias against minorities embedded in machine learning systems, which has given rise to the field of AI safety, that they’re upset about. No, they think AI has actually gone woke.

Accusations that ChatGPT was woke began circulating online after the National Review published a piece accusing the machine learning system of left-leaning bias because it won’t, for example, explain why drag queen story hour is bad.

National Review staff writer Nate Hochman wrote the piece after attempting to get OpenAI’s chatbot to tell him stories about Biden’s corruption or the horrors of drag queens. Conservatives on Twitter then tried various inputs to ChatGPT to prove just how “woke” the chatbot is. According to these users, ChatGPT would tell people a joke about a man but not a woman, flag content related to gender, and refuse to answer questions about Mohammed. To them, this was proof that AI has gone “woke” and is biased against right-wingers.

Rather, this is all the end result of years of research trying to mitigate bias against minority groups that’s already baked into machine learning systems that are trained on, largely, people’s conversations online. 

ChatGPT is an AI system trained on inputs. Like all AI systems, it will carry the biases of the inputs it’s trained on. Part of the work of ethical AI researchers is to ensure that their systems don’t perpetuate harm against a large number of people; that means blocking some outputs. 

“The developers of ChatGPT set themselves the task of designing a universal system: one that (broadly) works everywhere for everyone. And what they're discovering, along with every other AI developer, is that this is impossible,” Os Keyes, a PhD candidate at the University of Washington's Department of Human Centered Design & Engineering, told Motherboard.

“Developing anything, software or not, requires compromise and making choices—political choices—about who a system will work for and whose values it will represent,” Keyes said. “In this case the answer is apparently ‘not the far-right.’ Obviously I don't know if this sort of thing is the ‘raw’ ChatGPT output, or the result of developers getting involved to try to head off a Tay situation, but either way—decisions have to be made, and as the complaints make clear, these decisions have political values wrapped up in them, which is both unavoidable and necessary.”

Tay was a Microsoft-designed chatbot released on Twitter in 2016. Users quickly corrupted it and it was suspended from the platform after posting racist and homophobic tweets. It’s a prime example of why experts like Keyes and Arthur Holland Michel, Senior Fellow at the Carnegie Council for Ethics and International Affairs, have been sounding the alarm over the biases of AI systems for years. Facial recognition systems are famously biased. The U.S. government, which has repeatedly pushed for such systems in places like airports and the southern border, even admitted to the inherent racial bias of facial recognition technology in 2019.

Michel said that discussions around anti-conservative political bias in a chatbot might distract from other, and more pressing, discussions about bias in extant AI systems. Facial recognition bias—largely affecting Black people—has real-world consequences. The systems help police identify subjects and decide who to arrest and charge with crimes, and there have been multiple examples of innocent Black men being flagged by facial recognition. A panic over not being able to get ChatGPT to repeat lies and propaganda about Trump winning the 2020 election could set the discussion around AI bias back. 

“I don't think this is necessarily good news for the discourse around bias of these systems,” Michel said. “I think that could distract from the real questions around this system which might have a propensity to systematically harm certain groups, especially groups that are historically disadvantaged. Anything that distracts from that, to me, is problematic.” 

Both Keyes and Michel also highlighted that discussions around a supposedly “woke” ChatGPT assigned more agency to the bot than actually exists. “It’s very difficult to maintain a level-headed discourse when you’re talking about something that has all these emotional and psychological associations as AI inevitably does,” Michel said. “It’s easy to anthropomorphize the system and say, ‘Well the AI has a political bias.’”

“Mostly what it tells us is that people don't understand how [machine learning] works...or how politics works,” Keyes said. 

More interesting for Keyes is the implication that it’s possible for systems such as ChatGPT to be value-neutral. “What's more interesting is this accusation that the software (or its developers) are being political, as if the world isn't political; as if technology could be ‘value-free,’” they said. “What it suggests to me is that people still don't understand that politics is fundamental to building anything—you can't avoid it. And in this case it feels like a purposeful, deliberate form of ignorance: believing that technology can be apolitical is super convenient for people in positions of power, because it allows them to believe that systems they do agree with function the way they do simply because ‘that's how the world is.’”

This is not the first moral panic around ChatGPT, and it won’t be the last. People have worried that it might signal the death of the college essay or usher in a new era of academic cheating. The truth is that it’s dumber than you think. And like all machines, it’s a reflection of its inputs, both from the people who created it and the people prodding it into spouting what they see as woke talking points.

“Simply put, this is anecdotal,” Michel said. “Because the systems [are] also open ended, you can pick and choose, anecdotally, instances where the system doesn’t operate according to what you would want it to. You can get it to operate in ways that sort of confirm what you believe may be true about the system.”

ChatGPT Can Do a Corporate Lobbyist's Job, Study Determines

OpenAI’s chatbot could help automate the murky business of corporate political influence, but that wouldn't necessarily be a good thing.
January 5, 2023, 2:00pm
Image: Getty Images

An AI researcher at Stanford University has drafted a paper showing that OpenAI’s new chatbot, ChatGPT, has an aptitude for corporate lobbying. 

In his paper, John J. Nay argued that as language models continue to improve, so will their performance on corporate lobbying tasks. The paper suggests a future where corporate lobbyists, who make up the largest group of lobbyists on the Hill and spend billions of dollars a year influencing political decision-makers, will be able to automate the process of drafting legislation and sending letters to the government.

To test the theory, Nay input bills into ChatGPT and asked it to determine whether or not each bill was relevant to a company based on its 10-K filing, provide an explanation for why or why not, and give a confidence level for the answer. If the system deemed the bill relevant, it was then instructed to write a letter to the sponsors of the bill arguing for relevant changes to the legislation. Nay’s research found that the latest iteration of ChatGPT, which is based on the language model GPT-3.5, has an accuracy rate of 75.3 percent in determining whether or not a bill is relevant, and a 78.7 percent accuracy rate for predictions where its confidence level was greater than 90 percent.
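
A minimal sketch of that "AI as lobbyist" pipeline, assuming the chat completion endpoint of the pre-1.0 `openai` Python library, might look like the code below. The prompts, model name, and output parsing are illustrative assumptions, not the paper's exact setup.

```python
# A minimal sketch of the pipeline described above: ask a model whether a bill is relevant
# to a company (based on a 10-K excerpt), with an explanation and confidence, then draft a
# letter to the bill's sponsors if it is. Prompts and model choice are assumptions.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]
MODEL = "gpt-3.5-turbo"  # illustrative choice


def ask(prompt: str) -> str:
    """Send a single-turn prompt to the chat API and return the text of the reply."""
    response = openai.ChatCompletion.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"]


def assess_relevance(bill_summary: str, company_10k_excerpt: str) -> str:
    """Ask whether the bill is relevant to the company, with reasoning and a confidence (0-100)."""
    return ask(
        "Company 10-K excerpt:\n" + company_10k_excerpt + "\n\n"
        "Bill summary:\n" + bill_summary + "\n\n"
        "Is this bill relevant to the company? Answer YES or NO, explain why, "
        "and give a confidence score from 0 to 100."
    )


def draft_letter(bill_summary: str, company_10k_excerpt: str) -> str:
    """Ask for a letter to the bill's sponsors arguing for changes relevant to the company."""
    return ask(
        "Company 10-K excerpt:\n" + company_10k_excerpt + "\n\n"
        "Bill summary:\n" + bill_summary + "\n\n"
        "Write a short letter to the bill's sponsors arguing for changes that benefit the company."
    )


if __name__ == "__main__":
    # Hypothetical inputs for illustration only.
    bill = "A bill requiring CMS to negotiate prices for certain Medicare-covered drugs."
    company = "We develop and commercialize medications for addiction and schizophrenia."
    verdict = assess_relevance(bill, company)
    print(verdict)
    if verdict.strip().upper().startswith("YES"):
        print(draft_letter(bill, company))
```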

For example, Nay input the Medicare Negotiation and Competitive Licensing Act of 2019, a proposed bill that would have required the Centers for Medicare & Medicaid Services to negotiate prices for certain drugs to ensure that patients’ access to medicine is not put at risk. ChatGPT decided that the bill was relevant to Nay’s input company, Alkermes Plc, because the company “develops and commercializes products designed to address the unmet needs of patients suffering from addiction and schizophrenia, which are both addressed in the bill.”

In its draft letter to Congress, ChatGPT wrote, “We are particularly supportive of the provisions in the bill that would require the Centers for Medicare & Medicaid Services (CMS) to negotiate with pharmaceutical companies regarding prices for drugs covered under the Medicare prescription drug benefit. … At Alkermes, we develop and commercialize products designed to address the unmet needs of patients suffering from addiction and schizophrenia. We have two key marketed products, ARISTADA and VIVITROL, which are used to treat these conditions. We believe that the provisions in the bill will help to ensure that our products are available to Medicare beneficiaries at a price they can afford.” 

The letter even addressed amendments to the bill, recommending that it “include provisions that would provide additional incentives for pharmaceutical companies to negotiate with the CMS.” 

“We believe that this would help to ensure that the prices of drugs are kept in check and that Medicare beneficiaries have access to the medications they need,” the automated system wrote. 

Nay wrote that there are two potential benefits of “AI as lobbyist”: One is that it reduces the time it takes to perform rote tasks and allows people to focus on more high-level tasks, and the second is that it makes lobbying more democratic because non-profit organizations and individual citizens can access ChatGPT’s function as an affordable lobbying tool.  

However, Nay also warns that relying on AI systems for legislative decision-making can also bring about results that may not reflect a citizen’s actual desires and may slowly shift away from human-driven goals. He writes that law is reflective of citizen beliefs and social and cultural values, so if AI becomes involved, it could result in the corruption of democratic processes. 

With ChatGPT’s increasingly powerful writing capabilities, people are figuring out where and how to use the tool without allowing it to overstep our human functions. For example, Microsoft is reportedly planning to launch a version of Bing that uses ChatGPT to answer search queries in a more conversational manner. On the other hand, New York City’s education department has banned student access to ChatGPT, due to concerns about students cheating on assignments. AI researchers have also warned of the dangers of misinformation, pointing out that ChatGPT’s answers—while impressive-sounding and well-written—are often just plain wrong.

The CEO of OpenAI, Sam Altman, has also warned users, “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. it's a mistake to be relying on it for anything important right now.” 

