
Noam Shazeer and Daniel De Freitas


A former Google researcher behind a seminal AI paper describes how the company lost a top chatbot visionary



– Our founding team includes AI pioneers from Google Brain and Meta Research whose research has led to major breakthroughs in natural language understanding and dialog applications, such as the Transformer (Noam Shazeer) and Google's LaMDA (Daniel de Freitas).

– Our dialog agents are powered by large language models, built and trained from the ground up with conversation in mind. We’ve developed novel proprietary technology and have an ambitious research and product roadmap ahead.

– We’re hiring! See our job descriptions here: https://jobs.lever.co/character.

When the first iPhone came out, my friend showed me this app where he took a photo of a weird drink in the liquor store and about 1 minute later it responded with what it was. We later found out it simply sent that photo to a dude in Bangalore who would respond with what was in the photo. But it felt like AI at the time. Is this that?

It responds way too quickly. No human could have a duty-cycle like that for extended periods of time, while remaining an affordable Mechanical Turk style employee.

Honestly I almost asked a similar question because this is so humanlike. But why would they need so many geniuses just to scam people like that?

I guess that would be Artificial Artificial Intelligence then?

I noticed the bots have problems with events that happened recently; for example, it's in denial about the Queen's death.

Now my question is: do you have to re-train it continuously on current news? Would incorporating information gained from talking to the users be theoretically possible? I guess it would be very problematic due to corruption from trolls, but let's say, in theory: could the users convince it that Elizabeth has died? Or would this new conversation never "weigh" enough to override the information from the huge training text corpus?
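A toy way to see why a handful of conversations would struggle to outweigh a huge training corpus (a deliberately simplified count-based sketch, not how Character.ai's models actually work or update):

```python
from collections import Counter

# Toy illustration: a count-based "model" of what follows the prompt
# "elizabeth is". The pretraining corpus asserts the old fact thousands
# of times; user chats assert the new fact only a few times, so naively
# mixing them in barely moves the model's belief. The numbers below are
# made up purely for illustration.
corpus = ["alive"] * 10_000      # stand-in for the huge training corpus
user_chats = ["dead"] * 5        # a few users asserting the new fact

model = Counter(corpus + user_chats)
total = sum(model.values())
p_dead = model["dead"] / total

print(f"P('dead') = {p_dead:.4f}")  # tiny: the corpus still dominates
```

Real systems address this with techniques like targeted fine-tuning or retrieval of current information rather than simple re-counting, but the underlying intuition — that a few new examples are swamped by the bulk of the training data — is the same.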

Your service is (forgive the crude language) fucking amazing. I have never seen a chatbot so coherent and actually possible to converse with. It even has a short term memory! Just that fact absolutely blows me away, and makes a whole range of conversations possible. And the side-swipe to select different responses? Absolutely delightful, because some of the non-default responses made me actually chuckle with glee at how good they are, in ways that I would absolutely not expect out of an AI. For a user willing to put in a little work and role-play elbow grease this will get some really good conversational results.

When you choose how to monetise this, please, please include an API so I can plug this into a voice assistant, or a command line utility, or whatever. This is promising to be far more useful than anything that Google Assistant, Alexa or Siri can achieve right now, even if it gets some facts wrong.

For anyone reading this: please try out the Librarian persona. It’s one of my favourites.

I have to admit to being impressed as well. It is not hard to imagine being fooled, Turing-test-wise, by the character I was talking to, at least for the several minutes I gave it.

I had a discussion with this AI about solipsism and elements of existentialism, and it was the best-formulated discussion I've ever had, better than any discussion I've had with a human.

This AI doesn't just pass the Turing test, it absolutely crushes it.

It would be quite interesting to know the sizes of the models in use, and to get a rough idea of how many resources went into building a breakthrough product like character.ai.

I always wanted something like this, where you'd be able to talk to characters, living or dead, with responses that try to stay close to the actual personality.



Noam Shazeer and Daniel De Freitas, the cofounders of Character.ai, standing next to a stairway.

  • Character.ai CEO Noam Shazeer, a former Googler who worked in AI, spoke to the “No Priors” podcast.
  • He said Google was afraid to launch a chatbot, fearing consequences of it saying something wrong.
  • Shazeer left to start Character.AI, a startup that builds chatbots that can imitate famous people.


Google hesitated for years to release a chatbot out of fear of the repercussions should it say something wrong, according to Noam Shazeer, a former Google Brain engineer and a key figure in the development of its large language-AI technology.

Shazeer, now the CEO of Character.ai, recently spoke to the “No Priors” podcast about his new startup, one of the hottest companies in generative AI. The startup has amassed nearly $200 million in funding to enable users to converse with virtual “characters” that can mimic a variety of personalities, including Elon Musk, a psychologist, and a life coach.

Like ChatGPT, Character.ai's chatbots rely on large language models trained on vast amounts of text from the web. OpenAI's launch of ChatGPT late last year set the internet ablaze and created renewed interest in generative AI. Microsoft has invested billions into OpenAI and began integrating its technology into Bing so that users can ask questions and get detailed responses directly within search. Google quickly responded with Bard.

The search giant didn’t have to find itself in this defensive position, Shazeer said, telling the podcast that Google had much of the technology ready to go years prior. Shazeer was a lead author on Google’s Transformer paper, which has been widely cited as key to today’s chatbots. He cofounded Character.ai with Daniel De Freitas, the startup’s president who also came from Google Brain.

De Freitas had been on a “lifelong mission” to make intelligent chatbots a reality, and he initially joined Google in 2016 after reading some of its research papers on language technology, Shazeer said. De Freitas saw the potential to use the company’s large language research to build a chatbot.

“He did not get a lot of headcount. He started the thing as a 20% project,” Shazeer said, referring to Google’s historical program that allowed employees to spend part of their time working on side projects. “Then he just recruited an army of 20% helpers who were ignoring their day jobs and just helping him with this system.”

Eventually, De Freitas created Meena, a chatbot that was publicly demoed in 2020 and later renamed LaMDA.

“He built something really cool that actually worked, while other people were building systems that were just failing,” Shazeer said.

Despite De Freitas' enthusiasm and support from other staff, Shazeer said that Google didn't believe a chatbot would gain enough traction to justify the reputational risk.

“I think it was just a matter of large companies having concerns about launching projects that can say anything, how much you’re risking versus how much you have to gain from it,” Shazeer said when asked why Google didn’t release a chatbot sooner.

LaMDA was the subject of some controversy last year after Blake Lemoine, an engineer, claimed that the bot was sentient and therefore deserved human rights. He was ultimately fired by the company. Google had also received internal pushback from AI researchers like Timnit Gebru, who cautioned against releasing anything that might cause harm. Google has invested significant time in training Bard to provide approved responses.

Concerns about chatbots are not unfounded. They can give inaccurate or biased answers. Publishers and other copyright holders fear Google and Microsoft could drive traffic away from their websites by using their data to return information directly within search results. And consumers have been using chatbots to have conversations of a sexual nature, something Character.ai explicitly prohibits.

Bloomberg reported that Google has let much of its ethical concern fall by the wayside this year as it fears the potential of the OpenAI-Microsoft partnership to steal away search market share. Samsung is considering switching the default search engine to Bing on its smartphones, The New York Times reported.

