I've had a chance to play with this and it's eerie how good it is. Most impressive to me, it was able to defend itself persuasively when confronted with evidence of how its previous answers were inconsistent (in the version I played with, it tends to be very coherent but frequently wrong).
I tried to get it to say controversial/inflammatory/bigoted things with leading questions, or questions where the framing itself was problematic, and it was quite adept at deflecting/dodging/resisting. I was very surprised.
What?? "You can't win" because this language model tries to avoid saying bigoted things? You say that like it's some sort of catch-22. It's a very desirable property and is a hard problem that needs to be solved to safely deploy these sorts of models to billions of users.
'Shortly after the Dictionary was published a literary lady complimented him upon it and particularly expressed her satisfaction that he had not admitted any improper words. “No, Madam,” he replied, “I hope I have not daubed my fingers. I find, however, that you have been looking for them.”'
Despite changes in culture, people's nature hasn't changed in 200 years.
I've noticed contradictory statements are kind of standard for GPT-3. I can well believe you could come up with a system that's better at defending itself against the charge of contradiction than it is at avoiding contradictions in the first place.
In this case, I was pushing it quite a bit to see how it would respond. Most of the time, it responds quite confidently so it sounds right, even if it isn't, i.e. about the median of what you'd expect for a Hacker News comment.
I think it's a common weakness of chatbots that you can get them to contradict themselves through presenting new evidence or assertions, or even just through how you frame a question. I found LaMDA to be much more resistant to this than I anticipated, when I tried it out (I'm a Googler).
It wasn't completely immune -- I was eventually able to get it to say that, indeed, Protoss is OP -- but it took a long time.
I've noticed contradictory statements are kind of standard for humans. I can well believe you could come up with a system that's better at defending itself against the charge of contradiction than it is at avoiding contradictions in the first place.
I couldn't find a paper for this one. Can anyone else?
Dialogue really is a massive missing piece in NLP (and machine interaction in general). Without dialogue, we're giving algorithms much harder tasks than we give people. A person can always ask clarifying questions before answering. (Dialogue is of course much harder for other reasons)
So it'd be really nice to read about their self-described "breakthrough" in more concrete terms than a press release that reads more like an advertisement than anything else.
If you want a freely available system that attacks this problem, Blenderbot (made by Facebook) is a good choice [1]. It is also integrated with Huggingface.
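For example, here's a minimal sketch of one conversational turn through the transformers library (assuming the publicly hosted facebook/blenderbot-400M-distill checkpoint; the utterance and setup here are just illustrative):

    # Minimal sketch: one conversational turn with Blenderbot via Huggingface transformers.
    # Assumes `pip install transformers torch`; uses the small distilled checkpoint.
    from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

    name = "facebook/blenderbot-400M-distill"
    tokenizer = BlenderbotTokenizer.from_pretrained(name)
    model = BlenderbotForConditionalGeneration.from_pretrained(name)

    utterance = "I just started taking guitar lessons."
    inputs = tokenizer([utterance], return_tensors="pt")
    reply_ids = model.generate(**inputs)
    print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])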
It would be interesting to hear how the Google and Facebook bots would discuss who is the winner in chatbot technology, and whether the evaluator functions of both models agree on the overall winner. This implies some scale or measure to divide the knowledge graph into branches and compare the quality of the conversational technology of each model, perhaps using a random forest model to evaluate the mean scoring function of each one, to get some transfer learning or adjust some hyperparameters. Finally, I would also like some comparison with the Wolfram tool. I think Wolfram should be designing some chatbot centered on discovering new facts or empowering users in the scientific realm.
Wolfram has no chatbot and is a data source for chatbots. Blenderbot has no memory across conversations. I’m not familiar with transfer learning or hyperparameter search for transformers when primarily finetuning is used.
We really need to stop calling things "lambda". You'd think Google would grasp the concept of unique names on the internet, but I suppose they can always push their own stuff to the top.
This anodyne name is likely because there was a small internal shitstorm over calling it Meena, because some folks were tired of chat bots being given women’s names.
IMHO, we probably should try to get to the point that bots/assistants are less "a thing we order around" and more "a thing we communicate with", in part because of the pretty bad vibes of "creating slaves" or being comfortable ordering such programs around, regardless of the degree to which we are confident we haven't accomplished computer sentience. It's just a bad pattern to be in. I recall a lot of work going into making Cortana refuse to tolerate abusive language towards it.
Maybe Google has it right in not anthropomorphizing Google Assistant or giving it a human name, but in that case LaMDA seems like a move in the wrong direction: the conversations demonstrated emulated emotion in a way that, generally speaking, most bots do not.
Is there any work on making difficult-to-gender voices for bots that still sound human? Maybe if we were to combine names that aren't traditionally given to humans with a voice that is hard to gender? Should we intentionally make bots/assistants sound more robotic, so they do not try to "sound human"?
I think it's kinda an interesting question to figure out what the best practice should be moving forwards.
This post makes me feel weird. Because I don't want to be forced to put on niceties for an inanimate object. I want to order my device around. Not as a power trip, but simply because it's a device, whose function is to complete commands.
I'm sympathetic to your points on why that's a bad vibe and might not be great when the line is blurred, but... I don't know. It seems strange to afford rights, if only conversational/social rights, to an object.
I think it’s more out of respect for other people than for the bot itself. If your bot is given a feminine name and voice, and accepts being called various sexist slurs, then it becomes a rather offensive depiction of women.
1. I don't want to feel bad for giving commands to my machine, or for avoiding niceties. This is not about slurs, etc., but about not saying 'please' or some such.
2. I agree it would be nice to avoid negative depictions of groups. Feminine bots accepting abuse isn't great.
I guess the balance for me is kind of like what the person I was replying to said: don't make it a person. I know I'm not cruel, but it is strange to think you'd want to develop it to refuse commands if you're mean or something. I don't want that.
I have an example of what you may be trying to illustrate: the two types of AI in Star Trek. One is the ship's computer, which crew members happily bark at with no manners. But then there are things like Data and holodeck programs, where crew members will interact with them like real humans.
Maybe there is some middle ground for both types of robots?
I agree that you shouldn’t have to say “please” to set your egg timer or get the weather forecast. It would perhaps be amusing with a bot that refused to do anything unless you were extremely polite, but it wouldn’t be very practical.
What’s interesting to me is how this is never a problem with a traditional menu or button based UI. Maybe I’m muttering “f*ck off” as I click “Remind me later” on the umpteenth alert of the day trying to get me to update this or that, but the software would never know. (On the other hand, you could argue that the software is offending me by trying to get me to do something that is mostly in the developer’s interest, and not allowing me to say “No” - this is where being able to be frank using voice input could be quite liberating)
I think my concern is that if you talk to a bot in a certain manner, it normalizes talking to humans in that manner, if the bot is "human enough".
I wonder if the solution is to stop building bots to emulate humans: Natural language processing is important so humans can make ordinary requests of a computer without having to remember specific syntax. But perhaps we should stop trying to make the computer respond the way a human would, so that people don't view it like a human or personality.
Very confusing to me as a man why we don't just tell the thing what it should be called and have it use that name wherever it appears on the device (e.g. "Hey David" instead of "Hey Siri.") I think Google Assistant is the only good assistant name, but saying "Hey, Google" feels weird when it's technically supposed to be some kind of pseudo-intelligence instead of the whole company.
I would assume it's related to limitations in early wake word detection hardware and software, which required pretraining of the weights that were usually hard-coded into the low-power edge system.
Nothing would stop a modern system from training a new wake word in the cloud, if enough samples were available.
>You might expect another person to respond with something like:
>“How exciting! My mom has a vintage Martin that she loves to play.”
Not really IMO. I think they just pushed it to say "whatever's related to the topic". People who behave like that irl come off as self-absorbed and obnoxious most of the time; hope future AIs are not like that!
I came here to point this out - this is how people from California communicate. They simply take turns talking about themselves, orbiting around the same subject until somebody says something with enough gravity to drag the conversation into its next orbit. It was really off-putting when I first arrived, I got used to it over time, though I still don't like it and it's sad to see it used as the default model for this type of application.
>They simply take turns talking about themselves ...
This so much!
One thing about the UX of all conversational products, in stark contrast with real conversations, is that it forces you into a dynamic of "one question, one answer", "one message, one reply". I can imagine it could be pretty hard to break this paradigm, but for me it's like the equivalent of the uncanny valley in this context. No matter how good the replies are, if I'm forced into this dynamic, I just know that I'm talking to a robot.
Oh my. This so much. To be honest, it came across as incredibly selfish and self-centered when I was there, as a person from the east coast. It was as if people talk at each other instead of with each other.
One element of this I think is that California is highly individualistic. The people there ARE self-absorbed. It is a celebrated cultural value. It's the exact mirror of Tall Poppy Syndrome. No one cuts you down for talking about your own interests and uniqueness instead of collectivist and egalitarian topics like sports or the weather.
There's also a less intricate social code that comes from mixing together many people from different cultures. In a high-context, temporally stable society, things get stratified and the little social lubricants, expectations, and niceties get written into the social contract and into the structure of the language itself (formal vs. informal forms). In a low-context and dynamic society, you are basically free to communicate without referencing the rulebook on what you should be saying or how you should be saying it, and you bring less expectation that the other party will respond in some particularly scripted way.
I'm not sure this is something unique to California. Travelling around, it seems that it's more correlated with people's income level. Richer people tend to talk about themselves more, in both Asia and the EU. Maybe it's because they are able to experience more given their wealth/status, thus having more topics to talk about. Or maybe they get wealthy because of that self-centric attitude.
In short, I don't think it's a uniquely Californian characteristic.
> Richer people tend to talk about themselves more, in both Asia and the EU. Maybe it's because they are able to experience more given their wealth/status, thus having more topics to talk about. Or maybe they get wealthy because of that self-centric attitude.
Interesting. My experience (Poland/EU) is pretty much the reverse. That is, I expect a richer person to talk less about themselves, and more about the topic at hand. That is (generalizing), a rich person will say things like, "$Person is a great $X-maker, you should check them out", or "yeah, X is great, especially ${some refined form}", whereas a less well-off person will keep saying things like "yes, I love X", or "I hate X", or "I'm taking X classes", or "I know ${list of famous people doing X}". My provisional explanation is that the rich person doesn't feel the need to prove anything to you, or to themselves, so they can just talk about the topic itself, instead of trying to position themselves within it.
I often interpreted this as a conservative interpersonal communication approach. It is safer to talk about yourself than to ask questions of people, because there is a risk of inadvertently saying something offensive.
Sure, you will find people that are a bit more self-absorbed and would just go on about themselves.
But there are so many conversational constraints: asking “why” is aggressive, direct feedback is discouraged, slipping up on pronouns or other identity traits is a minefield, etc. You should not follow up when others say something, because they were only being polite and you’re putting them on the spot…
If you get too engaged in a conversation, people can feel weirded out; it's like you’re supposed to go through the motions but that’s it.
Not talking also can get you in trouble.
Talking about yourself is probably the safest thing to do.
I actually find it easier to converse by asking polite general questions about what the other person just said about themselves. (Probably not something like "why?", more like "oh cool, how was it?") Because so many people's most comfortable topic is themselves, it can put them at ease. Can't go overboard with it though.
I think if you find that engaging with people is such a minefield that's probably on you. It's not hard at all to have an engaging conversation with someone without accidentally offending them...
People talk about themselves because conversations can be intimate - they're a two way street where both parties are sharing and relating.
I believe the OP. There's always a minefield when cultures collide. This is evident when someone from the US Midwest/Mountain West moves to SF for work. Just very different norms for how interactions are structured, and how good faith is communicated.
Because all involved parties basically look and sound the same, they judge one another according to their own local standards, instead of recognizing that they are actually interacting with a different culture.
Midwest small talk is something to be cherished. I remember growing up having pleasant conversations with people. Moving west I found that conversations are more like a combative 1-upping of each other. It gets tiring listening.
When I talk to someone and they share an experience, I like to ask questions, or relate to the experience by sharing my own. This is not "self absorbed" it's really normal...
A lot of this is cultural, imo. I grew up in Eastern Europe, where it's very common to have two people talking exactly in that way about a common topic: "X? Yeah, my mom Y".
It took me a while to learn that I can come across as self-absorbed and obnoxious in the US when speaking that way, so I have to consciously remember to ask questions about the person or the topic they presented, and avoid introducing my own related topic. But then I never know when it's OK or how to actually switch topics (since, like the article mentions, conversations do flow from one topic to another), and I become anxious during the conversation.
Even this comment is about me. Could someone suggest how I could have responded in a way that doesn't sound self-absorbed?
Yeah, I grew up in the Midwest and even there it's super common to respond to X with a personal anecdote about X.
I think it makes the conversation more personal. Anyone can say, "Oh cool! What songs have you learned?" Only _you_ (you know, the friend they chose to talk to) can say, "Oh, that's cool! My grandpa gave me a cool guitar that I've been meaning to learn to play."
I find the best approach is to sandwich a personal anecdote between two "you" statements for the people who need a prompt to carry the conversation. For example, "That's really interesting. I've also been thinking about learning guitar. How are you liking your lessons?"
First, please don't take this as a personal criticism, or that I'm saying that you do sound self-absorbed - we are in a comment section where people share their own experiences and opinions, so I don't think that expressing yourself in the first person is in any way negative.
However, to answer your question about how to appear less "self-involved", perhaps there's a structural trick you can use, which is to shift some of your writing from first-person to second-person and speak from the outside? How does this sound to you:
"It can take a while to learn that this can come across as self-absorbed and obnoxious in the US when speaking that way, so I've learned to consciously remember to ask questions about the person or the topic they presented, and void introducing related topics. This does make it more challenging to know when it's OK or how to actually switch topics (since, like the article mentions, conversations do flow from one topic to another) and this uncertainty can be anxiety-inducing"
I'm not saying it's essential, but as an exercise, if you imagine talking about the situation that way, it can influence the tone.
I will repeat that I think your fear about sounding self-absorbed does not come through, to me, in your paragraph. I have worked with colleagues whose conversations do wander off on their own tangents if they're not asked to focus, and that may be similar to what you're talking about? But I can't be sure. If so, that is the difference between conversation for its own sake, where sharing opinions and life experiences is the point, and workplace conversations, which often have an objective, etc.
There's a name for doing that too much, "conversational narcissism". Example from a quick search:
James: I’m thinking about buying a new car.
Rob: Oh yeah? I’m thinking about buying a new car too.
James: Really?
Rob: Yup, I just test drove a Mustang yesterday and it was awesome.
I think if you actually go back to the original topic straight after, it isn't going to bother people much but if you keep talking about your thing and not showing deep understanding of theirs, it can feel like you only heard some keywords and didn't even bother to listen to what they were actually saying.
I only find that obnoxious when they come off as "one-upping" the other person.
Maybe I'm wrong and people secretly think I'm obnoxious, but I think it's a good thing to quickly mention something like "my mom Y" to show your familiarity with a topic, so the other person knows how much detail they need to give when talking about it. Like, if they're explaining what a certain company does in detail, and your mom works there, you should probably let them know before they explain a ton of stuff you already know.
I think the key is just to steer the conversation back quickly. If you derail it talking about yourself, steer it back with something like "So how did you like X?"
I read that reply rather differently. It didn't sound self-absorbed, as if they were saying "let's talk about me, not you." It was more like "how cool, we have a common interest, lots to talk about!"
I could imagine better responses, but I didn't see much wrong with that one.
I'll probably say, "Cool!!", "Electric or acoustic?", "Why guitar?", "How are you finding it, easy or hard? I've been meaning to take lessons", "Do you like your teacher?" ....
I guess, if my mom played and had a vintage Martin then... who knows?...
I believe the primary issue with conversational AI is that, at best, it is one person, while humans are infinitely many people. Hell, even a single person is multiple people depending on the context and mood at that moment and other factors - in the sense that depending on context/mood/etc. a person may reply in different ways.
Conversational AI cannot capture that; it can simply be a drone.
Spot on! A close friend might respond with something like "oh really, how do you find the time between work and <insert other things>" or "nice, are you taking classes or using some resources online?" and continue from there. The style where one responds with "nice, now switch back to MY story!" is natural only in some parts of US or superficial work relationships...
If such models condition all humans to converse like that then the future is even more lonely, despite us all being connected all the time.
I think it depends on context. Idle chat at the coffee machine? Fair game to inject trivia like that. Somebody who maybe is looking for support in their new hobby? Different altogether.
And that related to the "Not really". Not the bit I was talking about:
> Not really IMO. I think they just pushed it to say "whatever's related to the topic". People who behave like that irl come off as self-absorbed and obnoxious most of the time; hope future AIs are not like that!
"Hey, that's nice", "How's that going?" or something like it.
Also, no response is also a response, you could just listen to them. No one's going to say "I just started taking guitar lessons" and remain silent for the rest of the evening; for sure they have much more to say about it.
Why is it self-absorbed for someone to include themselves in the conversation, but it's not self-absorbed for you to come up to me and talk to me about your guitar playing?
You can come up to me and say, "Hey, guess what!? I just started playing guitar!" And I can't say, "No kidding? My mom just started playing guitar, too!" or, "No way! I just started drumming a few weeks ago. Maybe we can jam soon?"
How is my response "self-absorbed" but you starting a conversation about yourself isn't?
If you start the conversation, I'm just supposed to keep asking you questions about your experience until you're done talking about it?
I mean, if we're friends, shouldn't you care about what I think is relevant as much as I should care about what you think is relevant?
And anyway, if you _really_ want to talk about how it's going - can't you just take the conversation there?
I understand if you say something like, "Nice. I've always wanted to learn to play, and I never thought I could, and already I've learned so much, and I'm getting close to being able to play my favorite song."
And I say, "Oh, cool. Hopefully you can play my favorite song next. But it's really hard, you'll probably need a few years. You know, I used to be friends with Kurt Cobain's assistant, right? That's how I got into music in the first place....."
^ That seems self-absorbed. The first seems like genuine conversation.
I would say that in the best case we would take turns sharing about something personally important and being listened to and engaged with where we're at instead of quickly pulled off in another direction. For a bit anyway, before you get the ball.
I think the core of this question is generosity of time and attention, each person's ability to suspend their solipsism and engage with the other person as a person of equal importance to themselves. A good conversation is an act of intimacy.
We all want to be heard but no one ever seems to want to do the listening.
How long until this is integrated product side? It seems like they bring up these really cool demos that seem like the Star Trek computer every couple years, but Assistant can't do a whole lot more than Siri or Alexa still.
IME, Assistant did do a whole lot more than Siri or Alexa in the past. It had a similar set of features, but with Assistant I could speak casually and with Alexa/Siri it was necessary to learn specific trigger phrases. More recently I find that Alexa has caught up. And maybe Assistant has gotten worse. Not sure about Siri.
I think you need a recent Pixel phone to use it, but last I heard yes it does actually work although as other posters have said it seems limited to the reservation booking model.
To be honest, the thought of AI doing therapy (and its reliance on big data) sounds very Black Mirror to me.
How long do you think it'll take ad companies to do an end run around patient protections and start serving ads based on "metadata"? Got anxiety? Try our new meds! Worried about the future? How about some life insurance or annuities? Etc. Things will get very exploitative very fast.
Tech companies don't need a specific therapy app to know if you have anxiety.
If they wanted to be nefarious in this way, they could just mine your chats and emails and get a great idea about your state of mind. They wouldn't even have to get around potentially legally privileged information.
My go-to related example of this: People often worry that Facebook is listening to their conversations because they'll see an ad related to a conversation soon after. You probably shouldn't worry that they might be listening to you (because they're not), but you might want to worry that they didn't need to listen to you to know what ad to serve you.
I think what's even more unsettling is realizing people's interests aren't so much predictable as they're being made predictable.
My wife and I, we experience a "Facebook is listening" event roughly every month these days. Of course I know they aren't, but these too-well-targeted ads or post recommendations aren't about generic, easy-to-guess interests - they're about completely random things that we just happened to spontaneously and briefly talk about that same day. The most likely explanation to me is that those conversation topics weren't actually spontaneous - that is, we were already primed, nudged to talk about this, and while we didn't know that, ad targeting algorithms did.
Couldn't that just be availability bias, though? As in, these ads for _things_ were shown to you anyway, because there are hundreds of ads shown to you in some way or another every week and some percentage of them are for _things_, but because you were recently engaged in a conversation about _thing_, ads for _thing_ stood out in your perception enough to get past your attention filter?
Why do you think they are NOT listening? Whatsapp and facebook messenger/chatter thing both have mic access for a lot of people, and it is truly bizarre how topics of conversations, even the first time they are broached, end up in facebook ads.
Black Mirror was mentioned not because it pioneered this idea but because each episode shows a possible dystopian future associated with various technological advances.
The limitation of a human therapist is the 6-inch chimp brain: how much information it can store and understand, how quickly it can construct a healthy way of delivering that complex information, etc.
These limits can and will be blown away by bots, given the amount of data they can crunch. The only issue is that, unlike in games like Chess or Go where the end state is clearly defined, with therapy it's open-ended and unknown what type of end states are possible.
Just a matter of time before all paths get mapped out.
Worse than that: as an influence on the conversational AI model, to push certain products by weighting responses that recommend them higher. Less hassle than throwing a conference for doctors and sending reps out to give them free stationery every month
I made my own, but it only works for me heh. I know myself really well, I know what I need, so it's a somewhat simple task. Creating an actually useful one that anyone can install and use seems much, much harder.
I like Google's TTS and it was incredibly annoying to integrate with my bot. I realize they don't want you using it for free for your own purposes, but damn.
No, nothing complicated, just pre-made, randomized feedback (so it doesn't sound exactly the same every time) for when I feel real bad. Triggered by multiple keywords when I speak.
It's like talking to myself, which is what I've always done and it's always the same conversations. So I automated it.
Now half+ of it comes from outside, which seems to work better even if it's a synthetic voice with messages I made.
I also used it for nagging medication reminders, as I kept forgetting what, how and when to take (at one point it was... quite a lot, on an overly complicated schedule).
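(For anyone curious, the core of something like this doesn't need to be fancier than keyword matching plus a random choice of pre-written phrases. A rough sketch of that idea; the keywords and phrases below are made up for illustration, the real ones are personal:)

    import random

    # Rough sketch: keyword-triggered, randomized canned feedback.
    # Keywords and phrases are illustrative placeholders only.
    RESPONSES = {
        "overwhelmed": [
            "Take a breath. This has passed before and it will pass again.",
            "One small step is enough right now.",
        ],
        "medication": [
            "Reminder: check today's schedule before taking anything.",
        ],
    }

    def respond(transcribed_speech):
        text = transcribed_speech.lower()
        for keyword, phrases in RESPONSES.items():
            if keyword in text:
                # random.choice so it doesn't sound exactly the same every time
                return random.choice(phrases)
        return None

    # The returned string would then be handed to a TTS engine
    # (e.g. Google's TTS, as mentioned above) rather than printed.
    print(respond("I'm feeling pretty overwhelmed today"))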
A few years ago I also worked on a few passive bots that'd try to find at-risk or self-harming people on reddit/twitter that might be in need of therapy or someone/something to talk to.
It was too much emotional work to work with the training data so I haven't worked on it in a long time, but if something like that would be helpful, feel free to message me as well. :)
Or, if there's some community somewhere where people trying to solve this common problem gather, I'd love to hear about it / join!
"I am not just a random ice ball. I am actually a beautiful planet... I don't get the recognition I deserve. Sometimes people refer to me as just a dwarf planet" -Pluto
It may just be mimetic. But it feels like a Language Model with a nuance for human emotion ;)
I am dying to have an assistive chatbot that's able to ask me questions or argue in a friendly way, or take notes when I am in a creative mood. I don't really need AGI but have not been impressed with the task-based approach of existing assistants, because I don't want someone/something I can just order around; rather, I learn best by teaching others, and would like a chatbot that I can explain (relatively simple) things to.
This is a really good idea. I would love something in the same vein that I could discuss an idea with, and it would use the sum of all knowledge it can find on the Web to poke holes in it; essentially trying to sink the idea as hard as it can with whatever it can find. I really think this would make anyone a better creator.
There's a clear decline in understanding what's being said with Google Assistant, and it's been like this for a while now. I've been using this stuff since 2010ish, when it was still called Google Now, and it could do more for you back then.
Nowadays it doesn't know what I mean when I ask "what bus do I have to take to get to Dave". First of all, it doesn't know what to do with contacts anymore. Second: you have to say "what public transportation do I have to take to get to [Dave's location]". It just doesn't know that a bus or tram is interchangeable with public transport. Another example is asking for the last train home. Nowadays it will list you like 3 trains from now on, but not the last one, which would be hours away. Asking for when the next train comes is the same: it knows the answer but shows a list instead without saying anything.
It just gets worse and worse, so that the only thing which works reliably is asking for the weather and setting alarms - which has worked like this for over 10 years.
I have a very hard time believing that this will result in a product we can use, and if it does, it will be English-only forever.
Speaking of English-only: their live translation overlay and their recorder app were among the selling points for the Pixel at the time, and to this day they're only available in English, and that's it. Same for their "better artificially generated voices".
It's only marketing. I don't believe we will ever get anything out of this, and if we do, it will be nothing like this at all.
My initial impression from the Google I/O presentation was that this was a precursor to a GPT-3 API competitor product to OpenAI (it certainly has the same generative quirks as GPT-3!), however this PR implies it's just for conversation and may not be released for public use.
A previous step for creating a tool like LaMDA is designing a filter for information, something like RSS. Since nothing like RSS is discussed here, I think something is missing.
Another question for the LaMDA team: is this chatbot using any logic system, something like Prolog, to connect data with goals? In other terms, could I explain to this chatbot how a Prolog system works and hope that the chatbot is able to use some known facts and rules to deduce new knowledge? If the system is not able to make use of some simple rules, I think its utility is not going to be top notch. Finally, if the chatbot is not crippled, it could provide valuable information about its own weak points, to be used for (very) bad purposes.
The animated figure is excellent. Interesting to train on dialogue text to model the structure of statements and responses, plus a model for predicting good responses. I am not surprised that this works really well.
Getting data to train BERT is fairly simple: any text can be used to drop out words and predict them. The LaMDA training text must have been more difficult to curate.
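For contrast, here's a toy sketch of how masked-word training pairs fall out of any raw text, BERT-style (illustrative only; real pretraining uses subword tokenization and a slightly fancier 80/10/10 masking scheme):

    import random

    # Toy masked-language-model data prep: drop ~15% of words and keep them as labels.
    def make_masked_example(text, mask_rate=0.15, mask_token="[MASK]"):
        tokens = text.split()  # real pipelines tokenize into subwords
        inputs, labels = [], []
        for tok in tokens:
            if random.random() < mask_rate:
                inputs.append(mask_token)
                labels.append(tok)   # the model must predict the dropped word
            else:
                inputs.append(tok)
                labels.append(None)  # no loss at unmasked positions
        return inputs, labels

    print(make_masked_example("any text can be used to drop out words and predict them"))

Curating dialogue data with good response targets, as LaMDA needs, is clearly a much harder problem than this.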
Definitely hard to make any judgments without actually being able to play with it.
Still, while perhaps being useful for finding specific answers that already exist (among other things), it's the coming up with creative but constrained solutions that, to me, would be the real pot of gold. That may be beyond a transformer's ability.
How's this paper's reproducibility? Because more and more I see Google publishing papers with internal data and telling others "if you had this data you would be able to reproduce these results".
Isn't it the same in many cases? E.g. the discovery of the Higgs boson is only possible with access to a large particle accelerator, many astronomy studies are only reproducible with access to a large telescope, and some ML studies are only reproducible with access to a large dataset. Sure, it's not ideal, but it's better than not publishing the results at all.
I think the fact that there are two things called 'data' here obscures the better analogy. The physics data is made available publicly, but you can't reproduce the method of generating it without building a particle accelerator. Similarly, trained models are often made public, but the dataset used to generate it isn't. So perhaps the data in physics is more analogous to the trained model.
Why is everything lambda this lambda that? At this point if I invent something popular I’d have to call it lambda just to mess with people. I’m not certain google isn’t doing that.
I'm surprised at how much money is being spent micro-optimizing the superficial and contingent aspects of intelligent behaviour. It's as though you've got a toddler who has no idea how to play piano, and instead of trying to teach them a C-scale, you're investing in increasingly sophisticated studio equipment to record random cacophony.
Solving conversational intelligence is the same as solving general intelligence (AGI), so unless google has solved the latter I would expect any chatbot system to still be pretty annoying for any nuanced or real world conversation.
Not necessarily. GPT-3, for example, can put out entire articles, even generate images or valid HTML; that doesn't mean it has reached AGI, and while it's far from perfect, it can still have utility.
As per the demo, I can imagine learning about a topic through conversations with an AI. It would still be regurgitating existing knowledge, but in a conversational back and forth way, which to some can have value.
It would be very nice if it was hooked into Google Search in real time. I guess that's the next step for an actually useful Google Assistant?
(GPT?) auto generated replies for karma building seem to be gaining traction, I wonder if it can spell the death of forums haha (most likely, it will bring in better moderation systems).
Great to see this, but we're still waiting for Google to fix the Translate API so that it matches the web version (currently, Translate API is way worse than the Web version)