Google on Tuesday unveiled its latest language breakthrough: a conversational language model called LaMDA (Language Model for Dialogue Applications). Google introduced the new model during the keynote address at its I/O conference.

Like other recently developed language models, including BERT and GPT-3, LaMDA is built on Transformer, the neural network architecture that Google Research invented and open-sourced in 2017.
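For readers unfamiliar with the architecture, the core of the Transformer is scaled dot-product self-attention, in which every token in a sequence weighs every other token when building its representation. The minimal NumPy sketch below is purely illustrative and is not Google's implementation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core Transformer operation: each position attends to all others.
    Q, K, V: (seq_len, d_k) arrays of query, key, and value vectors."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise similarity, scaled
    # Softmax over positions turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # weighted sum of value vectors

# Toy example: 4 tokens with 8-dimensional representations.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V = x
print(out.shape)  # (4, 8)
```

Real Transformer models stack many such attention layers (with multiple heads, learned projections, and feed-forward sublayers), but the mechanism above is the building block.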

However, unlike most other language models, Google’s LaMDA was trained on dialogue, teaching it how to engage in free-flowing conversations. This training taught LaMDA to deliver responses that not only make sense in a given context but are also specific to it.

Google gave an example with the prompt, “I just started taking guitar lessons.” A sensible and specific response might be, “How exciting! My mom has a vintage Martin that she loves to play.”

Google is also exploring how to add dimensions to responses, such as “interestingness,” which could include responses that are insightful, unexpected, or witty. It’s also working on ensuring that responses are factually correct and meet Google’s AI principles.
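Conceptually, one way to combine such quality dimensions is to score candidate responses along each axis and rank them by a weighted sum. The sketch below is a hypothetical illustration, not Google's method; the scorer functions and weights are invented placeholders (real systems would use learned classifiers for each metric):

```python
# Hypothetical sketch: rank candidate replies by weighted quality scores.
# Metric names mirror those Google describes (sensibleness, specificity,
# interestingness); the scorers and weights here are dummy placeholders.
from typing import Callable, Dict, List

def rank_responses(
    candidates: List[str],
    scorers: Dict[str, Callable[[str], float]],
    weights: Dict[str, float],
) -> List[str]:
    """Sort candidate responses by a weighted sum of per-metric scores."""
    def total(resp: str) -> float:
        return sum(weights[m] * scorers[m](resp) for m in scorers)
    return sorted(candidates, key=total, reverse=True)

# Toy usage with dummy heuristics standing in for learned scorers.
scorers = {
    "sensibleness": lambda r: 1.0 if "guitar" in r.lower() else 0.5,
    "specificity": lambda r: min(len(r.split()) / 12, 1.0),
    "interestingness": lambda r: 0.8 if "!" in r else 0.4,
}
weights = {"sensibleness": 0.5, "specificity": 0.3, "interestingness": 0.2}
candidates = [
    "That's nice.",
    "How exciting! My mom has a vintage Martin that she loves to play.",
]
print(rank_responses(candidates, scorers, weights)[0])
# -> the longer, more specific reply ranks first
```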
