At its I/O conference, Google unveiled a slew of updates for its existing apps along with some intriguing new AI and machine learning tech. CEO Sundar Pichai came on stage to unveil a remarkably human-like language model called LaMDA that can converse on any topic.

LaMDA, short for “Language Model for Dialogue Applications”, is built on Transformer, the same neural network architecture used to create BERT and GPT-3. Google explained:

That architecture produces a model that can be trained to read many words (a sentence or paragraph, for example), pay attention to how those words relate to one another, and then predict what words it thinks will come next. But unlike most other language models, LaMDA was trained on dialogue. During its training, it picked up on several of the nuances that distinguish open-ended conversation from other forms of language. One of those nuances is sensibleness. Basically: does the response to a given conversational context make sense?
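The mechanism Google describes, reading context and predicting which word comes next, can be caricatured with a toy bigram counter. This is a drastic simplification with a made-up corpus, not LaMDA's actual implementation; real Transformers learn attention weights over word embeddings rather than counting adjacent words:

```python
from collections import Counter, defaultdict

# Toy illustration only (not how LaMDA works): predict the next word by
# counting which word most often follows the current one in a tiny,
# invented training corpus.
corpus = (
    "pluto is cold . pluto is far away . "
    "space is cold . space is dark ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("pluto"))  # -> "is"
print(predict_next("is"))     # -> "cold"
```

Where this toy model picks the single most frequent successor, a Transformer-based model scores every word in its vocabulary against the entire preceding context, which is what lets it sustain the open-ended dialogue described below.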

During the demo, Pichai showcased how LaMDA can enable new ways of conversing with data, like asking a paper airplane about its worst travel experiences or chatting to Pluto about life in outer space.

The new language model generates a natural conversation style by synthesizing concepts from its training data, rather than having them hand-programmed into the model. For the above-mentioned examples, it generates concepts like the New Horizons spacecraft or the coldness of the cosmos.

Since the responses are not pre-defined, LaMDA generates an open-ended dialogue that does not take the same path twice. In addition to this, the model also assumes an anthropomorphic form. During the demo, the AI paper plane addressed its interlocutor as “my good friend” while Pluto complained that it’s an under-appreciated planet.

Although the flow of the conversation was pretty impressive, LaMDA did generate some nonsensical responses. For example, Pluto claimed that it had been practicing flips in outer space and then abruptly ended the conversation to go flying.

LaMDA is still a work in progress, and Google is working to improve it by adding more training data.