News

GPT-3 sucks at pick-up lines — here’s what that tells us about computer-generated language

Have you ever wondered what flirting with artificial intelligence would look like? Research scientist and engineer Janelle Shane has given us an idea by training a neural network – an algorithm loosely inspired by biological brain structures – to produce chat-up lines.

Some of the results are hilarious and completely nonsensical, such as the inelegant: “2017 Rugboat 2-tone Neck Tie Shirt”. But some of them turned out pretty well. At least, if you’re a robot:

I can tell by your red power light that you’re into me.

You look like a thing and I love you.

Can I see your parts list?

But how were these lines generated, and why do the results vary so much in quality and coherence? That comes down to the neural networks Shane worked with, all of them based on GPT-3, the largest language model built to date.

Language modelling

GPT stands for Generative Pre-trained Transformer. The current version, GPT-3, developed by OpenAI, is the third in a line of ever-improving natural language processing systems trained to produce human-like text.

Natural language processing, or NLP, refers to the use of computers to process and generate large amounts of coherent spoken or written text. Whether you ask Siri for a weather update, tell Alexa to turn on the lights, or use Google to translate a message from French into English, you are relying on developments in NLP.
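To get a feel for what generating text with such a model involves, here is a minimal Python sketch using the Hugging Face transformers library. GPT-3 itself is only reachable through OpenAI's paid API, so the sketch substitutes GPT-2, its openly available predecessor, and the prompt is purely illustrative rather than anything Shane actually used.

# A minimal sketch of prompting a generative pre-trained transformer.
# Assumption: GPT-2 stands in for GPT-3, which is only available via OpenAI's API.
# Requires the Hugging Face `transformers` package.
from transformers import pipeline

# Load a small pretrained language model for text generation.
generator = pipeline("text-generation", model="gpt2")

# Give the model an opening prompt and let it continue the text,
# sampling several different completions.
prompt = "Here is a pick-up line a robot might use:"
outputs = generator(prompt, max_length=40, num_return_sequences=3, do_sample=True)

for out in outputs:
    print(out["generated_text"])

Running this prints three possible continuations of the prompt. A model like GPT-3 works the same way in principle, only with vastly more parameters and training data behind its predictions.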
