Can a robot learn what makes you laugh?

Introduction:

In our new smart work series, we hear from artificial intelligence experts about how they leverage machine learning to solve interesting problems and radically change the way we work for the better.

Robots have been making humans laugh for years. Before Bender became a fan favorite on the TV series Futurama, Rosie the robot cast her shadow on The Jetsons, and R2-D2 and C-3PO brought comedic flair amid the lightsaber battles of the Star Wars films. And let’s not forget HAL from the classic 2001: A Space Odyssey, with its deadpan refusal to open the pod bay doors.

So it’s no surprise that many were anticipating superhuman humor when ChatGPT launched. Some prompted it to mimic the styles of stand-up comedians. Others tried to stage routines it had written. So far, chatbots have shown an impressive knack for puns and dad jokes, yet they still struggle with the nuances of topical humor and cultural satire.

Some say humor is the “last frontier” for artificial intelligence. If that’s true, what will it take to get there?

Dr. Julia Taylor Rayz is a professor of computer and information technology at Purdue University who has been researching whether computers can be taught humor for nearly 20 years. She began her career as a software engineer, but when programming started to feel dull, she became interested in one of her field’s most enduring puzzles: why can’t computers recognize jokes?

Rayz says, “Humor research is an old field. There’s a lot of scholarship on what humor is and why we value it. You start looking at it and say, ‘How hard can it be? Surely we can find an algorithm to detect something like a classic silly joke.’”

It seems that new large language models hold the promise of cracking the comedy code. Studies have shown the benefits of humor in the workplace. When we interact with AI as partners, can they become better co-thinkers if they learn to be funny like humans?

We asked Dr. Rayz to give us an overview of the state of computational humor and what funny chatbots might mean for people welcoming AI into the workplace.

Why is it difficult for chatbots to be as skilled at humor as they are at other tasks?

Rayz: Because when we try to be funny, we don’t make all the information explicit. When you’re asking for information, you spell everything out, right? “I’m looking for an apartment in Florence. I need an elevator.” I’ll give you both words just in case you don’t recognize one of them. But in a joke, I give you the least information possible, because part of the fun is putting it all together. If everything is explicit, most people won’t appreciate it.

“If you’re following along, it thinks you need something extra, or that it needs to change its mind and give you additional information.” – Dr. Julia Taylor Rayz

We can write an algorithm that says, “Here are two interpretations. You must find both.” Then, according to some theories of humor, the two should be compatible with each other yet oppose each other in some unexpected way. Something surprising has to happen. So you need to jump from point A to point B, except I won’t tell you how to get there [or] how the two interpretations are related. So you can posit that A and B should overlap, and that A and B should contradict each other… but how do you get that information?
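The structure she describes – find two interpretations that overlap yet oppose – can be sketched as a toy check. The scripts and features below are invented for illustration only; this is not Dr. Rayz’s actual algorithm or any published detector:

```python
# Toy sketch of the "overlapping but opposing interpretations" idea.
# Each "script" is a hand-coded bundle of feature -> value pairs.
# Example pun: "Time flies like an arrow; fruit flies like a banana."
# The setup evokes script_a; the punchline forces script_b.
script_a = {"word": "flies", "pos": "verb", "domain": "time", "animate": False}
script_b = {"word": "flies", "pos": "noun", "domain": "insects", "animate": True}

def compatible(a: dict, b: dict) -> bool:
    """The scripts overlap: they share at least one feature with the same value."""
    return any(a[k] == b[k] for k in a.keys() & b.keys())

def opposed(a: dict, b: dict) -> bool:
    """The scripts oppose: they share at least one feature with conflicting values."""
    return any(a[k] != b[k] for k in a.keys() & b.keys())

def looks_like_joke(a: dict, b: dict) -> bool:
    """Joke candidate: the two readings both overlap and clash."""
    return compatible(a, b) and opposed(a, b)

print(looks_like_joke(script_a, script_b))  # True: same word, clashing senses
```

The hard part Rayz points to is exactly what this toy hides: the feature bundles are hand-written here, whereas a real system would have to discover both interpretations and their relation on its own.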

This brings us to advances in natural language processing. At least now we should be able to figure out where to fill in the gaps. If I say, “I woke up in the morning and drove to work,” I didn’t drive to work in my pajamas. I did a series of activities – took a shower, brushed my teeth – and you infer those happened without me saying so. There is now enough in these large language models to retrieve information and fill in the gaps… at least the most obvious ones. Then you can move forward with a chain of thought, correcting some things, and it will gradually improve. So it seems we’re close, right? No.


What do you think is hindering them?

Rayz: I think interaction is part of it, too. If you’re following along, it thinks you need something extra, or that it needs to change its mind and give you additional information. I played with ChatGPT because it’s fast. Log in, give it a joke, and ask it to explain why it’s a joke. I gave it a very old riddle taken from the thesis of another researcher, a good friend of mine, Kiki Hempelmann of Texas A&M University: “It’s a good deal for ten cents.” I asked ChatGPT, why is this funny? It kept changing its mind. “It’s funny because it uses a pun to create a funny twist. In everyday language, the phrase ‘ten cents a dozen’ is a common phrase.”

Twenty years ago, when we tried to write an algorithm, it had no prior knowledge. It couldn’t fill in the gaps, but at least it would give you a consistent response. If you asked it 10 times and gave it the same input, it would give you exactly the same result.

Of course, the same joke wouldn’t be funny if you told it 10 times, but at least there was some reasoning behind it. You look at it now and ask: what can I rely on? How do I know the answer I’m getting isn’t a random pick among four responses, cycling until I hear one I like? Will there be some reliable reasoning behind it?

What are the potential benefits of teaching computers to discover and generate humor in the workplace? Could it help build trust between workers and chatbot partners?

Rayz: There’s research suggesting that people with similar values share the same jokes. So on a human level, there’s a bonding element. Is this relevant to computers? Yes, it can make you feel just a little better when interacting with one. It’s much better than it was even six months ago, but it’s still not smooth enough, at least not on a personal level.

It can write text, which is amazing. I often ask it to edit my own text. Some of the suggestions are very good, but that’s editing. For humor, it has to be creative, right? I don’t want to see the seams, even when it comes from a computer. I’d rather see no joke at all than something boring or awful.

Now that some chatbots can recall personal preferences from past chats, do you think they can develop a kind of rapport by learning to understand an individual’s sense of humor?

Rayz: I suppose we’re talking about a scenario without other sensors – it doesn’t see my face, it doesn’t see when I smile, it doesn’t see where my eyes move. It doesn’t have the feedback it needs to know what we actually enjoy and what we don’t. What it can do is know what we talk to it about. But I’m not at all sure that’s even a roughly honest picture of our interests, or of what we care about.

The odds are, if something happens that changes my sensitivity to a particular topic, I’m not going to talk about it with ChatGPT. If it then told me a joke about that topic, I likely wouldn’t respond in a way that conveys why I’m sensitive to it. So this model depends on how open we are about why we love something and why we don’t.

Computers have become better at detecting humor over the past five years. Where do you think we will be five years from now?

Rayz: What I would like to see is the ability to reason about and explain why something is funny or not funny. I want a compelling explanation. If I get into a car accident tomorrow morning and sustain minor injuries, I’ll likely be in a completely different mood about car-accident jokes than I am today. So I should be able to input that variable, whatever has happened to me, and the output should change. I want that decision-making mechanism.



Source: https://blog.dropbox.com/topics/work-culture/julia-taylor-rayz-on-AI-humor

