There has been a resurgence of popularity of a post about how an AI system came up with horrible recipes for humans, including the infamous swamp peef (I don’t know what that means either, and a Google search on it will take you down some dark alleys).

We might think this is stupid AI, but what we’re evaluating are the results of the process. What we need to understand is how the AI was trained. If we think of the results as the combination of the AI’s fundamental learning systems plus its training data, we can see how it could easily come up with such a result.

The results are similar to what you’d find on a poorly translated menu (think Engrish).

As more people start to use tools like TensorFlow to build machine learning into different language-based applications, we’re likely to see many more of these ridiculous scenarios.
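To make that concrete, here is a minimal sketch (my own illustration, not anything from the original post) of a character-level text model in TensorFlow. The toy corpus, window size, and layer sizes are all arbitrary assumptions; the point is that a model like this learns only character statistics, not what food actually is, so it happily produces plausible-looking gibberish.

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy corpus of dish names; real systems ingest far more text,
# but the failure mode is the same: statistics without understanding.
corpus = "roast beef\npork stew\nswamp cabbage soup\nbeef broth\n"
chars = sorted(set(corpus))
char_to_id = {c: i for i, c in enumerate(chars)}

# Build (window of characters -> next character) training pairs.
window = 4
ids = [char_to_id[c] for c in corpus]
xs = np.array([ids[i:i + window] for i in range(len(ids) - window)])
ys = np.array([ids[i + window] for i in range(len(ids) - window)])

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 16),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(xs, ys, epochs=20, verbose=0)

# Sample new "recipe text" one character at a time.
seed = ids[:window]
out = []
for _ in range(20):
    probs = model.predict(np.array([seed]), verbose=0)[0]
    nxt = int(np.random.choice(len(chars), p=probs / probs.sum()))
    out.append(chars[nxt])
    seed = seed[1:] + [nxt]
print("".join(out))  # expect plausible-looking mashups, "swamp peef" style
```

With a corpus this small, the samples are mashed-together fragments of the training dishes, which is exactly where names like swamp peef come from.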

In language applications, nuance is very important and not an easy thing to get from an AI system. What’s more effective is to use structured information to come up with tools that can then be used by humans who have the final say in editing.

Take Chef Watson for example — www.ibmchefwatson.com.

With their tool, you can pick any three ingredients and it will suggest the fourth. Of course, you can also pick the fourth and it will tell you the dish’s “synergy”. An easy hack is to use chocolate or bacon as the final ingredient — apparently these are always synergistic.

Chef Watson uses machine learning against an ingested database of thousands of cookbooks to see what matches. In the end, AI (which is really just a calculator) is great for getting ideas, but use your judgement in making the final call.
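Chef Watson’s actual pipeline isn’t public in detail, so purely as a rough illustration, here is a naive co-occurrence version of “pick three ingredients, suggest a fourth” with a toy “synergy” score. The recipe data is made up, and the scoring is a stand-in for whatever IBM really does.

```python
from collections import Counter
from itertools import combinations

# Hypothetical mini-corpus standing in for thousands of cookbooks.
recipes = [
    {"chocolate", "flour", "butter", "sugar"},
    {"bacon", "eggs", "butter"},
    {"chocolate", "bacon", "sugar"},
    {"beef", "onion", "carrot"},
]

# Count how often each pair of ingredients appears in the same recipe.
pair_counts = Counter()
for recipe in recipes:
    for pair in combinations(sorted(recipe), 2):
        pair_counts[pair] += 1

def synergy(ingredients):
    """Toy 'synergy': how often the set's pairs co-occur in the corpus."""
    return sum(pair_counts[pair]
               for pair in combinations(sorted(ingredients), 2))

def suggest_fourth(three):
    """Rank every other known ingredient by the synergy it adds to the trio."""
    candidates = set().union(*recipes) - set(three)
    scores = {c: synergy(set(three) | {c}) - synergy(three) for c in candidates}
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(suggest_fourth({"flour", "butter", "sugar"}))
```

Even in this toy version, chocolate wins because it co-occurs with nearly everything in the corpus, which is a plausible explanation for why chocolate and bacon always seem “synergistic”: ubiquity in the source cookbooks looks like synergy to a co-occurrence counter.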

If we’re to avoid swamp peef, or incidents like Microsoft’s Tay.ai, then we need to make sure we’re teaching these systems the right materials.
