The dark side of AI at Amazon and Microsoft

Contxto – Ethics is a boring word. And yet it’s a critical topic that should be on the mind of anyone in a product development team.

Now, this year it appears that artificial intelligence (AI) is the trending tech among venture capitalists (VCs). So much so that even SoftBank launched a training program to develop more AI-specialized programmers among its portfolio companies in Latin America.

And given the speed at which businesses are adopting automation, I don't believe this tech is a mere trend; it's here to stay.

Because of this, it’s crucial we not overlook the ethical implications of AI development.

So here are two lessons in ethics on what not to do with AI tech brought to you by Amazon and Microsoft.

Related article: Fourth Industrial Revolution: How Mexico approaches Artificial Intelligence

Amazon’s recruitment woes

How many resumés might an HR person for the logistics giant review in a week? 

Tens, hundreds, maybe even thousands.

In an attempt to reduce this workload in an unbiased way, a team at Amazon built AI recruitment software in 2014.

The algorithm would look over the resumés and rate jobseekers on a five-star scale. But by 2015, its creators realized something was off.

The algorithm didn’t like female candidates for developer roles and technical positions.

In search of an answer, they realized the AI was biased against women because that's what it had been taught. Specifically, it had been trained on resumés submitted to the company over the previous 10 years, most of which came from men.

As a result, the algorithm “learned” that male candidates were more desirable than female ones, and acted accordingly. Recruiters used the tool only as a reference point and never relied on it entirely to make a decision.

In the end, Amazon pulled the plug on the project in 2017. But the scandal still surfaced in October of 2018.

Oopsie.

Tay, the racist AI

Many may remember the hilariously absurd case of Microsoft’s chatbot, Tay.

In a fail whale of an attempt to engage millennials, Microsoft designed and released a chatbot onto Twitter in 2016.

Known as “Tay,” this bot was meant to provide “playful and engaging conversation.”

But in the end, this AI became too “playful” for the internet's liking and was taken offline within 24 hours.

What went wrong? Too much information.

The bot was programmed to learn from its interactions on Twitter. And the internet went wild messing with it.

So as the day progressed and users “talked” to Tay, it quickly went from a millennial-ish chatbot into an absolute racist. Among its ridiculous tweets was one presenting Hitler as the inventor of atheism.

Microsoft didn’t dawdle in taking Tay down.

When AI backfires

Human opinions can be biased and therefore inaccurate. Turning to AI, the thinking goes, should let us tackle problems more “objectively.” At least, that was the logic in Amazon's case.

With Microsoft, the assumption was that the AI could offer human-like interactions. The problem was that the bot didn't filter the information it was fed.

These assumptions, as logical as they sound, can be dangerous, as both companies learned.

This is because AI learns from the patterns it identifies in pools of data. It's important that the information presented to it be a representative sample. Otherwise, it just parrots everything blindly, the way Tay did.
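To make that concrete, here's a minimal sketch in Python, using pandas and entirely hypothetical data and column names, of the kind of sanity check a team could run on its training set before any model learns from it:

```python
import pandas as pd

# Hypothetical training set of past hiring decisions.
# The "gender" and "hired" columns are assumptions for illustration only.
resumes = pd.DataFrame({
    "gender": ["M", "M", "M", "F", "M", "F", "M", "M"],
    "hired":  [1,   0,   1,   0,   1,   0,   1,   1],
})

# Share of each group in the training data. A heavy skew here means the
# model will mostly learn what "a good candidate" looks like from one
# group, which is roughly what happened with Amazon's tool.
print(resumes["gender"].value_counts(normalize=True))

# Historical hire rate per group, i.e. the labels the model inherits.
print(resumes.groupby("gender")["hired"].mean())
```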

Likewise, developers should compare an AI's outputs against the decisions humans made on the same cases. Seeing where their conclusions differ can reveal whether the algorithm needs tweaking.
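As a rough illustration, here's one way that comparison could look, again in Python with hypothetical data: put the model's decisions next to the human ones for the same candidates and check whether the two diverge more for one group than another.

```python
import pandas as pd

# Hypothetical audit sample: one human and one model decision per candidate.
audit = pd.DataFrame({
    "gender":         ["M", "F", "F", "M", "F", "M", "F", "M"],
    "human_decision": [1,   1,   0,   1,   1,   0,   1,   1],
    "model_decision": [1,   0,   0,   1,   0,   0,   0,   1],
})

# Selection rate per group under each decision maker.
rates = audit.groupby("gender")[["human_decision", "model_decision"]].mean()
print(rates)

# A simple red flag: the model approves one group far less often than
# another. The "four-fifths" rule of thumb from hiring audits treats a
# ratio below 0.8 as worth investigating.
model_rates = rates["model_decision"]
print(f"Disparate impact ratio: {model_rates.min() / model_rates.max():.2f}")
```

If that ratio sinks well below 1 while the human baseline stays even, the algorithm, not the applicant pool, is the likely culprit.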

This approach could apply to AI used in recruitment software, or to fintechs that use machine learning for loan decisions, for example.

Otherwise, the effort will backfire when the algorithm replicates the very same biases we wanted to avoid in the first place.

Related articles: Tech and startups from Mexico!

-ML
