Are Covid-19 measures freedom killers? DataCiencia’s crowd surveillance AI tests the limits


Contxto – Chile’s DataCiencia is introducing visual AI surveillance software to tackle the problem of the day: the Covid-19 contingency.

Kintún.ai harnesses well-established Artificial Intelligence (AI) and Machine Learning software and deploys it in an easy-to-use format.

It is designed so that any company, large or small and regardless of technical capability, can point the product at, say, employees, customers, or passers-by, and have it discern various things about them.

It all seems pretty straightforward, but dig deeper and certain worrisome aspects begin to emerge. And no, I don’t mean the kind voiced by folks who say that wearing a facemask is a violation of their rights…

How health surveillance AI works

The pandemic has made precautionary health measures a must, and identifying non-compliant folks as a workplace hazard is something that simply didn’t exist a few months ago.

The beauty of AI is that it is adaptable and can quickly learn to understand new problems. Kintún.ai can identify problems such as incorrectly placed facemasks, people standing too close together, or a venue that has gone beyond its safe maximum capacity.

And luckily for companies worried about investing in a lot of newfangled hardware, DataCiencia has made it so that you only have to purchase the software. The tool runs on pre-existing surveillance infrastructure, such as previously installed security cameras.

For instance, the tool can automatically detect trouble spots. If too many people are gathering in one place, Kintún not only detects the problem area, but may also be able to offer alternative gathering sites as a solution.
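To make that concrete, here is a minimal, purely illustrative sketch of what such a pipeline could look like: an off-the-shelf person detector (OpenCV’s built-in HOG pedestrian model, standing in for whatever model Kintún actually runs) applied to an existing camera feed, with hypothetical thresholds for capacity and distancing. None of this is DataCiencia’s code; it only illustrates the general technique of layering simple rules on top of detections from cameras a company already owns.

```python
# Illustrative sketch only: an off-the-shelf person detector applied to an
# existing camera feed to flag over-capacity and close-contact situations.
# The camera source, thresholds, and pixel-distance heuristic are hypothetical,
# not DataCiencia's actual implementation.
import itertools

import cv2

CAMERA_SOURCE = 0          # hypothetical: camera index or RTSP URL of an existing camera
MAX_CAPACITY = 20          # hypothetical safe maximum capacity for the venue
MIN_PIXEL_DISTANCE = 120   # hypothetical pixel-space proxy for "standing too close"

# OpenCV's built-in HOG pedestrian detector, used here as a generic stand-in.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(CAMERA_SOURCE)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # Detect people in the frame and reduce each bounding box to its centroid.
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    centroids = [(x + w // 2, y + h // 2) for (x, y, w, h) in boxes]

    # Rule 1: the venue has gone beyond its safe maximum capacity.
    if len(centroids) > MAX_CAPACITY:
        print(f"Over capacity: {len(centroids)} people detected")

    # Rule 2: at least one pair of people is closer than the distance threshold.
    for (ax, ay), (bx, by) in itertools.combinations(centroids, 2):
        if ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 < MIN_PIXEL_DISTANCE:
            print("Distancing alert: two people are too close together")
            break

cap.release()
```

A real deployment would need calibrated real-world distances rather than raw pixel gaps, plus a separate facemask classifier, but the division of labor (generic detection feeding a few business rules) is the point: the cameras and most of the models already exist, and the product is the packaging.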

DataCiencia and the ethics of AI surveillance 

True to its Chilean roots, Kintún means “to see beyond” in Mapudungun, the language of the Mapuche. However, I wouldn’t be doing my job if I didn’t ask the company behind this software to go beyond the shiny new tech and answer some uncomfortable questions.

Thus, Contxto reached out to DataCiencia’s founder, Rodrigo Hermosilla, who answered some questions about the less savory side of the AI surveillance revolution, one that is finally reaching Latin American shores.

I asked Hermosilla about DataCiencia’s rather blasé mention of how this surveillance technology was “being used such as in Asian societies”. This raised a red flag for me: when a company cites that precedent so casually, alarm bells start ringing.

The founder’s words sought to dispel any worry: 

We are the guarantors of [Kintún’s] ethical use […] For us it is mandatory that the information it processes does not become available for anything other than the uses determined by the “new normalcy”. Even in cases of an offshoot of Kintún (which is rare), it must retain this same spirit.

Rodrigo Hermosilla, Founder of DataCiencia

However, as is often the case in a region famous for its rule bending (to put it lightly), the “spirit” behind DataCiencia’s good intentions might not be good enough in Latin America. So, I asked what they were doing, concretely, to prevent any malfeasance on the part of the companies and governments that hire them.

Hermosilla was clear on this front, saying that they only “gave enough information to deliver community action and protection, none of which could be applied to specific individuals.”

The worrying “new normal” of AI surveillance 

“The new normalcy” is a now widely used expression for the rules and regulations that the pandemic has imposed on us. Yet many of these newly normal norms are reactions to what was already wrong with our societies. Covid-19 simply exacerbated these ills.

AI and the ethical implications behind its use in surveillance are no different.

This is clearly not a problem caused or to be resolved by DataCiencia. I get that and so does Hermosilla:

Technology has always been at risk of being misused. Nevertheless, our responsibility as technological implementers is to ensure that access to it is granted along the right ethical lines.

Rodrigo Hermosilla, Founder of DataCiencia

Very true, but is this not more of a Pandora’s Box kind of moment?

I hate to be melodramatic, but this is exactly the sort of real dilemma faced by the likes of J. Robert Oppenheimer after taking part in the creation of the first atomic bomb.

He deeply regretted its destructive power and concluded that the whole endeavor had been “a very grave mistake”.

AI isn’t as evidently destructive as an atomic bomb, but it is also a far subtler instrument than a weapon of mass destruction.

AI shifts our societies in ways that we have yet to comprehend. And as it becomes, in DataCiencia’s own words, “a software at everyone’s reach”, we will increasingly and consistently be faced with the question of whether we use it well, and whether that’s even possible.

I’d argue that a good starting point when approaching the morality of AI is whether it is used to control individuals or to guide society collectively. Individual control and social guidance are two very different possible outcomes of the same technology.

For now, I’m happy that, at least on paper, Kintún.ai falls into the latter category.

But this is just one tiny step on a long road to AI adoption, and we are only beginning to comprehend how its massive power to aggregate can be used for good… and otherwise.

Related articles: Tech and startups from Chile!

-AG
