Far from the goal

For Tesla, Facebook and others, the flaws in Artificial Intelligence (AI) can no longer be ignored

Investors are pouring money into artificial intelligence, despite clear setbacks in self-driving cars, social media moderation and even healthcare.

Photo: Pixabay.com

What do Facebook founder Mark Zuckerberg and Tesla CEO Elon Musk have in common? Both are grappling with major problems that stem, at least in part, from placing their trust in artificial intelligence systems that have so far underdelivered. Zuckerberg is dealing with algorithms that fail to stop the spread of harmful content; Musk, with software that has yet to drive a car the way he has so often promised.

There is one lesson to be learned from their experience: artificial intelligence is not yet ready for widespread use, Bloomberg writes. Moreover, it is hard to know when it will be. Companies would do better to focus on cultivating high-quality data and hiring people to do the jobs that AI is not yet ready to take over.

Designed to loosely mimic the human brain, deep learning AI systems can spot tumors, drive cars and write text, showing spectacular results under laboratory conditions. But that is precisely the problem: when the technology is used in the unpredictable real world, AI sometimes fails. That is troubling, especially when it is being promoted for use in high-stakes industries like healthcare. The stakes are also dangerously high for social media, where content can influence elections and fuel mental health disorders, as a recent disclosure of internal whistleblower documents revealed. Yet Facebook's faith in artificial intelligence is plain on its own website, which touts its machine learning algorithms before mentioning its army of content moderators.

Artificial intelligence (Photo: Pixabay.com, Tech Times)

Zuckerberg also told the US Congress in 2018 that AI tools could be a "scalable way" to identify harmful content. Those tools do a good job of detecting nudity and terrorism-related content, but they still struggle to stop the spread of disinformation. The problem is that human language is constantly changing. Anti-vaccine activists use tricks like typing "va((ine" to avoid detection, while private gun sellers post pictures of empty boxes on Facebook Marketplace with the caption "message me." These posts fool the systems designed to stop such content and, to make matters worse, AI often goes on to recommend that content as well.

No wonder the roughly 15,000 content moderators hired to support Facebook's algorithms are overworked. Last year, a study by New York University's Stern School of Business recommended that Facebook double that workforce to 30,000 to moderate posts properly if AI is not up to the task. Cathy O'Neil, author of 'Weapons of Math Destruction', said Facebook's AI "doesn't work". Zuckerberg, for his part, has told lawmakers that it is difficult for artificial intelligence to moderate posts because of the nuances of speech.

MUSK'S CONFIDENCE IN AI

Musk's overestimation of artificial intelligence is practically legendary. In 2019, he told Tesla investors that he "feels very confident" there would be a million Model 3s on the streets as driverless 'robo-taxis'. His timeline: 2020. Instead, Tesla customers currently have the privilege of paying $10,000 for special software that will one day (who knows when?) provide fully autonomous driving capabilities. Until then, the cars can park, change lanes and drive themselves onto the highway, with the occasional serious mistake. Musk recently acknowledged in a tweet that generalized self-driving technology is a "hard problem."

HEALTHCARE AND AI

Even more surprising: AI has also stumbled in healthcare, an area that once held great promise for the technology. Earlier this year, a study in Nature analyzed dozens of machine learning models designed to detect signs of COVID-19 in X-rays and CT scans. None was found to be usable in a clinical setting because of various flaws. Another study, published last month in the 'British Medical Journal', found that 94% of the AI systems that scanned for signs of breast cancer were less accurate than the analysis of a single radiologist. "There was a lot of talk that [AI scanning in radiology] was inevitable, but the hype got somewhat ahead of the results," says Sian Taylor-Phillips, a health professor at the University of Warwick who led the study.

Government advisers will use the study's results to decide whether such artificial intelligence systems do more good than harm and are ready for use. In this case, the harm may not seem obvious: after all, AI-powered breast cancer detection systems are designed to be overly cautious and are far more likely to raise a false alarm than to miss signs of a tumor. But even a small percentage increase in the recall rate for breast cancer screening, which is 9% in the US and 4% in the UK, means thousands more women made anxious by false alarms. "That means we're accepting harm for women who are screened just so we can apply the new technology," says Taylor-Phillips.

The errors do not appear to be limited to just a few studies. "A few years ago there was a lot of promise and a lot of hype about AI being the first pass for radiology," says Kathleen Walch, partner at market intelligence firm Cognilytica. "What we're starting to see is that the AI isn't picking up these anomalies at any rate that would be helpful."

A LOT OF MONEY!

Still, none of these red flags have stopped the flood of money going into AI. Global venture capital investment in artificial intelligence startups has increased over the past year, according to 'PitchBook Data', a company that tracks private capital markets.

Artificial intelligence (Photo: PitchBook)

Mentions of "artificial intelligence" in company earnings reports have grown steadily over the past decade and show no sign of dying down, according to an analysis of transcripts published by Bloomberg. With all this investment, why isn't AI where we hoped it would be? Part of the problem is overblown technology marketing. But AI scientists themselves may share some of the blame. These systems rely on two things: a working model and the underlying data used to train that model. To build good AI, developers need to spend the vast majority of their time, perhaps around 90%, on the data - collecting, categorizing and cleaning it. It is boring, hard work, and today's machine learning community tends to overlook it, because scientists attach more value and prestige to the complexity of the AI architecture, to how elaborate the model is.
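
To make that "data work" concrete, here is a minimal, hypothetical sketch in Python of the kind of unglamorous cleaning described above: deduplicating records, dropping rows with missing or unexpected labels, and normalizing text before any model is trained. The file name, column names and label set are illustrative assumptions, not details from the article.

# A hypothetical sketch of the "90% of the work" data-cleaning step.
# File name and column names ("text", "label") are illustrative assumptions.
import pandas as pd

VALID_LABELS = {"harmful", "benign"}  # example label set, not from the article

def clean_training_data(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)

    # Normalize raw text so trivial variants and near-duplicates collapse together.
    df["text"] = df["text"].astype(str).str.strip().str.lower()

    # Drop rows with missing text or labels.
    df = df.dropna(subset=["text", "label"])

    # Keep only rows whose label belongs to the expected label set.
    df = df[df["label"].isin(VALID_LABELS)]

    # Remove exact duplicates, which would otherwise bias training and evaluation.
    df = df.drop_duplicates(subset=["text"])

    return df.reset_index(drop=True)

if __name__ == "__main__":
    cleaned = clean_training_data("training_data.csv")
    print(f"{len(cleaned)} usable rows after cleaning")

Trivial as it looks, this kind of filtering, and deciding what the valid labels even are, is where most of the effort in a real project tends to go.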

One result: the most popular datasets used to build AI systems for computer vision and language processing are riddled with errors, according to a recent study by MIT scientists. The cultural focus on complex model building is actually holding AI back. But there are encouraging signs of change. Scientists at Alphabet's Google recently lamented the model-versus-data problem in a conference paper and suggested ways to create better incentives for fixing it.

FURTHER STEPS

Enterprises are also shifting their focus away from "AI as a service" vendors who promise to perform tasks out of the box, as if by magic. Instead, they are spending more money on data preparation software, says Brendan Burke, senior analyst at PitchBook. He says AI companies like 'Palantir Technologies' and 'C3.ai' have "underperformed", while data science companies like 'Databricks' are "achieving higher valuations and superior results".

It's okay for AI to mess up occasionally in low-stakes situations, like movie recommendations or unlocking your smartphone with your face. But in areas like healthcare and social media content, more training and better data are still needed. Instead of trying to make AI work today, businesses need to lay the groundwork with data and people to make it work in the (hopefully) not-too-distant future.

Translated and edited by: Filip Ivanovic

Source: Bloomberg

