Elon is Right, AI is Hard: Five Pitfalls to Avoid in Artificial Intelligence | eWEEK

During the recent Tesla AI Day event, Elon Musk said he discourages machine learning “because it is really difficult. Unless you have to use machine learning, don’t do it.”

Well, Musk may be right in his assessment: machine learning is quite difficult to implement. Most companies want the benefits artificial intelligence can deliver for their business, but most don’t have what it takes to get it up and running. As a result, as many as 85% of ML projects currently fail.

The takeaway from Musk’s startling statement is that organizations can’t treat AI, of which machine learning is a subset, like a part-time project. Many businesses make serious mistakes when trying to adopt AI, but it doesn’t have to be this way. Below, Bin Zhao, Ph.D., Lead Data Scientist at Datatron, outlines five common mistakes of AI implementation.

1. Careful: this isn’t traditional software development

Don’t treat AI/ML development like traditional software development. Developing AI/ML models is a very different process, yet many organizations try to apply the traditional software development lifecycle to managing AI/ML models.

The machine learning development lifecycle (MLLC) takes much more time because of additional factors, including translating AI algorithms into compatible software code, unique infrastructure requirements, the need for frequent model iterations, and more. Compared with traditional software development, it can take more than five times as long. This means today’s typical application release processes simply don’t apply.
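The frequent model iterations mentioned above can be sketched in miniature. The toy “model” and function names below are purely illustrative, not any real framework’s API; the point is that, unlike traditional software, each cycle produces a new model version whose quality must be measured before it can be kept:

```python
# Hypothetical sketch of the iterative loop that sets ML development apart
# from traditional software: each cycle yields a versioned model and a metric,
# and only the best-performing version survives.
import random

def train(data, seed):
    # Toy "model": estimate the mean from a random half of the data
    # (a stand-in for real training, which is far more expensive).
    random.seed(seed)
    sample = random.sample(data, k=max(1, len(data) // 2))
    return sum(sample) / len(sample)

def evaluate(model, data):
    # Mean absolute error of the toy model against the full data set.
    return sum(abs(x - model) for x in data) / len(data)

data = [float(x) for x in range(100)]
history = []  # every iteration is recorded with its version and metric
for version in range(5):
    model = train(data, seed=version)
    error = evaluate(model, data)
    history.append((version, model, error))

best = min(history, key=lambda h: h[2])
print(f"kept model v{best[0]} with error {best[2]:.2f}")
```

A conventional release process has no equivalent of this loop, which is one reason the timelines differ so sharply.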

2. Using or standardizing the wrong tools can hamper data scientists’ productivity

This kind of tooling mistake introduces unnecessary delays and inefficiencies. In most IT situations, organizations can control the types of servers they buy, the software tools they use, the dependencies they build with, and so on.

Not so with AI/ML; organizations must allow their data scientists to use their preferred tools based on what they think will get the job done in the best way. Otherwise, they’re likely to see all their data scientists leave.

3. IT/DevOps staff can lack ML expertise

DevOps is the union of software development and operations, with the goals of reducing solution delivery time and sustaining a good user experience through automation (e.g., CI/CD and monitoring). But DevOps experts typically don’t know the nuances of working with ML models.

MLOps is a newer term that describes how to apply DevOps principles to automate the building, testing, and deployment of ML systems. The goal of MLOps is to unite ML application development with the operation of ML applications, making it easier for teams to deploy better models more frequently.
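One concrete piece of such automation is a quality gate in the deployment pipeline. The following is a hedged sketch, not a real MLOps product’s API; the function names and the `min_gain` threshold are assumptions made for illustration. It shows the idea of promoting a newly trained model only when it measurably beats the one already in production:

```python
# Illustrative MLOps-style quality gate: in an automated build-test-deploy
# pipeline, a candidate model is promoted only if it beats the production
# model by a minimum margin. Names and thresholds here are hypothetical.
def should_promote(candidate_score, production_score, min_gain=0.01):
    """Promote only if the candidate beats production by at least min_gain."""
    return candidate_score >= production_score + min_gain

def pipeline_step(candidate_score, production_score):
    # In a real CI/CD system this decision would gate a deployment job.
    if should_promote(candidate_score, production_score):
        return "deploy"
    return "keep current model"

print(pipeline_step(0.93, 0.90))   # clears the gate
print(pipeline_step(0.905, 0.90))  # improvement too small to redeploy
```

Gates like this are what let teams ship models frequently without shipping regressions.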

4. Beware of the misalignment of the skill sets of data scientists

Data scientists need the right raw data for modeling, and they excel at mining that data to build the best models for solving business challenges. However, that does not mean they are experts in all the intricacies of deploying models to work with existing applications and infrastructure. This causes friction with the engineering team and business leaders, resulting in low job satisfaction for data scientists.

Though highly skilled and trained, they must rely on others for deployment and production, which also means they can’t iterate rapidly. And when projects shift to the engineering team, who lack the ML skill set, it’s easy to miss details – especially if the model is not making accurate predictions.
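Catching a model that has stopped making accurate predictions does not require deep ML expertise if simple monitoring is in place. The sketch below is hypothetical (not a specific monitoring product’s API): it compares the average of recent predictions against a training-time baseline and flags when the shift exceeds a tolerance, giving the engineering team an early warning:

```python
# Hypothetical production-monitoring sketch: flag a model whose recent
# predictions have shifted away from the training-time baseline, so problems
# are caught even after the hand-off from data science to engineering.
def drift_score(baseline, recent):
    """Absolute shift in mean prediction between the two windows."""
    mean_baseline = sum(baseline) / len(baseline)
    mean_recent = sum(recent) / len(recent)
    return abs(mean_recent - mean_baseline)

def needs_review(baseline, recent, threshold=0.1):
    """True when the shift exceeds the tolerance, signalling a possible issue."""
    return drift_score(baseline, recent) > threshold

baseline_preds = [0.50, 0.52, 0.48, 0.51]  # averages seen during validation
healthy = [0.49, 0.53, 0.50, 0.52]
drifted = [0.72, 0.70, 0.75, 0.71]
print(needs_review(baseline_preds, healthy))  # False
print(needs_review(baseline_preds, drifted))  # True
```

Real systems compare full distributions rather than means, but even this crude check turns “the model quietly went bad” into an actionable alert.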

5. Don’t get too caught up in the romance of academic AI research vs. business reality

Academic AI research has historically focused on developing models and algorithms. Limited effort has been devoted to iterating on and improving data sets for a specific business problem, operationalizing a machine learning model, or monitoring models in production.

Building and deploying a machine learning model to solve a real-world problem involves much more than developing the algorithm itself.

A sound plan for ML success

Operationalizing ML models is hard but not impossible. Adopting a dedicated model development lifecycle streamlines both model development and model production by helping data scientists, engineers, and other involved teams make effective decisions in a timely manner, and it helps teams reduce production risks. A sound model governance tool can also help by standardizing processes, simplifying governance, and significantly reducing risk.
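To make the governance idea concrete, here is a minimal, hypothetical sketch of what a governance record might capture: the model’s name, version, metrics, and approval status. Real governance tools track far more (lineage, training data, sign-offs), and none of these names reflect a particular product, but even this much standardizes the hand-off between data science and engineering:

```python
# Minimal, hypothetical model-governance sketch: a registry of versioned
# models with metrics and an explicit approval flag gating deployment.
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    version: int
    metrics: dict
    approved: bool = False

registry = {}

def register(record):
    registry[(record.name, record.version)] = record

def approve(name, version):
    # In practice this step would require a documented review and sign-off.
    registry[(name, version)].approved = True

register(ModelRecord("churn", 1, {"auc": 0.81}))
register(ModelRecord("churn", 2, {"auc": 0.84}))
approve("churn", 2)

deployable = [key for key, rec in registry.items() if rec.approved]
print(deployable)  # only the approved version may ship
```

The point is not the data structure but the discipline: every model in production should be traceable to a recorded, approved version.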

About the Author:

Bin Zhao, Ph.D., Lead Data Scientist at Datatron

