IDG Contributor Network: AI disruptions and the power of intelligent data

Machine learning and artificial intelligence are transforming practically every domain and industry in unprecedented ways. Fueled by ever-increasing computing resources, faster algorithms, and advances in machine learning backed by vast amounts of data, AI is rapidly changing existing business processes.

An AI system should be engineered to demonstrate general intelligence the way humans do: to exhibit intelligence that is not specific to one category of tasks, or at least to generalize beyond those specifics, and to relate that understanding to real-world tasks, issues, and cases.

The ability to strike this balance enables an AI system to deal with new situations that are very different from the ones it has encountered before.

The “intelligence” in the data

Companies are striving to transform areas such as operations optimization, fraud detection and prevention, and financial performance improvement, and one of the key factors that drives these initiatives to success is the ability to feed them with intelligent data from trusted repositories. As with any other digital strategy an organization builds, data is pivotal to the success of an artificial intelligence strategy. Said differently, AI systems can only be as intelligent as the data they work with.

TensorFlow, an open-source machine learning library that originated in the Google Brain project, can perform language modeling tasks such as sentiment analysis and predicting the next word in a sentence given the history of previous words. Language modeling is one of the core tasks of natural language processing and is widely used in image captioning, machine translation, and speech recognition. TensorFlow creates predictive models by training machine learning algorithms with large data sets; the algorithm iteratively makes predictions based on the training data, but the desired result is not a model that merely reproduces the data it was trained on, it is a model that generalizes well. TensorFlow is flexible and can express a wide variety of algorithms, including training and inference for deep neural networks over large data sets drawn from sources such as YouTube videos and photo repositories.
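To make the language-modeling idea concrete, here is a minimal, hypothetical sketch of a next-word model built with TensorFlow's Keras API. The vocabulary size, context window, and randomly generated training data are placeholders for illustration; they are not TensorFlow's own language-modeling example or a production setup.

```python
# Minimal sketch of a next-word language model in TensorFlow/Keras.
# Vocabulary, corpus, and hyperparameters are invented for illustration;
# a real language model would be trained on a large text corpus.
import numpy as np
import tensorflow as tf

vocab_size = 5000   # assumed vocabulary size
seq_len = 10        # number of previous words used as context

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 64),   # word ids -> dense vectors
    tf.keras.layers.LSTM(128),                   # summarize the word history
    tf.keras.layers.Dense(vocab_size, activation="softmax"),  # next-word distribution
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Toy training data: each row is a window of previous word ids,
# each label is the id of the word that follows that window.
x_train = np.random.randint(0, vocab_size, size=(1000, seq_len))
y_train = np.random.randint(0, vocab_size, size=(1000,))
model.fit(x_train, y_train, epochs=2, batch_size=32)

# Predict a probability distribution over the next word for one context window.
context = np.random.randint(0, vocab_size, size=(1, seq_len))
next_word_probs = model.predict(context)
print(next_word_probs.shape)   # (1, vocab_size)
```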

Why is enterprise data so critical for AI?

Data is the primary source for building a knowledge base of timely and meaningful insights. Data velocity and volume both play a crucial role in making accurate, information-based decisions. The same logic applies to machine learning: AI systems modeled with data from a wide variety of sources can be expected to produce more accurate results, in a timely manner. This matters for any organization, in any industry, that wants to build AI capabilities.

Machine learning algorithms do the job of fitting a specific AI model to data. First a model is built, and then it is trained by presenting large volumes of data to it. The model adjusts itself to the various data sets it sees; however, too much affinity to one data set, or to the training data itself, creates problems when a new scenario is encountered.
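A small illustration of that risk, using an invented one-dimensional data set and NumPy polynomial fitting rather than any particular AI framework: the highly flexible model matches its training points very closely but does worse than the simpler one on data it has not seen.

```python
# Illustrative sketch: a model that clings to its training data
# generalizes worse than a simpler one. Data and degrees are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 15)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, size=x_train.shape)

x_new = np.linspace(0, 1, 100)          # the "new scenario" the model has not seen
y_new = np.sin(2 * np.pi * x_new)

for degree in (3, 10):                  # simple model vs. highly flexible model
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    new_err = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"degree {degree}: train error {train_err:.3f}, new-data error {new_err:.3f}")
```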

Generalization and specialization

People learn many things from examples; teachers teach children by citing them. Children learn to differentiate between two similar objects by noticing finer attributes of those objects. Barring the exception of some superkids, it is difficult for most two-year-olds to tell car models apart. Human intelligence at that age forms a model based on inductive learning, a technique referred to as generalization. Inductive learning grows the model by exposing it to more cars, building a belief from developing cognitive abilities and from observing more examples. Based on that initial belief, it can be assumed that a car has four wheels.
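As a toy sketch of inductive generalization, with observations invented purely for illustration: from a handful of observed cars, an agent induces a general belief about wheel count.

```python
# Toy sketch of inductive generalization from examples.
# The observations are invented; a real system would learn from far more data.
observed_cars = [
    {"model": "sedan", "wheels": 4},
    {"model": "hatchback", "wheels": 4},
    {"model": "suv", "wheels": 4},
]

# Induce a general belief from the wheel counts seen across all examples.
wheel_counts = {car["wheels"] for car in observed_cars}
if len(wheel_counts) == 1:
    belief = f"a car has {wheel_counts.pop()} wheels"
else:
    belief = f"cars have {sorted(wheel_counts)} wheels"
print("General belief:", belief)
```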

The specialization technique is the exact opposite of generalization. Human intelligence grows with age, and human cognition recognizes more attributes in a person's model of understanding, differentiating things based on those attributes. Likewise, AI agents need specialization when inductive learning on very large data makes the model overgeneralized. For example, after observing restaurants and their menus, we may build a general belief that all restaurants serve food. But not all restaurants serve kosher food. Here, we must use specialization to restrict the generalized belief and find the specific restaurants that serve kosher food. In simple terms, specialization is achieved by adding extra conditions to the predefined model.
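Continuing the toy example, here is a hypothetical sketch of specialization as an extra condition layered on top of a general belief. The restaurant records are invented for illustration.

```python
# Toy sketch of specialization: add an extra condition to a general belief.
# Restaurant data is invented for illustration only.
restaurants = [
    {"name": "Deli One", "serves_food": True, "kosher": True},
    {"name": "Corner Bistro", "serves_food": True, "kosher": False},
    {"name": "Grill House", "serves_food": True, "kosher": False},
]

# General belief induced from observation: all restaurants serve food.
general = [r for r in restaurants if r["serves_food"]]

# Specialization: restrict the general belief with an extra condition.
kosher_only = [r["name"] for r in general if r["kosher"]]
print("Kosher restaurants:", kosher_only)
```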

AI strategies to teach machines to think like people do

AI-based systems use generalization and specialization to build their core knowledge base and train their models. The idea is to maintain a single model based on the training data and to adjust it as new scenarios arrive, so that it stays consistent. This requires processing data or information about a subject area in different ways depending on the need. To create predictive models that generalize well, systems also need to know when to stop training the model so that it doesn't overfit.

Overfitting can happen when the model becomes very large and the AI agent starts making decisions that simply echo the training data. If the training data sets are too narrow and do not contain many scenarios, the model may underfit as well. To counter this, an AI system needs to evaluate many scenarios and build the model so that it can also represent some degree of specialization.
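One common way to decide when to stop training, as described above, is to monitor performance on held-out validation data and halt once it stops improving. Below is a minimal, hypothetical Keras sketch of that idea; the model, the random data, and the thresholds are placeholders, not a prescription.

```python
# Minimal sketch of early stopping: halt training when validation loss
# stops improving, so the model does not keep fitting noise in the training set.
# Architecture and data are placeholders for illustration.
import numpy as np
import tensorflow as tf

x = np.random.rand(1000, 20).astype("float32")
y = np.random.randint(0, 2, size=(1000, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

stop_early = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",         # watch performance on held-out data
    patience=3,                 # allow a few epochs without improvement
    restore_best_weights=True,  # keep the best model seen so far
)

model.fit(
    x, y,
    validation_split=0.2,       # hold out 20% of the data for validation
    epochs=50,
    batch_size=32,
    callbacks=[stop_early],
)
```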

Role of quality enterprise data

Not all data sets are equally useful, though: an AI agent in a self-driving car trained only on straight highways will have a hard time navigating a desert unless it has been trained on similar conditions. Good AI agents need simple data sets to build a generalized model first, and then diverse data sets to address the tasks where specialization is needed. This requires a constant inflow and a wide variety of metadata and data to train the machine learning system and provide trusted, effective artificial intelligence.

Enterprises can build great AI systems by using data from across their repositories, building rich training data sets and catalogs, and then training AI agents with those data sets. Metadata can also be integrated from a range of systems such as mainframes, HR, CRM, and social media, and gathered into the training data sets and catalogs.
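As a rough sketch of that kind of consolidation, the following combines invented extracts from two of the source systems mentioned above, a CRM system and a social media feed, into a single training data set using pandas. The column names, join key, and values are purely illustrative.

```python
# Hypothetical sketch: consolidating records from two enterprise systems
# into one training data set. All column names and values are invented.
import pandas as pd

crm = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "region": ["east", "west", "east"],
    "lifetime_value": [1200.0, 340.0, 870.0],
})

social = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "mentions_last_30d": [5, 0, 2],
    "sentiment_score": [0.8, -0.2, 0.4],
})

# Join on the shared key to build one row per customer for model training.
training_set = crm.merge(social, on="customer_id", how="inner")
print(training_set)
```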
