The 3 genres of AI you should know about

Artificial Intelligence is making its way into everything – from autonomous vehicles to self-responding emails to smart homes. It seems like you could take just about any commodity (healthcare, flying, travelling, etc.) and make it smarter through a specialized application of AI. So unless you believe in a Terminator-style turn of events, you are probably asking yourself what benefits AI could herald for your workplace or line of business.

Basically, there are three main branches of AI:

1) Cognitive AI

Cognitive computing is the most popular branch and is responsible for all interactions that are meant to feel “human-like”. Cognitive AI must be able to handle complexity and ambiguity with ease, while continuously learning from experience through data mining, natural language processing (NLP) and intelligent automation.

It is increasingly common to regard cognitive AI as a hybrid: “best bets” made by the AI, combined with decisions from humans working alongside it to oversee the trickier or inconclusive cases. This broadens the applicability of the AI and produces faster, more reliable answers.
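
To make the hybrid idea concrete, here is a minimal Python sketch of confidence-based routing: the AI answers on its own only when it is sufficiently sure, and escalates ambiguous cases to a human. The toy data, classifier and 0.8 threshold are illustrative assumptions, not a reference to any particular product.

# Train a toy classifier on four hand-made examples.
from sklearn.linear_model import LogisticRegression

X_train = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y_train = [0, 0, 1, 1]
model = LogisticRegression(C=100).fit(X_train, y_train)  # large C: sharper probabilities

def route_request(features, threshold=0.8):
    """Answer automatically when confident, otherwise hand off to a human."""
    probabilities = model.predict_proba([features])[0]
    best = probabilities.argmax()
    if probabilities[best] >= threshold:
        return "ai", int(best)   # the AI's "best bet"
    return "human", None         # inconclusive: a person decides

print(route_request([0.05, 0.15]))  # clear-cut case, handled by the AI
print(route_request([0.5, 0.5]))    # borderline case, routed to a human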

2) Machine Learning AI

Machine Learning (ML) is the branch that drives your Tesla down the highway. It is still at the cutting edge of computer science, but promises to have the biggest impact on everyday workplaces one day. Machine Learning is about finding patterns in big data where commonplace statistical analysis sees none, and using those patterns to predict results without much human interpretation.

Machine Learning, however, requires three key ingredients to become effective:

a) Data, and lots of it
To teach the AI new tricks, you have to feed buckets of data into its model so that it produces reliable output scores. Tesla, for example, has deployed an auto-steering feature in its cars that simultaneously sends home all the data points it collects – driver interventions, successful evasions, false alarms, etc. – so the system learns from its mistakes and gradually sharpens its senses. A great way to produce a lot of input is through sensors: either your hardware has built-in ones such as radar, cameras and the steering wheel (if it's a car), or you lean on the Internet of Things (IoT). Bluetooth beacons, health trackers, smart home sensors, public databases and the like are just a small fraction of the ever-growing number of internet-connected sensors that generate more data than any human could ever process.
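
As a small illustration of what "model input" looks like in code, the Python sketch below turns raw sensor readings into a feature matrix and a label vector. The field names (speed, steering_angle, driver_intervened) are hypothetical stand-ins for the kinds of signals a car or IoT device might report.

import numpy as np

# Raw readings as they might arrive from the vehicle or device.
readings = [
    {"speed": 88.0, "steering_angle": -2.1, "driver_intervened": 0},
    {"speed": 92.5, "steering_angle": 14.7, "driver_intervened": 1},
    {"speed": 61.3, "steering_angle": 0.4,  "driver_intervened": 0},
]

# Features the model learns from, and the label it learns to predict
# (here: did the human have to take over?).
X = np.array([[r["speed"], r["steering_angle"]] for r in readings])
y = np.array([r["driver_intervened"] for r in readings])

print(X.shape, y.shape)  # (3, 2) (3,) – thousands of rows in practice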

b) Discovery
To make sense of your data and cut through the noise, Machine Learning puts algorithms to work that can sort, slice and translate data chaos into comprehensible insights. (If you want to weird out your colleagues, listen to the sound of different sorting algorithms at work: https://www.youtube.com/watch?v=kPRA0W1kECg)

There are two ways for algorithms to learn about the data: unsupervised or supervised.

Unsupervised algorithms deal with figures and raw data only; no descriptive labels or dependent variables are established. The aim is for the algorithm to find an intrinsic structure where humans didn’t think there was one. This is useful for gaining new insights into market segmentation, correlations, outliers and the like.
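
A short Python sketch of the unsupervised case, using k-means clustering to discover customer segments in unlabeled data; the two features and three clusters are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans

# Unlabeled data: e.g. [annual spend, visits per month] per customer.
customers = np.array([
    [200, 1], [250, 2], [220, 1],   # occasional shoppers
    [900, 8], [950, 9], [880, 7],   # frequent big spenders
    [500, 4], [520, 5],             # something in between
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)  # a segment id per customer, discovered without labels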

Supervised algorithms, on the other hand, know about the relations between different data sets through labels and variables, and use that power primarily to extrapolate and predict future data. This comes in handy for anything from climate change models to predictive analytics to content recommendations.
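
And the supervised counterpart: a model is shown labeled examples and then extrapolates to unseen data. The temperature series below is made up purely for illustration.

import numpy as np
from sklearn.linear_model import LinearRegression

# Labeled data: year (feature) -> average temperature anomaly (label).
years = np.array([[2000], [2005], [2010], [2015], [2020]])
anomaly = np.array([0.40, 0.54, 0.62, 0.76, 0.84])

model = LinearRegression().fit(years, anomaly)
print(model.predict([[2030]]))  # extrapolate a prediction for a future year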

c) Deployment
Machine Learning needs to find its way from the computer science labs into everyday software. More and more vendors of CRM, marketing, ERP and similar systems are building up competencies here, either embedding Machine Learning directly or integrating tightly with services that offer it.
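
A minimal Python sketch of that deployment step: the trained model is serialized once, then application code (a CRM, a web service, etc.) loads and calls it without retraining. joblib is a common choice for scikit-learn models; the file name is arbitrary.

import joblib
from sklearn.linear_model import LogisticRegression

# Done once, in the lab: train and save the model.
model = LogisticRegression().fit([[0], [1], [2], [3]], [0, 0, 1, 1])
joblib.dump(model, "churn_model.joblib")

# ...later, inside the business application: load and score live records.
loaded = joblib.load("churn_model.joblib")
print(loaded.predict([[2.5]]))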

3) Deep Learning

If Machine Learning is the cutting edge, then Deep Learning is the bleeding edge. It’s the kind of AI you would send to Jeopardy! It combines big data and analytics with unsupervised algorithms. The applications usually center around gigantic unlabeled data sets that need structuring into interconnected clusters, inspired by nothing less than the neural networks in our brains – and therefore fittingly called artificial neural networks.

Deep Learning is the basis for many modern speech and image recognition approaches, and its accuracy keeps improving over time in a way the non-learning approaches of the past could not match.
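
For a feel of what such an artificial neural network looks like in code, here is a minimal Keras sketch that learns to recognize handwritten digits (MNIST). The layer sizes and epoch count are illustrative defaults, not a tuned architecture.

from tensorflow import keras

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),     # 784 input "neurons"
    keras.layers.Dense(128, activation="relu"),     # hidden layer
    keras.layers.Dense(10, activation="softmax"),   # one output per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3)               # accuracy grows with data
print(model.evaluate(x_test, y_test))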

The hope is that Deep Learning AIs will one day autonomously answer customer enquiries and fulfill orders over chat or email. Or they could assist marketing by suggesting new products and specifications drawn from their enormous data pools. Or maybe they will become omnipresent assistants at the workplace that entirely blur the line between robots and humans.

AIs live and improve through the scale of data thrown at them, which means we will see better AIs over time – but also that their development will center around the organizations that can tap into the largest data sets.