Cognitive computing redefines business intelligence (BI) and Information Technology. It combines analytical algorithms, natural language processing, machine learning, and massive computer processing power to enable deeper predictive analysis and pattern discovery.
Machine Learning (ML)
Machine Learning is a discipline in which a program or system learns from existing data and dynamically alters its behaviour as that data changes; the system can therefore learn without being explicitly programmed. Machine Learning algorithms can be broadly categorized into classification, clustering, regression, dimensionality reduction, anomaly detection, and so on. The Machine Learning module acts as the core computing engine of a Cognitive System: using these algorithms and techniques, it helps the system identify patterns and perform complex tasks like prediction, estimation, forecasting, and anomaly detection.
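To make "learning from examples rather than explicit programming" concrete, the minimal sketch below classifies a new point by majority vote among its nearest labelled neighbours (k-nearest-neighbours, one of the simplest classification algorithms). The data points and labels are invented purely for illustration:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points, measured by Euclidean distance."""
    # train: list of ((x, y), label) pairs
    by_distance = sorted(train, key=lambda item: math.dist(item[0], query))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Toy training data: two clusters the system "learns" from examples alone
train = [((1, 1), "A"), ((1, 2), "A"), ((2, 1), "A"),
         ((8, 8), "B"), ((8, 9), "B"), ((9, 8), "B")]
print(knn_predict(train, (2, 2)))  # near cluster A -> "A"
print(knn_predict(train, (8, 7)))  # near cluster B -> "B"
```

No rule "points near (1, 1) are class A" was ever written; the behaviour falls out of the training examples, which is the essence of the learning step described above.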
Machine Reasoning (MR)
Machine Reasoning systems generate conclusions from available knowledge using logical techniques like deduction and induction. Machine Reasoning acts as the brain, or decision engine, within a Cognitive System. These systems are mainly employed to reason about and validate the outcomes of other modules such as ML, Statistical Analysis, and NLP, but they can also function as standalone modules that solve a problem on their own. Some of the most common types of reasoning systems include rules engines, case-based reasoning, procedural reasoning systems, deductive classifiers, and machine learning systems.
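A rules engine, the simplest of the reasoning systems listed above, can be sketched as a forward-chaining loop: apply if-then rules to a set of known facts, deducing new facts until nothing further follows. The facts and rules below are hypothetical, chosen only to show the deduction mechanism:

```python
def forward_chain(facts, rules):
    """Repeatedly apply if-then rules until no new fact can be deduced."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule when all its premises are known facts
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Each rule: (set of premises, conclusion) -- illustrative only
rules = [
    ({"sensor_reading_high", "reading_confirmed"}, "anomaly"),
    ({"anomaly"}, "raise_alert"),
]
derived = forward_chain({"sensor_reading_high", "reading_confirmed"}, rules)
print(derived)  # includes the deduced facts "anomaly" and "raise_alert"
```

Note how "raise_alert" is deduced in two steps, mirroring how a reasoning module might validate an anomaly flagged by an ML module before acting on it.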
Natural Language Processing
Wikipedia defines Natural Language Processing (NLP) as a field of computer science, artificial intelligence, and computational linguistics concerned with the interactions between computers and human (natural) languages, and, in particular, with programming computers to fruitfully process large natural language corpora. Natural Language Understanding (NLU) and Natural Language Generation (NLG) are two of the most prominent sub-fields within NLP. NLP helps cognitive systems comprehend natural language data sources, as well as present insights in natural language. NLP is critical for applications like Search, Text Mining, Sentiment Analytics, Large Scale Content Analysis, Text Summarization, Narrative / Dialog Generation, Chatbots, and Virtual Assistants.
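Much of NLP begins by turning raw text into units a machine can count and compare. The sketch below shows that first step, tokenization followed by bag-of-words term counting, which underlies many of the search and text-mining applications named above; the tiny corpus is invented for illustration:

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def term_frequencies(corpus):
    """Count term occurrences across a corpus (a bag-of-words model)."""
    counts = Counter()
    for doc in corpus:
        counts.update(tokenize(doc))
    return counts

corpus = ["The system reads natural language.",
          "Natural language is ambiguous."]
counts = term_frequencies(corpus)
print(counts["natural"], counts["language"])  # both appear twice
```

Real NLP pipelines go far beyond counting, handling grammar, word order, and meaning, but frequency counts like these are still a common starting point for search indexing and large-scale content analysis.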
Speech Recognition
Speech Recognition is the ability of a machine or program to identify words and phrases in spoken language and convert them into a machine-readable form. Speech Recognition is also commonly known as speech-to-text, automatic speech recognition, or computer speech recognition. Common applications of speech recognition include voice search, Home Automation (like Amazon Echo, Google Home), Virtual Assistants, Speech Analytics, Interactive Voice Response, Contact Centre Analytics, etc.
Computer Vision
The British Machine Vision Association and Society for Pattern Recognition (BMVA) defines Computer Vision as a field concerned with the automatic extraction, analysis, and understanding of useful information from a single image or a sequence of images. Computer Vision deals with the creation of theoretical and algorithmic foundations to achieve automatic visual understanding. Some key applications of computer vision include facial recognition, medical image analysis, self-driving vehicles, asset management, industrial quality management, content-based image retrieval, etc.
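To make "extracting useful information from an image" concrete, the toy sketch below performs one of the most basic vision operations, edge detection, by flagging sharp brightness jumps between neighbouring pixels in each row. The 4×4 grayscale image is invented for illustration:

```python
def edge_map(image, threshold=50):
    """Flag positions where brightness jumps sharply between
    neighbouring pixels in a row -- a minimal edge detector."""
    return [
        [1 if abs(row[x + 1] - row[x]) > threshold else 0
         for x in range(len(row) - 1)]
        for row in image
    ]

# 4x4 grayscale image: dark left half, bright right half
image = [[10, 10, 200, 200]] * 4
print(edge_map(image))  # each row flags the dark-to-bright boundary
```

Production systems use far richer operators (convolutional filters, learned features), but the idea is the same: derive structured, machine-usable information (here, an edge map) from raw pixel values.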
Human Computer Interaction
Interaction Design Foundation defines Human-Computer Interaction (HCI) as “a field of study focusing on the design of computer technology and, in particular, the interaction between humans (the users) and computers.” It encompasses multiple disciplines, such as computer science, cognitive science, and human-factors engineering. The goal of HCI is to make human–computer interaction resemble human–human interaction as closely as possible. Some popular examples of modern HCI include voice-based systems, gesture controls, facial recognition systems, and Natural Language Question Answering (NLQA).
Key Enablers of Cognitive Computing
Big Data & Cloud Computing - Some Cognitive Computing applications, like computer vision or speech recognition, require substantial storage and computing infrastructure. Enterprises can now elastically scale their storage and processing infrastructure with Big Data platforms like Hadoop, and Cloud Computing platforms like Azure, AWS, and Google Cloud.
Cheaper Processing Technology - The exponential decrease in processing cost is another key factor enabling cognitive computing adoption. High processing costs in the 1970s were among the major inhibitors that prevented further research on and adoption of AI. Nick Ingelbrecht from Gartner, in a Financial Review article, explains that in the past eight years there has been a 10,000-fold increase in processing speeds.
Data Availability - IDC estimates that there is around 160 ZB of data in the present digital universe. This data exists in multiple formats, such as machine logs, text, voice, and video, waiting for enterprises to exploit its potential. Data availability is therefore a key enabler for enterprises wishing to embrace cognitive computing.