Table of Contents
- Introduction
- Future scope
- Artificial intelligence and cognitive computing
- Cognitive in image recognition
- Conclusion
Introduction
Image recognition, in the context of machine vision, is the ability of software to identify objects, places, people, writing and actions in images. Computers can use machine vision technologies, in combination with a camera and artificial intelligence software, to achieve image recognition. Image recognition is used to perform a large number of machine-based visual tasks, such as labeling the content of images with meta-tags, performing image content search, and guiding autonomous robots, self-driving cars and accident-avoidance systems. While human and animal brains recognize objects with ease, computers find the task difficult. Software for image recognition requires deep learning, and performance is best on processors optimized for convolutional neural networks, because the task is otherwise too compute-intensive and power-hungry. Image recognition algorithms can work by comparing against 3D models, by matching appearances from different angles using edge detection, or by recognizing individual components. They are typically trained on millions of pre-labeled pictures in a supervised learning process.
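To make the idea of a trained convolutional network concrete, here is a minimal sketch of classifying a single image with a network pretrained on the pre-labeled ImageNet dataset. It assumes PyTorch, torchvision and Pillow are installed, and "photo.jpg" is a hypothetical local file; it is a generic illustration, not a specific system described in this article.

```python
# Minimal sketch: image recognition with a pretrained convolutional neural network.
# Assumes torch, torchvision and Pillow are installed; "photo.jpg" is a hypothetical file.
import torch
from torchvision import models, transforms
from PIL import Image

# Load a CNN pretrained on the ImageNet pre-labeled image dataset.
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()

# Standard ImageNet preprocessing: resize, crop, convert to tensor, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    logits = model(batch)
    probs = torch.nn.functional.softmax(logits[0], dim=0)

# Print the five most likely labels with their probabilities.
top5 = torch.topk(probs, 5)
labels = weights.meta["categories"]
for p, idx in zip(top5.values, top5.indices):
    print(f"{labels[idx]}: {p.item():.3f}")
```

The heavy lifting (the millions of labeled training images and the compute-intensive training itself) is hidden inside the pretrained weights; inference on a single photo, as shown here, is comparatively cheap.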
Future scope
Current and future applications of image recognition include smart photo libraries, targeted advertising, interactive media, accessibility tools for the visually impaired and enhanced research capabilities. Google, Facebook, Microsoft, Apple and Pinterest are among the many companies investing significant resources and research into image recognition and related applications. These technologies also raise privacy concerns, since the companies behind them can pull a large volume of data from user photos uploaded to their social media platforms. Meanwhile, artificial intelligence is already present in plenty of applications, from the search algorithms and tools you use every day to bionic limbs for the disabled. "Cognitive computing" is a term used mainly by IBM; computers, however, are not really cognitive.
Artificial intelligence and cognitive computing
Although artificial intelligence (as a set of technologies, not in the sense of mimicking human intelligence) has been around in many forms for a long time, it's a term that quite a few people prefer not to use that much anymore. Yet artificial intelligence is real, for your business too. Instead of talking about artificial intelligence (AI), many describe the current wave of AI innovation and acceleration with admittedly somewhat differently positioned terms and concepts such as cognitive computing, or they focus on real-life applications of artificial intelligence that often start with words such as "smart" (omnipresent in anything related to the IoT as well), "intelligent", "predictive" and, indeed, "cognitive", depending on the exact application and vendor. Despite these terminology issues, artificial intelligence is essential for, among other areas, information management, healthcare, life sciences, data analysis, digital transformation, security (cybersecurity and others), various consumer applications, next-generation smart building technologies, FinTech, predictive maintenance and robotics. On top of that, AI is being added to several other technologies, including the IoT and big (as well as small) data analytics.
Cognitive in image recognition
Deep learning, image recognition, hypothesis generation, artificial neural networks: they are all real, and parts of them are already used in various applications. According to IDC, cognitive computing is one of six Innovation Accelerators on top of its third platform, and the company expects global spending on cognitive systems to reach nearly $31.3 billion in 2019.
The foundation of that so-called third platform consists of four sets of technologies that are interconnected and, de facto, inherently connected with AI as well. As a reminder: the high interconnectivity of technologies and processes in real-life applications is a core trait of what we've come to know as the digital transformation or DX economy. Each of these sets of technologies (which, just like AI, consist of several technologies and, more importantly, applications and consequences) is a technological driver of digital transformation in its own right.
One of these innovation accelerators, as IDC's third platform framework shows, is the set of so-called cognitive systems technologies. Cognitive computing is really a term that has been popularized mainly by IBM to describe the current wave of artificial intelligence, and specifically also machine learning, with a twist of purpose, adaptiveness, self-learning, contextuality and human interaction. The human element is key here and, without a doubt, easier to digest than all those AI-related doomsday movie scenarios. Essentially, cognitive systems analyze the huge amounts of data created by connected devices (not just the IoT) with diagnostic, predictive and prescriptive analytics tools that observe, learn and offer insights, suggestions and even automated actions (a short sketch of that loop appears a little further below).

Strictly speaking, the term "cognitive computing" is a conundrum. Cognition, for instance, also includes the subconscious, which is in fact a major part of cognition. Although this would take us too far, it needs to be said that IBM does make exaggerated claims about what its "cognitive" platform Watson can do. Marketing, indeed. People have some typical characteristics that AI isn't able to understand. A simple example: as far as we know, we're still the only species that knows it exists (within the limits of what human knowledge is able to know; more food for discussion with transhumanists). Human emotions are also about more than brains and intelligence. Emotions, often irrationally conflicting, can't be reduced to math, and the whole comparison of people with machines is deeply flawed.

The data universe is exploding, with unstructured data growing much faster than other data (strictly speaking, all data is structured one way or another, but when we speak of unstructured data we mainly mean text, images and so forth). This is due to, among other things, mobile data traffic and the Internet of Things.
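Going back to the diagnostic, predictive and prescriptive loop mentioned above, here is a toy sketch of the idea on connected-device data. The sensor readings, thresholds and suggested action are invented purely for illustration and do not come from any particular platform or vendor.

```python
# Toy sketch of a diagnostic -> predictive -> prescriptive loop on device data.
# All readings and thresholds are hypothetical, chosen only to illustrate the idea.
import statistics

temperature_readings = [68.2, 68.5, 69.1, 68.9, 75.4, 82.7, 88.3]  # hypothetical sensor data

# Learn a simple "normal" baseline from the earlier readings.
baseline = temperature_readings[:-3]
mean = statistics.mean(baseline)
stdev = statistics.pstdev(baseline)

# Diagnostic: flag recent readings that are far from the learned baseline.
recent = temperature_readings[-3:]
anomalies = [t for t in recent if abs(t - mean) > 3 * stdev]

# Predictive: a crude trend check over the most recent readings.
trending_up = recent[-1] > recent[0]

# Prescriptive: suggest (or automate) an action when both signals agree.
if anomalies and trending_up:
    print("Suggested action: schedule maintenance for the overheating unit.")
else:
    print("No action needed.")
```

A real cognitive or analytics system would of course use far richer models and many more data sources, but the observe, learn and suggest pattern is the same.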
The typical thing about unstructured data is that it doesn't have a predefined data model, as you have with data sitting in a relational database, for instance. Unstructured data and content as such have no meaning or context because, in principle, we don't know what they are. They come in many shapes and forms, from several sources, and are often text-intensive: from paper documents that need to be digitized to Twitter messages or email, also a major source of unstructured data and content. And it's here that, again, various artificial intelligence techniques such as intelligent document recognition (IDR), text mining, self-learning knowledge base technology, machine learning, natural language processing and the whole cognitive computing aspect come into the picture.
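As a small illustration of how machine learning gives unstructured text a usable structure, here is a sketch that classifies short email snippets into categories. It assumes scikit-learn is installed, and the tiny labeled dataset and category names are invented for the example; any real system would train on far more data.

```python
# Minimal sketch: turning unstructured text into categories with machine learning.
# Assumes scikit-learn is installed; the emails and labels below are hypothetical toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Invoice attached, payment due by Friday",
    "Your package has shipped and will arrive Tuesday",
    "Reset your password using the secure link below",
    "Payment reminder: your invoice is overdue",
]
labels = ["billing", "shipping", "account", "billing"]

# Turn free text into numeric features, then fit a simple classifier on top.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(emails, labels)

# Classify a new, previously unseen message.
print(model.predict(["Please confirm the invoice total before payment"]))
# Expected to print something like ['billing'] for this toy data.
```

The point is not the specific classifier but the pattern: free-form text with no predefined data model is converted into features a machine can reason about, which is exactly where text mining, IDR and NLP techniques operate.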
Conclusion
For now, let's say it's clear there is no harm in an algorithm that enables people to find something better, and there is no harm in having a system that helps you process and understand information faster and better in order to improve anything worth improving, such as customer service (with growing usage of IDR applications and knowledge base technology), cybersecurity or people's health, to name just a few. But cognitive computing as a whole is not as far along as we tend to believe, even though there are already ample applications of artificial intelligence in business, and AI, machine learning and deep learning are increasingly being used in a combined approach with related technologies, ranging from advanced analytics and the IoT to robotics, edge computing and more.