[Editor’s Note: This is the second of two blog articles focusing on how industries are adapting artificial intelligence (AI). Over two parts, we look at artificial intelligence for IT operations (AIOps), ethics of artificial intelligence (ethical AI), explainable AI (XAI) and edge AI.]
In part one of this two-part blog article about artificial intelligence (AI) trends, we briefly discussed the recent history of AI and took a closer look at artificial intelligence for IT operations (AIOps) and the ethics of artificial intelligence (ethical AI).
In part two, we continue the conversation, with a spotlight on explainable AI (XAI) and edge AI.
Explainable AI (XAI) refers to AI solutions whose decisions humans can understand in a meaningful way and, therefore, trust. In the regulation of algorithms, particularly AI and machine learning (ML), it refers to the explanation for an output of an ML algorithm.
Essentially, XAI's purpose is to explain AI decisions in a way humans can understand and interpret, so they can build trust in those decisions. It also explains where data comes from and how it's used, which supports regulatory compliance and further strengthens that trust.
According to 451 Research's Voice of the Enterprise: AI/ML Use Cases 2020, "92% of enterprises believe that explainable AI is important"; at the same time, less than half of them have built or purchased "explainability" tools for their AI systems.
Organizations and their consumers have raised many concerns about the "black box" problem in machine learning. This is where even the designers can't explain why the AI arrived at a specific decision, because the model learned its behavior on its own, and the parameters or hyperparameters behind a given result are opaque.
Hence, many concerns are raised during an AI implementation. Organizations and their consumers have the right to an explanation, so they can understand these artificial intelligence trends and trust the solution.
The algorithms used in AI can be divided into "white box" and "black box" ML algorithms. White box models produce results that experts in the domain can understand. Black box models, by contrast, are difficult to explain and rarely understood, even by domain experts.
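As a minimal illustration of what makes a model "white box," here is a sketch in pure Python: a linear scorer whose weights and per-feature contributions are directly readable by a domain expert. The feature names, weights and loan-scoring scenario are hypothetical, chosen only to make the idea concrete.

```python
# A "white box" model: a linear scorer whose weights are explicit,
# so a domain expert can audit exactly why a decision was made.
# Feature names and weights below are hypothetical.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
BIAS = -0.2

def score(applicant: dict) -> float:
    """Linear score: every feature's contribution is visible."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant: dict) -> dict:
    """Per-feature contributions -- the 'explanation' a human can inspect."""
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

applicant = {"income": 1.2, "debt": 0.9, "years_employed": 0.5}
print(score(applicant))    # a single number a neural net would also give...
print(explain(applicant))  # ...but here we can also see why
```

A black box model (say, a deep neural network) would produce the same kind of score, but with no equivalent of `explain()`: its millions of learned parameters don't map to anything a domain expert can read.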
Principles and Domains
XAI algorithms typically follow the three principles of transparency, interpretability and explainability.
Transparency. A model is transparent if the processes that extract model parameters from training data and generate labels from testing data can be described and justified by the approach designer.
Interpretability. This is the possibility of comprehending an ML model and presenting the basis of its decision-making in terms understandable to humans.
Explainability. This is the collection of features of the interpretable domain that contributed, for a given example or use case, to producing a decision.
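One common, model-agnostic way to probe which features a black box actually relies on is permutation importance: shuffle one feature's values at a time and measure how much the model's accuracy drops. The sketch below, in pure Python with a hypothetical stand-in "model" and synthetic data, shows the idea.

```python
import random

def black_box(row):
    # Stand-in for an opaque model; it secretly uses only feature 0.
    return 1 if row[0] > 0.5 else 0

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    """Accuracy drop when one feature's column is randomly shuffled."""
    rng = random.Random(seed)
    col = [x[feature] for x in X]
    rng.shuffle(col)
    X_shuffled = [list(x) for x in X]
    for row, v in zip(X_shuffled, col):
        row[feature] = v
    return accuracy(model, X, y) - accuracy(model, X_shuffled, y)

rng = random.Random(42)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [1 if x[0] > 0.5 else 0 for x in X]  # labels driven by feature 0 only

print(permutation_importance(black_box, X, y, feature=0))  # sizable drop
print(permutation_importance(black_box, X, y, feature=1))  # 0.0: ignored
```

A large accuracy drop tells a human reviewer the model leans heavily on that feature, even when the model's internals can't be inspected directly.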
Some of the domains where XAI is predominant include:
- Medical diagnoses
- Criminal justice
- Autonomous vehicles
- Text analytics
- ML-based trading
From its start in the mid-20th century, AI has come a long way. What was once purely a topic of sci-fi and academic discussions is now a widespread technology adopted by organizations worldwide.
AI is versatile, with applications ranging from drug discovery and patient data analysis to fraud detection, customer engagement and workflow optimization. The technology’s scope is indisputable. And organizations looking to stay ahead of their competition are increasingly adopting it into their business operations.
Again, according to 451 Research’s Voice of the Enterprise: AI/ML Use Cases 2020, “92% of enterprises believe that explainable AI is important; however, less than half of them have built or purchased ‘explainability’ tools for their AI systems. This leaves them open to significant risk; without a human looped into the development process, AI models can generate biased outcomes that may lead to both ethical and regulatory compliance issues later.”
Edge artificial intelligence, or edge AI, runs AI algorithms locally on a hardware device, using data (sensor data or signals) captured or created on that device.
Cloud computing vs. edge computing. Cloud computing involves transmitting data over a network to a centralized server for processing. Edge computing performs that same processing at the edge of the network, on or near the sensors or devices generating the data. The two work together.
With edge AI, the device doesn't need a constant connection to your network. It can process data and make decisions independently, even while offline.
To use edge AI, you need a device with a microprocessor and sensors to capture and send data.
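To make the idea of local, offline decision-making concrete, here is a minimal sketch in pure Python: a rolling-average anomaly detector that flags a suspicious sensor reading on the device itself, with no network round trip. The window size, threshold and readings are hypothetical.

```python
from collections import deque

class EdgeAnomalyDetector:
    """Flags readings that deviate sharply from recent history,
    entirely on-device -- no cloud round trip needed."""

    def __init__(self, window=5, threshold=10.0):
        self.readings = deque(maxlen=window)  # rolling window of recent values
        self.threshold = threshold

    def process(self, value: float) -> bool:
        """Return True if the new reading is anomalous vs. the rolling mean."""
        if self.readings:
            baseline = sum(self.readings) / len(self.readings)
            is_anomaly = abs(value - baseline) > self.threshold
        else:
            is_anomaly = False  # no history yet
        self.readings.append(value)
        return is_anomaly

detector = EdgeAnomalyDetector()
stream = [20.1, 20.3, 19.9, 20.0, 45.7, 20.2]  # simulated sensor readings
flags = [detector.process(v) for v in stream]
print(flags)  # → [False, False, False, False, True, False]
```

Because the decision happens in milliseconds on the device, an actuator can react to the spike immediately; only the flagged event (not the whole raw stream) needs to be sent upstream, which is exactly the latency and bandwidth advantage described below.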
Why Is Edge AI Important?
Edge AI can drive real-time operations, including data creation, decisions and actions, when milliseconds matter. Real-time operation is crucial for self-driving cars, robots, drones and many other domains.
Some other benefits and examples are:
- Reduction of power consumption to improve battery life, which is critical for wearable devices.
- Data communication cost reduction due to less data being transmitted.
- Because data is processed locally, edge AI avoids the privacy problems of streaming and storing data in the cloud.
- Analysts predict, on average, 40% of Internet of Things (IoT) data will need to be processed at the edge.
- Gartner says 91% of today's data is processed in centralized data centers. But by 2022, approximately 74% of data needing analysis and action will be processed at the edge.
So, there you have it. Some of the key artificial intelligence trends, including artificial intelligence for IT operations (AIOps), ethics of artificial intelligence (ethical AI), explainable AI (XAI) and edge AI, laid out for your study.
[For a refresher, read: Industry Trends in Artificial Intelligence, Part 1]
System Soft Technologies can help your organization streamline business processes using AI (and robotic process automation). Our intent is for you to hit revenue goals, efficiently use resources and lower operational costs.
[Watch on-demand webinar: How Small and Mid-Sized Businesses Overcome Automation Barriers]
System Soft holds a two- to three-hour workshop with you to examine your situation. We then partner with you to find answers to your questions and strategize how to tackle your issues.
[Watch on-demand webinar: Cutting Through the Hype of Hyperautomation]
About the Author: Rajesh Patil
Rajesh Patil is the Director of Intelligent Automation at System Soft Technologies. Rajesh has more than 20 years of industry experience enabling organizations through extensive transformation journeys, with a focus on process improvement. He’s proficient in robotic process automation (RPA), intelligent automation, E2E sales lifecycle, solution architecture and automation delivery across various verticals.