[Editor’s Note: This is the first of two blog articles focusing on how industries are adopting artificial intelligence (AI). Over two parts, we look at artificial intelligence for IT operations (AIOps), ethics of artificial intelligence (ethical AI), explainable AI (XAI) and edge AI.]
As artificial intelligence (AI) is rapidly adopted across business functions, many organizations are planning how to implement AI and reshape the way they do business with it. As this happens, several AI trends must be considered, evaluated and discussed.
These trends include artificial intelligence for IT operations (AIOps), ethics of artificial intelligence (ethical AI), explainable AI (XAI) and edge AI.
Let’s go through each of these in more detail. But first, a bit of recent AI history.
Recent AI History
During the past few years, artificial intelligence has shown enormous potential and a promising course. But the unexpected developments of a turbulent 2020 (the coronavirus pandemic) forced rapid digital transformation at organizations, cramming years of innovation into a few months.
The disruption has made organizations transform and adopt innovative technologies, including AI trends, in the blink of an eye. AI experts predict this trend won’t slow down; if anything, it will accelerate.
People well versed in the AI domain forecast AI will vastly expand in meaningful ways during 2021 and beyond. Compared to previous years when organizations could experiment with AI, machine learning (ML) and process automation, 2020 proved to be the year to dive in headfirst, Forbes Magazine reports.
While 2020 saw the accelerated deployment of many platforms, research efforts and tools that use AI, 2021 is expected to deliver much more. So much so that it’s been fittingly named the Golden Year of AI implementation, experts emphasize.
Overall, 2021 will be marked by AI growth defined by the launch of many AI applications, delivering insights, efficiency and cost-effectiveness in the digital era. And while it’s going to continue permeating all parts of your life, the following are areas where AI trends are predicted to have the biggest impact on your organization in 2021.
Artificial Intelligence for IT Operations (AIOps)
Artificial intelligence for IT operations (AIOps) is a term Gartner coined in 2016, originally as an acronym for “Algorithmic IT Operations,” to describe the use of ML analytics to enhance IT operations analytics. Today, the acronym is commonly read as “Artificial Intelligence for IT Operations.”
AIOps tasks include automation, performance monitoring, event correlations, predictive analysis, anomaly detection, root cause analysis (RCA), alerting mechanisms and cloud management functions.
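To make one of these tasks concrete, anomaly detection can be sketched in a few lines. This is a toy illustration only, not any vendor’s implementation: it flags metric samples whose z-score exceeds a threshold, whereas real AIOps platforms learn baselines per metric, per service and per season.

```python
from statistics import mean, stdev

def detect_anomalies(samples, threshold=3.0):
    """Flag (index, value) pairs whose z-score exceeds the threshold."""
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []  # a flat series has no outliers
    return [(i, x) for i, x in enumerate(samples)
            if abs(x - mu) / sigma > threshold]

# CPU utilization (%) sampled each minute; the spike at index 7 stands out.
cpu = [22, 25, 24, 23, 26, 25, 24, 97, 23, 25]
anomalies = detect_anomalies(cpu, threshold=2.0)  # [(7, 97)]
```

In practice the flagged events would feed the alerting and root cause analysis functions listed above, rather than simply being returned.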
There are two types of AIOps: domain-centric and domain-agnostic. Domain-centric AIOps is built for and applied within a specific domain. Domain-agnostic AIOps can be applied across any domain.
Here are some best practices or recommendations when implementing AIOps in infrastructure, operations and cloud management.
- Focus on a specific use case, such as replacing rules-based event analytics. Then extend into domain-centric workflows by ingesting events, metrics and traces.
- Implement in IT service management (ITSM), as managing service levels over time becomes crucial.
AIOps Model Functions
- Collect data from various sources, such as cloud, platforms, systems, networks, databases, etc.
- Ingest data into a centralized data lake.
- Classify and categorize data into meaningful classes and categories.
- Analyze historical data and generate enterprise graphs from historical data.
- Predictively analyze future events from real-time data based on historical data.
- Perform actions based on the predictive analysis.
- Monitor and measure the accuracy of the model, then provide feedback to the model, improving accuracy of prediction and action.
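The loop above can be sketched in code. Everything here is illustrative: the class and method names are invented for this example, the “data lake” is an in-memory queue, and the “prediction” is a crude moving average standing in for a trained model.

```python
from collections import deque

class AIOpsPipeline:
    """Toy sketch of the AIOps loop: ingest -> classify -> predict -> act."""

    def __init__(self, window=5, threshold=0.8):
        self.lake = deque(maxlen=1000)  # stand-in for a centralized data lake
        self.window = window            # how many recent events to consider
        self.threshold = threshold      # predicted-utilization alert level
        self.actions = []

    def ingest(self, source, metric, value):
        """Collect data from a source into the 'data lake'."""
        self.lake.append({"source": source, "metric": metric, "value": value})

    def classify(self, event):
        """Categorize events into coarse, meaningful classes."""
        return "capacity" if event["metric"] in ("cpu", "memory") else "other"

    def predict(self):
        """Forecast utilization from recent capacity metrics (0.0 to 1.0)."""
        recent = [e["value"] for e in list(self.lake)[-self.window:]
                  if self.classify(e) == "capacity"]
        if not recent:
            return 0.0
        return sum(recent) / len(recent) / 100.0

    def act(self):
        """Take an action when the prediction crosses the threshold."""
        risk = self.predict()
        if risk > self.threshold:
            self.actions.append("scale-out")
        return risk

pipe = AIOpsPipeline(window=3, threshold=0.8)
for v in (70, 85, 95):
    pipe.ingest("host-1", "cpu", v)
risk = pipe.act()  # averages to ~0.83, so a "scale-out" action is queued
```

The final step in the list, monitoring model accuracy and feeding results back, would close the loop by comparing the predicted risk against what actually happened and adjusting the model.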
Ethics of Artificial Intelligence (Ethical AI)
Ethical AI (the ethics of artificial intelligence) is the branch of technology ethics specific to AI systems. It concerns how an AI system’s moral behavior compares to human behavior, since machine moral behavior is designed by humans.
The basic point is to design systems that are unbiased, treating everyone equally and fairly. Organizations must plan to mitigate risk: that is, how to use data and develop AI products without falling into ethical pitfalls along the way.
There are 11 ethics guidelines for AI to follow:
- Justice and fairness
- Freedom and autonomy
- Transparency
- Non-maleficence
- Responsibility
- Privacy
- Beneficence
- Trust
- Sustainability
- Dignity
- Solidarity
The field is further divided into a few categories: machine ethics, singularity and superintelligent AI.
- Machine ethics (also called machine morality, computational morality or computational ethics) is the part of AI ethics concerned with ensuring the moral behavior of manufactured machines, referred to as AI agents.
- Singularity is a hypothetical point in time when technological growth becomes uncontrollable and irreversible, with potentially profound impacts on human civilization. Through an “intelligence explosion,” a self-improving AI could become so powerful that humans would be unable to stop it from achieving its goals, because it would be capable of independent initiative and of making its own plans.
- Superintelligence is a hypothetical agent whose intelligence surpasses that of the most gifted and brightest human minds. This might lead to machines becoming more powerful than humans and ultimately displacing them.
Bias in AI Systems
AI has become increasingly embedded in voice and facial recognition systems. These have genuine business applications and directly affect people.
These systems are vulnerable to biases and errors introduced, intentionally or unintentionally, by their human creators. Bias can stem from the data used to train an AI model, or from the way the model is built. Any bias or error tied to gender, voice or accent can tangibly and intangibly harm your business.
Threat to Human Dignity
In some cases, because we are human, we need and expect authentic empathy from people in particular roles. If machines replace those people, humans feel alienated and frustrated, because an AI system is incapable of empathy. In this way, AI can represent a threat to human dignity.
Liability for Self-Driving Cars
As the widespread use of autonomous, or driverless, cars becomes increasingly imminent, the new challenges fully autonomous vehicles raise must be addressed to manage legal complications and regulations. There’s much debate about which party bears legal liability if such vehicles get into accidents.
It can bring up a few questions, such as:
- Who was at fault: the driver, the vehicle or its software?
- Who’s responsible for the liability? Owner, car manufacturer or government?
Currently, self-driving cars are considered semi-autonomous. They require the driver to pay attention and take control when necessary. In these cases, therefore, the driver is liable for any accident.
Now, it falls on our federal and/or state governments to regulate drivers who rely on autonomous features. It’s best to educate drivers that these technologies enable more convenient driving but don’t yet permit hands-off driving.
In short, before autonomous cars become widely used globally, these challenges and issues of liability, regulations, training and technology must be addressed through new policies and procedures.
Weaponization of AI
Just like the challenges autonomous cars raise around legal liability for accidents, there are concerns about the use of AI in military robots.
As military robots become more complex and intelligent, our federal government and Department of Defense (DoD) should pay more attention to robots’ ability to make autonomous decisions. Robots may make their own logical decisions about whom to kill or what to destroy, so there must be a defined moral framework, set and followed, that the AI can’t override. AI weapons are arguably more dangerous than human-operated ones.
Actors in AI Ethics
There are many organizations concerned with AI ethics and policy: government, public, corporate and social. Microsoft, Google, Amazon, IBM and Facebook have created a nonprofit partnership (the Partnership on AI) to formulate best practices on AI technologies. The goal is to help shape a common public understanding and serve as a platform for discussion about AI.
Currently, the Institute of Electrical and Electronics Engineers (IEEE, the world’s largest technical professional organization) and governments are setting guidelines on AI ethics. In the U.S., the White House memorandum “Guidance for Regulation of Artificial Intelligence Applications” emphasizes the need to invest in AI applications, build public trust in AI, reduce barriers to AI usage and keep American AI technology competitive in a global market.
Operationalizing Data, AI Ethics
AI ethics doesn’t come in a predefined box, ready to implement. Because values vary by organization across different verticals, a data and AI ethics program must be tailored to the specific business and regulatory needs relevant to each organization and vertical.
Here are some steps to help build an operationalized, customized, scaled and sustained data and AI ethics program.
Find existing infrastructure a data and AI ethics program can use and scale. The key to the successful creation of a data and AI ethics program is using the power and authority of existing infrastructure. This includes a data governance board to govern and manage data privacy, data compliance, data security, cybersecurity and any data-related risks. If your organization lacks a data governance team, it’s time to create one.
Create a data and AI ethical risk framework tailored to your industry. Create a framework suited to your industry and its regulations (for example, HIPAA, GDPR or SOX), including internal and external stakeholders, with the right industry KPIs. Once you identify ethical risks, also create a mitigation plan.
Research and study other organizations in your industry. Learn and adjust your ethics thinking, taking cues from industry successes. For example, healthcare organizations must comply with HIPAA, which protects patient privacy, and with the principle of treating patients only after they grant informed consent.
Optimize guidance and tools for product managers. Create tools that supply both granular and high-level guidance and can evaluate the importance of “explainability” for any given product.
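One way to probe “explainability” at the product level is a simple sensitivity check: perturb each input feature and see how much the model’s output moves. The sketch below uses a hypothetical churn-risk model whose weights are made up for illustration; production tools would use established techniques such as SHAP or LIME instead.

```python
def explain(model, rows):
    """Crude sensitivity probe: zero out each feature in turn and measure
    the average shift in the model's score. Bigger shift = more important."""
    baseline = [model(r) for r in rows]
    importances = {}
    for feature in rows[0]:
        perturbed = [model({**r, feature: 0}) for r in rows]
        importances[feature] = sum(
            abs(b - p) for b, p in zip(baseline, perturbed)) / len(rows)
    return importances

# Hypothetical churn-risk model; the weights are invented for this example.
def churn_risk(r):
    return 0.7 * r["support_tickets"] + 0.1 * r["tenure_years"]

rows = [{"support_tickets": 4, "tenure_years": 2},
        {"support_tickets": 1, "tenure_years": 8}]
scores = explain(churn_risk, rows)  # support_tickets dominates tenure_years
```

A product manager could use a probe like this to ask whether the features driving a score are ones the business can defend explaining to a customer.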
Build organizational awareness. Conduct training sessions, so employees, contractors and anyone who touches your data or AI products understand and adhere to your organization’s data and AI ethics framework.
Formally and informally incentivize employees to play a role in finding AI ethical risks. As we’ve learned from many infamous examples, ethical standards are compromised when people are financially incentivized to act unethically.
Similarly, failing to financially incentivize ethical actions can lead to them being deprioritized.
An organization’s values are expressed partly through how it directs financial resources. When employees don’t see a budget behind scaling and maintaining a strong data and AI ethics program, they will turn their attention to what moves them forward in their careers.
Therefore, rewarding people for their efforts to promote a data ethics program is essential.
Monitor impacts and engage stakeholders. Organizational awareness, ethics committees, and informed product managers, owners, engineers and data collectors are all part of the development and, ideally, procurement processes.
But limited resources, limited time and a general failure to imagine all the ways things can go wrong mean it’s vital to monitor the impacts of data and AI products on the market. Consider: a car can be built with air bags and crumple zones, but that doesn’t mean it’s safe to drive at 100 mph. Similarly, AI products can be ethically developed but unethically deployed.
There’s still both qualitative and quantitative research to be done. This especially requires engaging stakeholders to determine how the product has affected them. Indeed, in an ideal scenario, relevant stakeholders are identified early in the development process and help articulate what the product does and doesn’t do.
Operationalizing data and AI ethics isn’t an easy task. It requires buy-in from senior leadership, along with cross-functional collaboration. Organizations making the investment, however, won’t only see mitigated risk but also more efficient adoption of the technologies needed to move forward.
And finally, they will offer exactly what clients, consumers and employees are seeking: trustworthiness.
If you’re already on your AI journey but wandering, unsure where to go next, or you’re just setting out, it’s best practice to partner with a trusted advisor like System Soft Technologies.
System Soft can help your organization streamline business processes using AI (and robotic process automation). Our intent is for you to hit revenue goals, efficiently use resources and lower operational costs.
[Watch on-demand webinar: How Small and Mid-Sized Businesses Overcome Automation Barriers]
System Soft holds a two- to three-hour workshop with you to examine your situation. We then partner with you to find answers to your questions and strategize how to tackle your issues.
[Watch on-demand webinar: Cutting Through the Hype of Hyperautomation]
About the Author: Rajesh Patil
Rajesh Patil is the Director of Intelligent Automation at System Soft Technologies. Rajesh has more than 20 years of industry experience enabling organizations through extensive transformation journeys, with a focus on process improvement. He’s proficient in robotic process automation (RPA), intelligent automation, E2E sales lifecycle, solution architecture and automation delivery across various verticals.