Governance, risk management and compliance are essential tactics for a successful organization. Effective GRC practices help organizations achieve business objectives, mitigate risks and ensure compliance with laws and regulations. As a chief information officer or IT leader, it is important to evaluate new technologies and determine their impact on the business, including whether they fit within the scope of current GRC programs and processes.
Managing governance and risk requires that operational leaders be accountable for reviewing and complying with operating policies and standards. Artificial intelligence technologies, including generative AI, can be valuable additions to an organization’s technology strategies supporting governance and risk. It is important, however, to carefully evaluate the potential risks and benefits of incorporating AI technologies into the organization and to ensure their use is consistent with overall business objectives and goals.
Regulations and disclosures for generative AI may not yet exist for your industry or location. How can you safely use the technology? And how can your organization adapt to regulations and disclosure requirements once legislation is passed?
Laws for monitoring and regulating AI technologies vary widely. In some U.S. states, AI regulation is part of comprehensive consumer privacy bills mirroring legislation enacted in California, Colorado, Connecticut and other states. As part of consumer privacy, proposed bills would limit the use of AI profiling, establish assessments and controls for automated decision-making and eliminate the use of facial recognition in public locations.
Some states have proposed legislation to safeguard consumers from discrimination caused by AI-powered applications and services. Specific industries mentioned include education, employment, financial services, healthcare, housing, insurance, utilities and voting.
Automated decisions during the hiring process are of particular concern in several states. Proposed bills would require employers to notify applicants when automated decision tools are used to make employment decisions. Other legislation would require bias audits of decision-making tools as well as disclosure of which hiring criteria are processed using algorithms.
Regulations governing the use of AI technology in healthcare are the focus of no fewer than seven U.S. states. Some drafts aim to protect patients from discrimination by automated decision systems, while others would prevent hospitals from adopting policies that override the judgment of nurses in favor of AI recommendation systems. Some states are also assessing the effect of AI-powered applications on mental health treatment.
State agencies are also examining the potential impact that AI technology could have on internal systems and processes. Many states are investigating areas where AI could be beneficial, assessing the need for disclaimers when agencies use virtual assistants and inventorying “high-risk” automated decision systems involved in procurement and implementation.
Generative AI content has become a common topic of discussion, and state governments have proposed legislation to protect against potential harm caused by AI-generated images or videos. Several of the bills refer to this content as “synthetic media” and would require disclosures to audiences or registration of generative AI models to uphold operating standards.
A large group of U.S. states has proposed the formation of task forces and commissions to study the impact of AI technologies on jobs and the economy. Finally, states are looking to protect younger users from targeted advertising and gambling.
Legislation of AI technology is occurring in many areas of the world beyond the United States. The European Parliament recently passed initial legislation on AI regulation: the EU AI Act introduces formal rules for generative AI technologies. Members of Parliament decided to impose stricter requirements on generative AI tools such as ChatGPT, requiring developers to submit systems for review before commercial release. Parliament also reaffirmed its ban on real-time biometric identification and social scoring systems. These measures aim to ensure the responsible development and use of AI technologies.
The U.K. has also shared its plans to become both a center of innovation and a rule-maker for AI technologies. Instead of proposing tech-specific regulations, the British government published a white paper outlining a principles-based approach to the technology. An upcoming AI safety summit is designed to build community interest in this approach.
Global regulators are working to understand and standardize generative AI technology to mitigate potential risks to society, including those related to job security and political integrity. By taking proactive measures, they aim to verify that the development and use of this technology is responsible and beneficial for all.
For organizations yet to incorporate AI technologies into the GRC program, here are a few recommendations for doing so responsibly. Avoiding a fragmented approach is important to achieving operational resilience. Ventana Research asserts that by 2026, one-third of organizations will ensure workforce readiness by operating a unified governance and risk program to guide workforce compliance with policies and standards.
First, conduct a thorough assessment of the potential risks and benefits of incorporating AI technologies into the GRC program. Assess the potential impact on the organization’s business objectives and goals as well as legal or regulatory implications.
Next, develop a strategy for integrating AI technologies into the GRC program, including a road map for implementation and a plan for ongoing monitoring and evaluation. Ensure relevant stakeholders, including senior management, IT staff and workers, are involved in the decision-making process and are fully informed about the potential risks and benefits of incorporating AI technologies into the GRC program. As part of the internal roll-out of generative AI technology, provide training and support to workers so they can use AI technologies effectively. Finally, regularly review and update the organization’s GRC efforts so the program remains effective and up to date with the latest technological developments.
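To make the assessment and ongoing review steps more concrete, here is a minimal sketch of how an inventory of generative AI use cases might be tracked. This is a hypothetical illustration, not a prescribed framework: the use-case names, the 1-to-5 scoring scale, the 90-day review cadence and the prioritization rule are all assumptions made for the example.

```python
# Hypothetical sketch of an AI use-case risk register supporting the
# assess -> integrate -> monitor cycle described above. All names, scores
# and cadences are illustrative assumptions, not a standard.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class AIUseCase:
    name: str            # e.g., a generative AI workflow under evaluation
    owner: str           # accountable operational leader
    benefit_score: int   # 1 (low) to 5 (high) expected business value
    risk_score: int      # 1 (low) to 5 (high) legal/regulatory exposure
    last_review: date = field(default_factory=date.today)

    def review_due(self, cadence_days: int = 90) -> bool:
        """Higher-risk uses are reviewed twice as often as the base cadence."""
        interval = cadence_days // 2 if self.risk_score >= 4 else cadence_days
        return date.today() >= self.last_review + timedelta(days=interval)

def prioritize(register: list[AIUseCase]) -> list[AIUseCase]:
    """Order use cases for GRC attention: highest risk first, then lowest benefit."""
    return sorted(register, key=lambda u: (-u.risk_score, u.benefit_score))

register = [
    AIUseCase("Resume screening assistant", "HR director", benefit_score=3, risk_score=5),
    AIUseCase("Marketing copy drafting", "CMO", benefit_score=4, risk_score=2),
]

for use_case in prioritize(register):
    flag = "REVIEW DUE" if use_case.review_due() else "ok"
    print(f"{use_case.name}: risk={use_case.risk_score} benefit={use_case.benefit_score} [{flag}]")
```

In practice, such a register would live in a GRC platform rather than a script, but the structure maps directly to the steps above: a named accountable owner, an explicit risk-and-benefit assessment for each use case, and a recurring review so the program stays current.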
If AI technologies are already covered within the scope of an existing GRC program, follow this process to understand the impact and risk associated with generative AI.
Government and industry regulations will take time, and organizations will need a great deal of patience. The workforce will likely be clamoring for opportunities to try generative AI in everyday workflows. It may seem straightforward to tell workers they cannot use generative AI until an organizational policy is set, but this could have negative consequences.
Alternatively, we recommend starting a dialogue with the workforce to understand where generative AI could add value and to enlist workers as allies in managing its risks.
For many organizations, AI technologies can be a valuable tool for achieving business objectives. Generative AI has also captured the attention of workers interested in how it will affect their roles. Organizations, industries and governments are working to establish guidelines and regulations to ensure that AI technologies are developed and used responsibly. Organizations can take the initiative by assessing the risks and the processes needed to govern the use of generative AI within existing GRC programs. The assessment should involve the workforce as an ally in identifying and enacting organizational resilience.
Regards,
Jeff Orr
Jeff Orr leads the research and advisory for the CIO and digital technology expertise at ISG Software Research, with a focus on modernization and transformation for IT. Jeff’s coverage spans cloud computing, DevOps and platforms, digital security, intelligent automation, ITOps and service management, and observability technologies across the enterprise.