How companies can create and capture value from generative AI
ARTICLE | September 07, 2023
Authored by RSM US LLP
Successful organizations have a keen sense of where to create and capture value for their customers and employees. Now, with broad adoption of generative artificial intelligence, organizations can enhance not only how they create value but also their methods of capturing it.
Large software businesses are creating platforms that allow companies to build plug-ins using generative AI. Organizations no longer have to start from the ground up by building a large language model (LLM) themselves. Instead, they can quickly tap into existing LLMs, lowering the barrier to entry for AI technology.
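To make the plug-in idea concrete, the sketch below shows how little is involved in tapping an existing LLM: the plug-in only assembles a chat-style request and sends it to a hosted model, rather than training one. The function name, prompts and payload shape here are illustrative assumptions; exact field names vary by provider.

```python
import json

def build_llm_request(system_prompt: str, user_input: str,
                      model: str = "gpt-4") -> dict:
    """Assemble a chat-style request payload for a hosted LLM API.

    This mirrors the common chat-completions format; the plug-in
    consumes an existing model instead of building its own.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_input},
        ],
        "temperature": 0.2,  # low temperature favors consistent answers
    }

# A plug-in only needs to serialize this payload and POST it to the
# provider's endpoint -- no model training required.
request = build_llm_request(
    "You are a helpful assistant for contract questions.",
    "Summarize the payment terms in this contract.",
)
print(json.dumps(request, indent=2))
```

The organization's effort shifts from model building to prompt design, data handling and integration, which is where the talent, culture and team-structure questions discussed below come in.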
But this requires the right talent, culture and team structure. An organization that lacks any of these three elements will find it more difficult to create and capture sustained value.
A lack of qualified talent will hold a company back from using generative AI. Companies can solve this by upskilling existing workers who have an aptitude for and interest in the technology, or by hiring from outside.
But these employees will need to understand what generative AI is and the value that can be created and captured from its use. They also need to be aware of the regulations surrounding its use and have a technical understanding of how to deploy it.
To harness the benefits of generative AI, a company must be willing to change its processes. Its willingness and ability to implement change will determine the value it derives from any new technology.
A key cultural attribute of an organization is its appetite for the risk that comes with adopting new technology.
Companies that have the right talent and culture but lack an effective team structure will find it difficult to create the highest value from generative AI. Even if a company has the willingness and the people to use the technology, new ideas will be siloed across the organization.
Key attributes for a team include the ability to create value quickly, to capture that value across the organization, and to sustain both over time. These four team structures can help organizations use generative AI:
- Tiger team: This is a cross-functional group formed to solve a specific problem or address a critical issue, usually in a high-profile project. Once the project is complete, the members disperse and return to their original roles. A team focused on generative AI would address risk management, adoption strategy, process improvement, IT requirements and business considerations. The goal of this team is to move quickly but stay within the organization’s risk profile.
- Skunkworks team: This group, tasked with working on advanced projects, enjoys a high degree of autonomy unhampered by bureaucracy. Through it, an organization can test and validate ideas that would normally not get approval through existing systems. A team focused on generative AI would create multiple offerings across the organization and then quickly test scenarios.
- Cross-functional team: Though similar to a tiger team in structure, this team is permanent and designed to create value. In the case of generative AI, a cross-functional team would focus on methodically going through the organization to identify talent that can create additional value using the technology.
- Functional team: Focused on capturing sustained value, a functional team would seek to maintain generative AI capabilities on an ongoing basis.
Assessing the risk appetite
Organizations must understand their risk appetite. Generative AI has its drawbacks, after all, and is still a work in progress.
- Risk-averse: An organization in this category should consider whether a generative AI project is appropriate. While the concepts around generative AI are not new, regulations are only now emerging. Risk-averse organizations should focus on internal projects that have the least external exposure. A company’s culture will be a big factor in whether it can capture any value created by generative AI projects. Organizations that are risk-averse tend not to adopt new processes quickly.
- Risk-seeking: Organizations with a higher risk profile are more likely to take on generative AI. They typically are also more willing to adapt and change their processes. Both internal and external generative AI projects may be viable. These organizations should consider projects with the highest potential value.
Establishing a baseline
Before deploying generative AI, an organization must understand what value it is trying to create or capture, as well as the difference between machine learning and generative AI. Traditional machine learning models classify data or predict outcomes; generative AI creates new content, including images, video, music, speech, text and software code, based on input. Understanding an organization’s current capabilities in these areas provides a baseline from which generative AI can make improvements.
Going to the next level requires creativity, which involves adopting new processes but maintaining guardrails.
Organizations should create a set of AI principles that transcend any one project.
The Software Engineering Institute and Microsoft have established key principles that organizations can use as a jumping-off point for adopting AI.
Software Engineering Institute
- Human-centered: “Human-centered AI systems are designed to work with, and for, people. As the desire to use AI systems grows, human-centered engineering principles are critical to guide system development toward effective implementation and minimize unintended consequences.”
- Scalable: “Scalable AI is the ability of AI algorithms, data, models, and infrastructure to operate at the size, speed, and complexity required for the mission. Scalability is a critical concept in many engineering disciplines and is crucial to realizing operational AI capabilities.”
- Robust and secure: “Robust and secure AI systems are AI systems that reliably operate at expected levels of performance, even when faced with uncertainty and in the presence of danger or threat. These systems have built-in structures, mechanisms, or mitigations to prevent, avoid, or provide resilience to dangers from a particular threat model.”
Microsoft
- Fairness: “How might an AI system allocate opportunities, resources, or information in ways that are fair to the humans who use it?”
- Reliability and safety: “How might the system function well for people across different use conditions and contexts, including ones it was not originally intended for?”
- Privacy and security: “How might the system be designed to support privacy and security?”
- Inclusiveness: “How might the system be designed to be inclusive of people of all abilities?”
- Transparency: “How might people misunderstand, misuse, or incorrectly estimate the capabilities of the system?”
- Accountability: “How can we create oversight so that humans can be accountable and in control?”
An organization needs a strong technical framework to optimize implementation of generative AI across the enterprise. Without this framework, it risks inconsistent results, a lack of scalability, and missed opportunities to gain a competitive advantage.
With the continued updates to generative AI, business leaders must find trusted partners and resources to stay current. Organizations like Microsoft, Google and the World Economic Forum provide resources for upskilling workers, as well as guidance on best practices for AI adoption.
Awareness of laws that govern both the creation and use of LLMs is paramount. Organizations should take advantage of partnerships with data privacy companies and consult their legal counsel as they look to roll out LLMs.
This article was written by Seth Bacon and originally appeared on 2023-09-07.
© 2022 RSM US LLP. All rights reserved.
RSM US Alliance provides its members with access to resources of RSM US LLP. RSM US Alliance member firms are separate and independent businesses and legal entities that are responsible for their own acts and omissions, and each is separate and independent from RSM US LLP. RSM US LLP is the U.S. member firm of RSM International, a global network of independent audit, tax, and consulting firms. Members of RSM US Alliance have access to RSM International resources through RSM US LLP but are not member firms of RSM International. Visit rsmus.com/about us for more information regarding RSM US LLP and RSM International. The RSM logo is used under license by RSM US LLP. RSM US Alliance products and services are proprietary to RSM US LLP.
Johnson & Sheldon, PLLC is a proud member of the RSM US Alliance, a premier affiliation of independent accounting and consulting firms in the United States. RSM US Alliance provides our firm with access to resources of RSM US LLP, the leading provider of audit, tax and consulting services focused on the middle market. RSM US LLP is a licensed CPA firm and the U.S. member of RSM International, a global network of independent audit, tax and consulting firms with more than 43,000 people in over 120 countries.
Our membership in RSM US Alliance has elevated our capabilities in the marketplace, helping to differentiate our firm from the competition while allowing us to maintain our independence and entrepreneurial culture. We have access to a valuable peer network of like-sized firms as well as a broad range of tools, expertise and technical resources.