Since its release in November 2022, almost everyone involved with technology has experimented with ChatGPT: students, faculty, and professionals in almost every discipline. Almost every company has undertaken AI projects, including companies that, at least on the face of it, have “no AI” policies. Last August, OpenAI stated that 80% of Fortune 500 companies have ChatGPT accounts. Interest and usage have increased as OpenAI has released more capable versions of its language model: GPT-3.5 led to GPT-4 and multimodal GPT-4V, and OpenAI has announced an Enterprise service with better guarantees for security and privacy. Google’s Bard/Gemini, Anthropic’s Claude, and other models have made similar improvements. AI is everywhere, and even if the initial frenzy around ChatGPT has died down, the big picture hardly changes. If it’s not ChatGPT, it will be something else, possibly something users aren’t even aware of: AI tools embedded in documents, spreadsheets, slide decks, and other tools in which AI fades into the background. AI will become part of almost every job, ranging from manual labor to management.

With that in mind, we need to ask what companies must do to use AI responsibly. Ethical obligations and responsibilities don’t change, and we shouldn’t expect them to. The problem that AI introduces is the scale at which automated systems can cause harm. AI magnifies issues that are easily rectified when they affect a single person. For example, every company makes poor hiring decisions from time to time, but with AI all your hiring decisions can quickly become questionable, as Amazon discovered. The New York Times’ lawsuit against OpenAI isn’t about a single article; if it were, it would hardly be worth the legal fees. It’s about scale, the potential for reproducing their whole archive. O’Reilly Media has built an AI application that uses our authors’ content to answer questions, but we compensate our authors fairly for that use: we won’t ignore our obligations to our authors, either individually or at scale.

It’s essential for companies to come to grips with the scale at which AI works and the effects it creates. What are a corporation’s responsibilities in the age of AI—to its employees, its customers, and its shareholders? The answers to this question will define the next generation of our economy. Introducing new technology like AI doesn’t change a company’s basic responsibilities. However, companies must be careful to continue living up to their responsibilities. Workers fear losing their jobs “to AI,” but also look forward to tools that can eliminate boring, repetitive tasks. Customers fear even worse interactions with customer service, but look forward to new kinds of products. Stockholders anticipate higher profit margins, but fear seeing their investments evaporate if companies can’t adopt AI quickly enough. Does everybody win? How do you balance the hopes against the fears? Many people believe that a corporation’s sole responsibility is to maximize short-term shareholder value with little or no concern for the long term. In that scenario, everybody loses—including stockholders who don’t realize they’re participating in a scam.

How would corporations behave if their goal were to make life better for all of their stakeholders? That question is inherently about scale. Historically, the stakeholders in any company are the stockholders. We need to go beyond that: the employees are also stakeholders, as are the customers, as are the business partners, as are the neighbors, and in the broadest sense, anyone participating in the economy. We need a balanced approach to the entire ecosystem.

O’Reilly tries to operate in a balanced ecosystem with equal weight going toward customers, shareholders, and employees. We’ve made a conscious decision not to manage our company for the good of one group while disregarding the needs of everyone else. From that perspective, we want to dive into how we believe companies need to think about AI adoption and how their implementation of AI needs to work for the benefit of all three constituencies.

Being a Responsible Employer

While the number of jobs lost to AI so far has been small, it’s not zero. Several copywriters have reported being replaced by ChatGPT; one of them eventually had to “accept a position training AI to do her old job.” However, a few copywriters don’t make a trend. So far, the total numbers appear to be small. One report claims that in May 2023, over 80,000 workers were laid off, but only about 4,000 of these layoffs were caused by AI, or 5%. That’s a very partial picture of an economy that added 390,000 jobs during the same period. But before dismissing the fear-mongering, we should wonder whether this is the shape of things to come. 4,000 layoffs could become a much larger number very quickly.

Fear of losing jobs to AI is probably lower in the technology sector than in other business sectors. Programmers have always made tools to make their jobs easier, and GitHub Copilot, the GPT family of models, Google’s Bard, and other language models are tools that they’re already taking advantage of. For the immediate future, productivity improvements are likely to be relatively small: 20% at most. However, that doesn’t negate the fear; and there may well be more fear in other sectors of the economy. Truckers and taxi drivers wonder about autonomous vehicles; writers (including novelists and screenwriters, in addition to marketing copywriters) worry about text generation; customer service personnel worry about chatbots; teachers worry about automated tutors; and managers worry about tools for creating strategies, automating reviews, and much more.

An easy reply to all this fear is “AI is not going to replace humans, but humans with AI are going to replace humans without AI.” We agree with that statement, as far as it goes. But it doesn’t go very far. First, this attitude blames the victim: if you lose your job, it’s your own fault for not learning how to use AI. That’s a gross oversimplification. Second, while most technological changes have created more jobs than they destroyed, that doesn’t mean that there isn’t a time of dislocation, a time when the old professions are dying out but the new ones haven’t yet come into being. We believe that AI will create more jobs than it destroys—but what about that transition period? The World Economic Forum has published a short report that lists the 10 jobs most likely to see a decline, and the 10 most likely to see gains. Suffice it to say that if your job title includes the word “clerk,” things might not look good—but your prospects are looking up if your job title includes the word “engineer” or “analyst.”

The best way for a company to honor its commitment to its employees and to prepare for the future is through education. Most jobs won’t disappear, but all jobs will change. Providing appropriate training to get employees through that change may be a company’s biggest responsibility. Learning how to use AI effectively isn’t as trivial as a few minutes of playing with ChatGPT makes it appear. Developing good prompts is serious work and it requires training. That’s certainly true for technical employees who will be developing applications that use AI systems through an API. It’s also true for non-technical employees who may be trying to find insights from data in a spreadsheet, summarize a group of documents, or write text for a company report. AI needs to be told exactly what to do and, often, how to do it.
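To make that concrete, here is a minimal sketch of what “telling the AI exactly what to do” can look like in code, using the OpenAI Python client; the model name, the report text, and the specific instructions are placeholders, and any chat-style API could be used the same way:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

report_text = "..."  # the document the employee actually wants summarized

# "Summarize this" is rarely enough. Spell out the task, the audience,
# the constraints, and the output format.
response = client.chat.completions.create(
    model="gpt-4",  # placeholder; substitute whatever model your organization uses
    messages=[
        {
            "role": "system",
            "content": "You are a careful assistant. If the source text doesn't "
                       "contain the answer, say so instead of guessing.",
        },
        {
            "role": "user",
            "content": (
                "Summarize the quarterly report below for a non-technical executive audience.\n"
                "Requirements:\n"
                "- No more than five bullet points.\n"
                "- Include exact figures for revenue and headcount if they appear in the text.\n"
                "- Mark any statement you are unsure about with '[verify]'.\n\n"
                f"Report:\n{report_text}"
            ),
        },
    ],
    temperature=0,
)
print(response.choices[0].message.content)
```

The point isn’t the particular wording; it’s that the quality of the output depends on how precisely the task, the constraints, and the expected format are specified.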

One aspect of this change will be verifying that the output of an AI system is correct. Everyone knows that language models make mistakes, often called “hallucinations.” While these mistakes may not be as dramatic as making up case law, AI will make mistakes—errors at the scale of AI—and users will need to know how to check its output without being deceived (or in some cases, bullied) by its overconfident voice. The frequency of errors may go down as AI technology improves, but errors won’t disappear in the foreseeable future. And even with error rates as low as 1%, we’re easily talking about thousands of errors sprinkled randomly through software, press releases, hiring decisions, catalog entries—everything AI touches. In many cases, verifying that an AI has done its work correctly may be as difficult as it would be for a human to do the work in the first place. This process is often called “critical thinking,” but it goes a lot deeper: it requires scrutinizing every fact and every logical inference, even the most self-evident and obvious. There is a methodology that needs to be taught, and it is the employers’ responsibility to ensure that their employees have appropriate training to detect and correct errors.
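Part of that methodology can be built into the workflow itself. The sketch below (reusing the hypothetical client from the previous example) asks a second model call to flag statements in a generated draft that the source material doesn’t support. The checker is itself a language model and can be wrong, so a human still reviews what it flags; it simply makes the scrutiny routine rather than optional:

```python
from openai import OpenAI

client = OpenAI()

def flag_unsupported_claims(source_text: str, draft: str, model: str = "gpt-4") -> str:
    """Ask a model to list claims in `draft` that `source_text` doesn't support.

    This is a screening aid, not a guarantee: the checker can hallucinate too,
    so its output is a starting point for human review, not a verdict.
    """
    prompt = (
        "Compare the DRAFT against the SOURCE.\n"
        "List every statement in the DRAFT that is not directly supported by the SOURCE.\n"
        "If every statement is supported, reply with exactly: NO UNSUPPORTED CLAIMS.\n\n"
        f"SOURCE:\n{source_text}\n\nDRAFT:\n{draft}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the check as repeatable as the API allows
    )
    return response.choices[0].message.content
```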

The responsibility for education isn’t limited to training employees to use AI within their current positions. Companies need to provide education for transitions from jobs that are disappearing to jobs that are growing. Responsible use of AI includes auditing to ensure that its outputs aren’t biased, and that they are appropriate. Customer service personnel can be retrained to test and verify that AI systems are working correctly. Accountants can become auditors responsible for overseeing IT security. That transition is already happening; auditing for the SOC 2 corporate security certification is handled by accountants. Businesses need to invest in training to support transitions like these.

Looking at an even broader context: what are a corporation’s responsibilities to local public education? No company is going to prosper if it can’t hire the people it needs. And while a company can always hire employees who aren’t local, that assumes educational systems across the country are well funded; frequently, they aren’t.

This looks like a “tragedy of the commons”: no single non-governmental organization is responsible for the state of public education, and public education is expensive (it’s usually the biggest line item on any municipal budget), so nobody takes care of it. But that narrative repeats a fundamental misunderstanding of the “commons.” The “tragedy of the commons” narrative was never correct; it is a fiction that achieved prominence as an argument to justify eugenics and other racist policies. Historically, common lands were well managed by law, custom, and voluntary associations. The commons declined when landed gentry and other large landholders abused their rights to the detriment of the small farmers; the commons as such disappeared through enclosure, when the large landholders fenced in and claimed common land as private property. In the context of the 20th and 21st centuries, the landed gentry—now frequently multinational corporations—protect their stock prices by negotiating tax exemptions and abandoning their responsibilities towards their neighbors and their employees.

The economy itself is the biggest commons of all, and nostrums like “the invisible hand of the marketplace” do little to help us understand responsibilities. This is where the modern version of “enclosure” takes place: in minimizing labor cost to maximize short-term value and executive salaries. In a winner-take-all economy where a company’s highest-paid employees can earn over 1000 times as much as the lowest paid, the absence of a commitment to employees leads to poor housing, poor school systems, poor infrastructure, and marginalized local businesses. Quoting a line from Adam Smith that hasn’t entered our set of economic cliches, senior management salaries shouldn’t facilitate “gratification of their own vain and insatiable desires.”

One part of a company’s responsibilities to its employees is paying a fair wage. The consequences of not paying a fair wage, or of taking every opportunity to minimize staff, are far-reaching; they aren’t limited to the people who are directly affected. When employees aren’t paid well, or live in fear of layoffs, they can’t participate in the local economy. There’s a reason that low income areas often don’t have basic services like banks or supermarkets. When people are just subsisting, they can’t afford the services they need to flourish; they live on junk food because they can’t afford a $40 Uber to the supermarket in a more affluent town (to say nothing of the time).  And there’s a reason why it’s difficult for lower-income people to make the transition to the middle class. In very real terms, living is more expensive if you’re poor: long commutes with less reliable transportation, poor access to healthcare, more expensive food, and even higher rents (slum apartments aren’t cheap) make it very difficult to escape poverty. An automobile repair or a doctor’s bill can exhaust the savings of someone who is near the poverty line.

That’s a local problem, but it can compound into a national or worldwide problem. That happens when layoffs become widespread—as happened in the winter and spring of 2023. Although there was little evidence of economic stress, fear of a recession led to widespread layoffs (often sparked by “activist investors” seeking only to maximize short-term stock price), which nearly caused a real recession. The primary driver for this “media recession” was a vicious cycle of layoff news, which encouraged fear, which led to more layoffs. When you see weekly announcements of layoffs in the tens of thousands, it’s easy to follow the trend. And that trend will eventually lead to a downward spiral: people who are unemployed don’t go to restaurants, defer maintenance on cars and houses, spend less on clothing, and economize in many other ways. Eventually, this reduction in economic activity trickles down and causes merchants and other businesses to close or reduce staff.

There are times when layoffs are necessary; O’Reilly has suffered through those. We’re still here as a result. Changes in markets, corporate structure, corporate priorities, skills required, and even strategic errors such as overhiring can all make layoffs necessary. But a layoff should never be an “All of our peers are laying people off, let’s join the party” event; that happened all too often in the technology sector last year. Nor should it be an “our stock price could be higher and the board is cranky” event. A related responsibility is honesty about the company’s economic condition. Few employees will be surprised to hear that their company isn’t meeting its financial goals. But honesty about what everyone already knows might keep key people from leaving when you can least afford it. Employees who haven’t been treated with respect and honesty can’t be expected to show loyalty when there’s a crisis.

Employers are also responsible for healthcare, at least in the US. This is hardly ideal, but it’s not likely to change in the near future. Without insurance, a hospitalization can be a financial disaster, even for a highly compensated employee. So can a cancer diagnosis or any number of chronic diseases. Sick time is another aspect of healthcare—not just for those who are sick, but for those who work in an office. The COVID pandemic is “over” (for a very limited sense of “over”) and many companies are asking their staff to return to offices. But we all know of workplaces where COVID, the flu, or another disease has spread like wildfire because one person didn’t feel well but reported to the office anyway. Companies need to respect their employees’ health by providing health insurance and allowing sick time—both for the employees’ sakes and for everyone they come in contact with at work.

We’ve gone far afield from AI, but for good reasons. A new technology can reveal gaps in corporate responsibility, and help us think about what those responsibilities should be. Compartmentalizing is unhealthy; it’s not helpful to talk about a company’s responsibilities to highly paid engineers developing AI systems without connecting that to responsibilities towards the lowest-paid support staff. If programmers are concerned about being replaced by a generative algorithm, the groundskeepers should certainly worry about being replaced by autonomous lawnmowers.

Given this context, what are a company’s responsibilities towards all of its employees?

Providing training for employees so they remain relevant even as their jobs change
Providing insurance and sick leave so that employees’ livelihoods aren’t threatened by health problems
Paying a livable wage that allows employees and the communities they live in to prosper
Being honest about the company’s finances when layoffs or restructuring are likely
Balancing the company’s responsibilities to employees, customers, investors, and other constituencies

Responsibilities to Business Partners

Generative AI has spawned a swirl of controversy around copyright and intellectual property. Does a company have any obligation towards the creators of content that they use to train their systems? These content creators are business partners, whether or not they have any say in the matter. A company’s legal obligations are currently unclear, and will ultimately be decided in the courts or by legislation. But treating its business partners fairly and responsibly isn’t just a legal matter.

We believe that our talent—authors and teachers—should be paid. As a company that is using AI to generate and deliver content, we are committed to allocating income to authors as their work is used in that content, and paying them appropriately—as we do with all other media. Granted, our use case makes the problem relatively simple. Our systems recommend content, and authors receive income when the content is used. Those systems can answer users’ questions by extracting text from content to which we’ve acquired the rights; when we use AI to generate an answer, we know where that text has come from, and can compensate the original author accordingly. These answers also link to the original source, where users can find more information, again generating income for the author. We don’t treat our authors and teachers as an undifferentiated class whose work we can repurpose at scale and without compensation. They aren’t abstractions who can be dissociated from the products of their labor.

We encourage our authors and teachers to use AI responsibly, and to work with us as we build new kinds of products to serve future generations of learners. We believe that using AI to create new products, while always keeping our responsibilities in mind, will generate more income for our talent pool—and that sticking to “business as usual,” the products that have worked in the past, isn’t to anyone’s advantage. Innovation in any technology, including training, entails risk. The alternative to risk-taking is stagnation. But the risks we take always account for our responsibilities to our partners: to compensate them fairly for their work, and to build a learning platform on which they can prosper. In a future article, we will discuss our AI policies for our authors and our employees in more detail.

The applications we are building are fairly clear-cut, and that clarity makes it fairly easy to establish rules for allocating income to authors. It’s less clear what a company’s responsibilities are when an AI isn’t simply extracting text, but predicting the most likely next token one at a time. It’s important not to sidestep those issues either. It’s certainly conceivable that an AI could generate an introduction to a new programming language, borrowing some of the text from older content and generating new examples and discussions as necessary. Many programmers have already found ChatGPT a useful tool when learning a new language. Such a tutorial could even be generated dynamically, at a user’s request. When an AI model generates text by predicting the next token in the sequence, one token at a time, how do you attribute the output to its sources?
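To see why attribution is genuinely hard here, it helps to look at what token-by-token generation involves. The sketch below, which uses the small open GPT-2 model via the Hugging Face transformers library purely as an illustration, samples one token at a time from a probability distribution; nothing in the loop carries a pointer back to any particular training document:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("A list comprehension is", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids).logits[:, -1, :]          # scores for the next token only
        probs = torch.softmax(logits, dim=-1)               # a distribution over the whole vocabulary
        next_id = torch.multinomial(probs, num_samples=1)   # sample one token; no source is attached
        input_ids = torch.cat([input_ids, next_id], dim=-1)

print(tokenizer.decode(input_ids[0]))
```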

While it’s not yet clear how this will work out in practice, the principle is the same: generative AI doesn’t create new content; it extracts value from existing content, and the creators of that original content deserve compensation. It’s possible that these situations could be managed by careful prompting: for example, a system prompt or a RAG application that controls what sources are used to generate the answer would make attribution easier. Ignoring the issue and letting an AI generate text with no accountability isn’t a responsible solution. In this case, acting responsibly is about what you build as much as it is about who you pay; an ethical company builds systems that allow it to act responsibly. The current generation of models is, essentially, a set of experiments that got out of control. It isn’t surprising that they don’t have all the features they need. But any models and applications built in the future will lack that excuse.
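As a sketch of what that kind of control could look like, the hypothetical retrieval-augmented flow below restricts the model to passages we hold rights to and records which passages (and therefore which authors) contributed to each answer. The corpus, the IDs, and the keyword “retrieval” are stand-ins for a real licensed library and search index:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical licensed corpus: passages we have the rights to use, keyed by author.
LICENSED_PASSAGES = [
    {"id": "bk-101", "author": "Author A", "text": "Generators produce values lazily, one at a time."},
    {"id": "bk-202", "author": "Author B", "text": "A list comprehension builds the entire list in memory."},
]

def retrieve(question: str, k: int = 2) -> list[dict]:
    """Naive keyword scoring, standing in for a real vector or full-text search."""
    words = question.lower().split()
    scored = sorted(
        LICENSED_PASSAGES,
        key=lambda p: sum(word in p["text"].lower() for word in words),
        reverse=True,
    )
    return scored[:k]

def answer_with_attribution(question: str) -> dict:
    passages = retrieve(question)
    context = "\n\n".join(f"[{p['id']}] {p['text']}" for p in passages)
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer using ONLY the provided passages. Cite passage IDs in brackets."},
            {"role": "user", "content": f"Passages:\n{context}\n\nQuestion: {question}"},
        ],
    )
    # The recorded passage IDs identify whose work was used, so income can be allocated.
    return {
        "answer": response.choices[0].message.content,
        "sources": [{"id": p["id"], "author": p["author"]} for p in passages],
    }
```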

Many other kinds of business partners will be affected by the use of AI: suppliers, wholesalers, retailers, contractors of many types. Some of these effects will result from their own use of AI; some won’t. But the principles of fairness and compensation where compensation is due remain the same. A company should not use AI to justify short-changing its business partners.

A company’s responsibilities to its business partners thus include:

Compensating business partners for all use of their content, including AI-repurposed content.
Building applications that use AI to serve future generations of users.
Encouraging partners to use AI responsibly in the products they develop.

Responsibilities to Customers

We all think we know what customers want: better products at lower prices, sometimes at prices that are below what’s reasonable. But that doesn’t take customers seriously. The first of O’Reilly Media’s operating principles is about customers—as are the next four. If a company wants to take its customers seriously, particularly in the context of AI-based products, what responsibilities should it be thinking about?

Every customer must be treated with respect. Treating customers with respect starts with sales and customer service, two areas where AI is increasingly important. It’s important to build AI systems that aren’t abusive, even in subtle ways—even though human agents can also be abusive. But the responsibility extends much farther. Is a recommendation engine recommending appropriate products? We’ve certainly heard of Black women who only get recommendations for hair care products that White women use. We’ve also heard of Black men who see advertisements for bail bondsmen whenever they make any kind of a search. Is an AI system biased with respect to race, gender, or almost anything else? We don’t want real estate systems that re-implement redlining where minorities are only shown properties in ghetto areas. Will a resume screening system treat women and racial minorities fairly? Concern for bias goes even farther: it is possible for AI systems to develop bias against almost anything, including factors that it wouldn’t occur to humans to think about. Would we even know if an AI developed a bias against left-handed people?
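Answering that question takes deliberate auditing rather than intuition. As a minimal sketch, assuming a hypothetical log of an AI screening system’s decisions with made-up column names, the following compares selection rates across groups, a basic demographic-parity check whose gaps are a signal for human investigation rather than proof of bias:

```python
import pandas as pd

# Hypothetical decision log: one row per applicant, with the group attribute
# being audited and the AI system's yes/no outcome.
decisions = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B", "A", "B", "A"],
    "selected": [1,   0,   0,   0,   1,   1,   0,   1],
})

# Selection rate per group: a basic demographic-parity check.
rates = decisions.groupby("group")["selected"].mean()
print(rates)

# A large gap doesn't prove bias on its own, but it tells a human auditor
# where to look before the system's outputs are trusted at scale.
print("selection-rate gap:", rates.max() - rates.min())
```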

We’ve known for a long time that machine learning systems can’t be perfect. The tendency of the latest AI systems to hallucinate has only rubbed our faces in that fact. Although techniques like RAG can minimize errors, it is probably impossible to prevent them altogether, at least with the current generation of language models. What does that mean for our customers? They aren’t paying us for incorrect information at scale; at the same time, if they want AI-enhanced services, we can’t guarantee that all of AI’s results will be correct. Our responsibilities to customers for AI-driven products are threefold. We need to be honest that errors will occur; we need to use techniques that minimize the probability of errors; and we need to present (or be prepared to present) alternatives so they can use their judgement about which answers are appropriate to their situation.
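One hedged way to “present alternatives” is to return several independently sampled answers rather than a single confident one, each labeled as unverified. A minimal sketch, again using the OpenAI chat API with a placeholder model name:

```python
from openai import OpenAI

client = OpenAI()

def candidate_answers(question: str, n: int = 3) -> list[str]:
    """Return several independently sampled answers instead of one "truth".

    Presenting alternatives, clearly labeled as unverified, lets customers
    apply their own judgement instead of trusting a single confident voice.
    """
    response = client.chat.completions.create(
        model="gpt-4",    # placeholder model name
        messages=[{"role": "user", "content": question}],
        n=n,              # request n completions of the same prompt
        temperature=0.7,  # enough variation that the candidates actually differ
    )
    return [choice.message.content for choice in response.choices]

for i, answer in enumerate(candidate_answers("Which index type suits this query pattern?"), 1):
    print(f"Candidate {i} (unverified, check before acting):\n{answer}\n")
```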

Respect for a customer includes respecting their privacy, an area in which online businesses are notably deficient. Any transaction involves a lot of data, ranging from data that’s essential to the transaction (what was bought, what was the price) to data that seems inconsequential but can still be collected and sold: browsing data obtained through cookies and tracking pixels is very valuable, and even arcana like keystroke timings can be collected and used to identify customers. Do you have the customer’s permission to sell the data that their transactions throw off? At least in the US, the laws on what you can do with data are porous and vary from state to state; because of GDPR, the situation in Europe is much clearer. But ethical and legal aren’t the same; “legal” is a minimum standard that many companies fail to meet. “Ethical” is about your own standards and principles for treating others responsibly and equitably. It is better to establish good principles that deal with your customers honestly and fairly than to wait for legislation to tell you what to do, or to think that fines are just another expense of doing business. Does a company use data in ways that respect the customer? Would a customer be horrified to find out, after the fact, where their data has been sold? Would a customer be equally horrified to find that their conversations with AI have been leaked to other users?

Every customer wants quality, but quality doesn’t mean the same thing to everyone. A customer on the edge of poverty might want durability, rather than expensive fine fabrics—though the same customer might, on a different purchase, object to being pushed away from the more fashionable products they want. How does a company respect the customer’s wishes in a way that isn’t condescending and delivers a product that’s useful? Respecting the customer means focusing on what matters to them; and that’s true whether the agent working with the customer is a human or an AI. The kind of sensitivity required is difficult for humans and may be impossible for machines, but it is no less essential. Achieving the right balance probably requires a careful collaboration between humans and AI.

A business is also responsible for making decisions that are explainable. That issue doesn’t arise with human systems; if you are denied a loan, the bank can usually tell you why. (Whether the answer is honest may be another issue.) This isn’t true of AI, where explainability is still an active area for research. Some models are inherently explainable—for example, simple decision trees. There are model-agnostic explainability algorithms, such as LIME, that don’t depend on the underlying model. Explainability for transformer-based AI (which includes just about all generative AI algorithms) is next to impossible. If explainability is a requirement—which is the case for almost anything involving money—it may be best to stay away from systems like ChatGPT. These systems make more sense in applications where explainability and correctness aren’t issues. Regardless of explainability, companies should audit the outputs of AI systems to ensure that they’re fair and unbiased.
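For models where explainability is feasible, the tooling is reasonably mature. As a sketch, assuming the lime and scikit-learn packages and using a public dataset as a stand-in for something like a lending model, LIME can show which features pushed an individual prediction and by how much:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer  # pip install lime

# A stand-in for a loan or screening model: any classifier with predict_proba.
data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain a single decision: which features pushed the prediction, and how hard.
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```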

The ability to explain a decision means little if it isn’t coupled with the ability to correct decisions. Respecting the customer means having a plan for redress. “The computer did it” was never a good excuse, and it’s even less acceptable now, especially since it’s widely known that AI systems of all types (not just natural language systems) generate errors. If an AI system improperly denies a loan, is it possible for a human to approve the loan anyway? Humans and AI need to learn how to work together—and AI should never be an excuse.

Given this context, what are a company’s responsibilities to its customers? These responsibilities can be summed up with one word: respect. But respect is a very broad term; it includes:

Treating customers the way they would want to be treated.
Respecting customers’ privacy.
Understanding what the customer wants.
Explaining decisions as needed.
Providing a means to correct errors.

Responsibilities to Shareholders

It’s long been a cliche that a company’s primary responsibility is to maximize shareholder value. That’s a good pretext for arguing that a company has the right—no, the duty—to abuse employees, customers, and other stakeholders—particularly if the shareholder’s “value” is limited to the short-term. The idea that shareholder value is enshrined in law (either legislation or case law) is apocryphal. It appeared in the 1960s and 1970s, and was propagated by Milton Friedman and the Chicago school of economics.

Companies certainly have obligations to their shareholders, one of which is that shareholders deserve a return on their investment. But we need to ask whether this means short-term or long-term return. Finance in the US has fixated on short-term return, but that obsession is harmful to all of the stakeholders—except for executives who are often compensated in stock. When short-term returns cause a company to compromise the quality of its products, customers suffer. When short-term returns cause a company to lay off staff, the staff suffers, including those who stay: they are likely to be overworked and to fear further layoffs. Employees who fear losing their jobs, or are currently looking for new jobs, are likely to do a poor job of serving customers. Layoffs for strictly short-term financial gain are a vicious cycle for the company, too: they lead to missed schedules, missed goals, and further layoffs. All of these lead to a loss of credibility and poor long-term value. Indeed, one possible explanation for Boeing’s problems with the 737 Max and the 787 is a shift from an engineering-dominated culture that focused on building the best product to a financial culture that focused on maximizing short-term profitability. If that theory is correct, the results of the cultural change are all too obvious and present a significant threat to the company’s future.

What would a company that is truly responsible to its stakeholders look like, and how can AI be used to achieve that goal? We don’t have the right metrics; stock price, whether short- or long-term, isn’t the right one. But we can think about what a corporation’s goals really are. O’Reilly Media’s operating principles start with the question “Is it best for the customer?” and continue with “Start with the customer’s point of view. It’s about them, not us.” Customer focus is a part of a company’s culture, and it’s antithetical to a focus on short-term returns. That doesn’t mean that customer focus sacrifices returns, but that maximizing stock price leads to ways of thinking that aren’t in the customers’ interests. Closing a deal whether or not the product is right takes priority over doing right by the customer. We’ve all seen that happen; at one time or another, we’ve all been victims of it.

There are many opportunities for AI to play a role in serving customers’ interests—and, in turn, serving shareholders’ interests. First, what does a customer want? Henry Ford probably didn’t say that customers want faster horses, but that remains an interesting observation. It’s certainly true that customers often don’t know what they really want, or if they do, can’t articulate it. Steve Jobs may have said that “our job is to figure out what they want before they do”; according to some stories, he lurked in the bushes outside Apple’s Palo Alto store to watch customers’ reactions. Jobs’ secret weapon was intuition and imagination about what might be possible. Could AI help humans to discover what traditional customer research, such as focus groups (which Jobs hated), is bound to miss? Could an AI system with access to customer data (possibly including videos of customers trying out prototypes) help humans develop the same kind of intuition that Steve Jobs had? That kind of engagement between humans and AI goes beyond AI’s current capabilities, but it’s what we’re looking for. If a key to serving the customers’ interests is listening—really listening, not just recording—can AI be an aid without also becoming creepy and intrusive? Products that really serve customers’ needs create long-term value for all of the stakeholders.

This is only one way in which AI can serve to drive long-term success and to help a business deliver on its responsibilities to stockholders and other stakeholders. The key, again, is collaboration between humans and AI, not using AI as a pretext for minimizing headcount or shortchanging product quality.

It should go without saying, but in today’s business climate it doesn’t: one of a company’s responsibilities is to remain in business. Self-preservation at all costs is abusive, but a company that doesn’t survive isn’t doing its investors’ portfolios any favors. The US Chamber of Commerce, giving advice to small businesses, asks: “Have you created a dynamic environment that can quickly and effectively respond to market changes? If the answer is ‘no’ or ‘kind of,’ it’s time to get to work.” Right now, that advice means engaging with AI and deciding how to use it effectively and ethically. AI changes the market itself; but more than that, it is a tool for spotting changes early and thinking about strategies to respond to change. Again, it’s an area where success will require collaboration between humans and machines.

Given this context, a company’s responsibilities to its shareholders include:

Focusing on long-term rather than short-term returns.
Building an organization that can respond to changes.
Developing products that serve customers’ real needs.
Enabling effective collaboration between humans and AI systems.

It’s About Honesty and Respect

A company has many stakeholders—not just the stockholders, and certainly not just the executives. These stakeholders form a complex ecosystem. Corporate ethics is about treating all of these stakeholders, including employees and customers, responsibly, honestly, and with respect. It’s about balancing the needs of each group so that all can prosper, about taking a long-term view that realizes that a company can’t survive if it is only focused on short-term returns for stockholders. That has been a trap for many of the 20th century’s greatest companies, and it’s unfortunate that we see many technology companies traveling the same path. A company that builds products that aren’t fit for the market isn’t going to survive; a company that doesn’t respect its workforce will have trouble retaining good talent; and a company that doesn’t respect its business partners (in our case, authors, trainers, and partner publishers on our platform) will soon find itself without partners.

Our corporate values demand that we do something better, that we keep the needs of all these constituencies in mind and in balance as we move our business forward. These values have nothing to do with AI, but that’s not surprising. AI creates ethical challenges, especially around the scale at which it can cause trouble when it is used inappropriately. However, it would be surprising if AI actually changed what we mean by honesty or respect. It would be surprising if the idea of behaving responsibly changed suddenly because AI became part of the equation.

Acting responsibly toward your employees, customers, business partners, and stockholders: that’s the core of corporate ethics, with or without AI.
