The relentless hype surrounding generative AI in the past few months has been accompanied by equally loud anguish over the supposed perils — just look at the open letter calling for a pause in AI experiments. This tumult risks blinding us to more immediate risks — think sustainability and bias — and clouds our ability to appreciate the real value of these systems: not as generalist chatbots, but instead as a class of tools that can be applied to niche domains and offer novel ways of finding and exploring highly specific information.

This shouldn’t come as a surprise. The news that a dozen companies have developed ChatGPT plugins is a clear demonstration of the likely direction of travel. A “generalized” chatbot won’t do everything for you, but if you’re, say, Expedia, being able to offer customers a simple way to organize their travel plans is undeniably going to give you an edge in a marketplace where information discovery is so important.

Whether this really amounts to an “iPhone moment” or a serious threat to Google search isn’t obvious at present. While it will likely push a change in user behaviors and expectations, the first shift will be organizations building tools on top of large language models (LLMs) that learn from their own data and services.

And this, ultimately, is the key — the significance and value of generative AI today is not really a question of societal or industry-wide transformation. It’s instead a question of how this technology can open up new ways of interacting with large and unwieldy amounts of data and information.


OpenAI is clearly attuned to this fact and senses a commercial opportunity: although the list of organizations taking part in the ChatGPT plugin initiative is small, OpenAI has opened up a waiting list where companies can sign up to gain access to the plugins. In the months to come, we will no doubt see many new products and interfaces backed by OpenAI’s generative AI systems.

While it’s easy to fall into the trap of seeing OpenAI as the sole gatekeeper of this technology — and ChatGPT as the go-to generative AI tool — this fortunately is far from the case. You don’t need to sign up on a waiting list or have vast amounts of cash available to hand over to Sam Altman; instead, it’s possible to self-host LLMs.

This is something we’re starting to see at Thoughtworks. In the latest volume of the Technology Radar — our opinionated guide to the techniques, platforms, languages and tools being used across the industry today — we’ve identified a number of interrelated tools and practices that indicate the future of generative AI is niche and specialized, contrary to what much mainstream conversation would have you believe.

Unfortunately, we don’t think this is something many business and technology leaders have yet recognized. The industry’s focus has been fixed on OpenAI, which means the emerging ecosystem of tools beyond it — exemplified by open-source projects like GPT-J and GPT-Neo — and the more DIY approach they can facilitate have so far been somewhat neglected. This is a shame, because these options offer real benefits. For example, a self-hosted LLM sidesteps the very real privacy issues that can come from sending data to an OpenAI product. In other words, if you want to deploy an LLM to your own enterprise data, you can do precisely that yourself; the data doesn’t need to go elsewhere. Given both industry and public concerns with privacy and data management, being cautious rather than being seduced by the marketing efforts of big tech is eminently sensible.


A related trend we’ve seen is domain-specific language models. Although these are only just beginning to emerge, fine-tuning publicly available, general-purpose LLMs on your own data could form a foundation for developing incredibly useful information retrieval tools. These could be applied, for example, to product information, content, or internal documentation. In the months to come, we think you’ll see more examples of these being used to do things like helping customer support staff and enabling content creators to experiment more freely and productively.

If generative AI does become more domain-specific, the question of what this actually means for humans remains. However, I’d suggest that this view of the medium-term future of AI is a lot less threatening and frightening than many of today’s doom-mongering visions. By better bridging the gap between generative AI and more specific and niche datasets, over time people should build a subtly different relationship with the technology. It will lose its mystique as something that ostensibly knows everything, and it will instead become embedded in our context.

Indeed, this isn’t that novel. GitHub Copilot is a great example of AI being used by software developers in very specific contexts to solve problems. Despite being billed as “your AI pair programmer,” what it does isn’t really “pairing” — it’s much better described as a supercharged, context-sensitive Stack Overflow.

As an example, one of my colleagues uses Copilot not to do work but as a means of support as he explores a new programming language — it helps him to understand the syntax or structure of a language in a way that makes sense in the context of his existing knowledge and experience.


We will know that generative AI is succeeding when we stop noticing it and the pronouncements about what it might do die down. In fact, we should be willing to accept that its success might look quite prosaic. That shouldn’t matter: once we’ve realized it doesn’t know everything — and never will — that is when it will start to become really useful.

Provided by Thoughtworks

This content was produced by Thoughtworks. It was not written by MIT Technology Review’s editorial staff.
